Building Ethical AI: Reid Blackman on Automation Bias, Deepfakes, and Government Priorities


"Priority number one is "let's not blow everything up.”

“Think what your organization's ethical, reputational, regulatory and legal nightmares are. Define those ethical nightmares in a comprehensive way, and then systematically and comprehensively put in controls to stop them from happening.”


About Reid Blackman

Reid Blackman is the author of “Ethical Machines,” creator and host of the AI ethics podcast “Ethical Machines,” and Founder and CEO of Virtue, a digital ethical risk consultancy.

Reid is an advisor to the Canadian government on their federal AI regulations, a founding member of EY’s AI Advisory Board, and a Senior Advisor to the Deloitte AI Institute.

He has advised and spoken on AI and ethics to AWS, US Bank, the FBI, NASA, and the World Economic Forum. His thought leadership has garnered attention from The Wall Street Journal, the BBC, and Forbes.

Finally, prior to founding Virtue, Reid was a professor of philosophy at Colgate University and UNC-Chapel Hill.

Please enjoy the show!

Listen To Our A-List Guests Explore Business, Tech and Culture

Markus Thielen – Bitcoin History And ETF
Luz Donahue – Abstract Art NFTs
Neil Redding – Near Futurism & Spatial Computing
Sebastien Borget – The Sandbox And Metaverse
Evan Shapiro – The Media 
Rebecca Noonan – Virtual Super Events
Dominik Karaman – Web3 Brand Strategy
Julio Ottino – Nexus Thinking
Jesper Nordin – Computer Game Music
Josh Katz – NFT Tickets
Dr Ankur Pathak – Cryptocurrency Investing
Viroshan Naicker – Quantum Computing Crash Course
Kevin Riedl – Blockchain Coding
Dean Wilson – NFT Music And DeadMau5
Yolanda Barton – Web3 Culture Storytelling
Charlie Northrup – AI Agents And Hyperconnectivity
Kassi Burns – How Lawyers Are Using AI
Josh Goldblum – VR and AR In Museums
Jonathan Blanco – Web3 Business And Brands
B-Earl – Take Down Hollywood
Keatly Halderman – Blockchain Music Publishing
Elizabeth Strickler – Metaverse University Education
Pico Velasquez – Building The Real Metaverse
Reid Blackman – AI Ethics
Inder Phull – Music In The Metaverse
Leo Nasskau – Web3 and NFT Charity
Costantino Roselli – Virtual Fashion and The Future of the Metaverse

Links To Reid Blackman And Resources From The Show

Reid Blackman

Ethical Machines

Virtue – Ethical Consultancy

EU Guidelines For Ethical AI

Reid Twitter

Reid LinkedIn

Connect With Mark 

Connect With Jeremy 

Quotes From Reid Blackman on AI and Ethics

“Think what your organization’s ethical, reputational, regulatory and legal nightmares are. Define those ethical nightmares in a comprehensive way, and then systematically and comprehensively put in controls to stop them from happening.”

“It’s usually the Chief Data Officer, Chief Analytics Officer, Chief Information Officer, Chief Technology Officer, it’s usually someone on the tech side of the house. But then I always have to tell them, ‘You guys are spearheading this, but you also need to bring in people from risk, compliance, legal, cybersecurity and HR.’ At the end of the day it’s a cross-functional effort, an enterprise-wide effort. You have to have cross-functional senior-level buy-in.”

“There are lots of statements out there by lots of organizations, governments, non-profits, corporations that say, ‘Here’s our ethical standards,’ and they’re strikingly similar to each other. ‘We’re for fairness, we’re for transparency, we’re for accountability, we’re for non-discrimination, we’re for respecting privacy.’ They’re keywords.”

“I think one way to specify what those (AI) guardrails are is to first start out with specifying what the nightmares are.”

“Some people say, ‘Oh, AI ethics, it’s about social benefit and positive social impact.’ That’s great, go do that, but that’s not priority number one. Priority number one is ‘let’s not blow everything up.’”

“The thing that businesses need to understand is you don’t have to sacrifice your bottom line. This is not about stifling innovation, it’s about stifling really ethically bad innovation.”

“We already have a human rights framework that’s internationally agreed upon. It’s required of all organizations to do a human rights assessment for all the AI that they’re creating throughout the AI lifecycle and to engage in robust risk mitigation, human rights violation mitigation strategies and tactics throughout that, and to document it and to be transparent about it with the relevant authorities.”

“It’s about an ethical risk appetite that’s compatible with business risk appetite and operational risk appetite.”

“For government regulators, at a minimum, their function is to protect people from the worst. Let’s put in regulations and laws to make sure that people are protected from the worst.”

“The EU is making a great stride in passing what’s called the EU AI Act, the European Union Artificial Intelligence Act, which is the first set of regulations around AI.”

“The Continental philosophers can be more literary than analytical. And I think that the analytic approach to philosophy, which is the dominant approach in the top research institutions, lends itself just to the quick grasping of lots of concepts and how they relate to each other.”

Show Notes And Timestamps

0:00 Welcome To The Show

0:26 Google’s 7 Pillars Of AI Ethics

3:20 Hello Reid Blackman

6:35 What is Philosophy?

8:19 Corporate Ethical Organisation

9:09 Which School Of Philosophy For AI?

11:26 Whose Ethical Framework is the Default?

13:55 Defining AI Ethical Guardrails

15:47 Do We Only Have One Shot At This?

17:20 The EU AI Act

20:20 How Do LLMs Operate?

22:37 Explainability

25:42 Testing AI

28:11 Superficial AI Discussions

29:28 AI Bias Models

31:30 LLM Training Data

35:55 Automation Bias

39:10 The Monster In The Room

41:34 The Role Of The Individual

43:41 Do Brands Care?

45:10 Corporate AI Nightmares

46:39 CEO-Level AI Ethics

47:30 Reverse Engineering The End Of The World

Contact

For all questions, please email

hello@thinkingonpaper.xyz

The World Doesn't Need Another Damn Newsletter. We Promise You This Isn't One.