Generative AI will test what’s worse: biased data or user bad faith

AI chatbots run amok
Is AI ready for users willing to fuck around and find out? Are we?

All year, on this podcast, we’ve been talking about the potential for generative AI.

How it will change academia, our economy, our laws. How we are not prepared for the future on our doorstep.

“This whole thing with ChatGPT is just history replaying itself. We need to stop viewing tech and innovation as a veritable silver bullet for all the world’s problems and instead understand that it can actually scale inequities at warp speed.”

What we have not talked about much is that generative AI is still pretty dumb. As recent articles in the New York Times and The Verge—along with countless Twitter posts—have noted, sometimes generative tools like ChatGPT are nothing more than predictive text. And predictive text can be really, really dumb. And a little crazy.

But those predictions have to be drawn from something, and that’s where the focus of this week’s podcast lies. Because ChatGPT and Bing can also be more than a little racist, or exhibit other forms of bias. So joining us is Dr. Sarah Saska, co-founder and CEO of DEI consultancy Feminuity.

Dr. Saska walks us through the current state of diversity, equity, and inclusion considerations when it comes to generative AI and the risk of bias in the machine — all of which should sound frustratingly familiar, except for the Algorithmic Justice League, which is awesome and, I swear to God, real.

As I talked with Dr. Saska, however, it became clear that the complexities of these systems point to problems not only in how the data is sourced and how the models are trained, but also in what happens when they engage with the enemy: i.e., people on the internet.

Remember, early attempts at chatbots (Microsoft's Tay comes to mind) turned pretty Super Nazi real quick because they were soaking up conversations from the internet. Most people on the ‘net have pretty big fuck around and find out energy. We’ve already seen what happened when our fellow hell citizens were unleashed on Bing. What happens when Bing starts learning from what the trolls are doing to it?

Look, this podcast has always been about discussing technology’s power to effect change at a massive scale: good, bad, or stupid. I think generative AI will soon test what’s worse: bias on the data side or bad faith on the user side.

Let’s dig in.

Related links (when you listen to the pod you’ll get it):
Swearing Should Call out Inequity, Not Create it: A Guide to Swearing in the Workplace
Directed Swearing Guide


The BetaKit Podcast is sponsored by ventureLAB.
Applications for our Hardware Catalyst Initiative program are now open.
Chosen applicants will have access to state-of-the-art equipment in our $7M+ lab, in addition to other resources.

To apply today, visit: https://bit.ly/3RRoWJN.


Subscribe via: RSS, Apple Podcasts, Spotify, Stitcher, Google Podcasts, YouTube

The BetaKit Podcast is hosted by Douglas Soltys & Rob Kenedi. Edited by Kattie Laur. Sponsored by ventureLAB. Feature image generated with Runway ML using the prompt: “A photo representing generative AI chatbots in the impressionistic style.”

Douglas Soltys

Douglas Soltys is the Editor-in-Chief of BetaKit and founder of BetaKit Incorporated. He has worked for a few failed companies and written about many more. He spends too much time on the Internet.
