Ethics in the Age of AI: Webinar Recap

Apr 20, 2023

In March, we hosted Seth Berman and Noah Feldman, experts in AI ethics and co-founders of Ethical Compass Advisors, a firm that helps companies identify and navigate moral quandaries.

We’re recapping their thoughts on the ways generative AI can be leveraged across key industries, what it can do well, and where the potential pitfalls might be. They also discussed the regulatory environment and what businesses can do proactively to prevent issues and mitigate risks.

Interested in more than a recap? You can watch the full recording anytime.

What do we mean when we talk about AI?

Not everyone agrees on a common definition of AI, so what exactly are we talking about here? This webinar focused on “large language models of artificial intelligence,” as Noah Feldman explained it, which simulate humans in their conversations. You’ll probably recognize them from your own experimentation or from the news, where they’ve featured heavily since the release of OpenAI’s ChatGPT.

Why is AI such a big deal?

Conversations around AI are complex. The primary insight behind this webinar is that people are not yet prepared to interact with a human-like entity that isn’t actually human. The risk, Feldman says, is that “we ascribe to it the intentions of a person, the morality of a person, the feelings of a person.”

The AI we interact with today does not have those features.

Humans can make situational judgments based on common sense or understood norms, but we’re not there yet for these large language models. That doesn’t mean AI won’t eventually be taught to operate from a moral compass, but that’s not our current reality. Because of this, companies have to think seriously about the applications of AI, how they’re using it, and what kinds of judgments they’re allowing AI to make.

With all this in mind, how can businesses leverage AI while avoiding potential pitfalls? That’s the focus of this conversation.

Use cases

Our panelists discussed several different sectors of business, with application examples, possible consequences, and takeaways for the use of AI moving forward.

E-commerce and DTC

The eternal problem of e-commerce is finding the right product among many options. With a more encyclopedic knowledge than any human, AI has the potential to be invaluable when it comes to suggesting products to a consumer.

The issue comes when AI gets too good “at convincing people to buy things, so good that if it were a person doing it, we might call it fraud,” explains Seth Berman.

Consciously or not, people have a point at which they’ll stop, limited by an internal morality or a set of boundaries. We’re not willing to go to jail or lose our jobs for a sale. AI doesn’t have that limitation, Berman says. “They're going to do whatever they're programmed for, and if it's not in their limiting behavior to [stop], they're just going to do it.” If your business uses AI, and that AI behaves in a way that feels like fraud, customers—and probably regulators—are going to react as if you committed fraud.

“Companies will be held responsible for what AIs that they are deploying do,” says Feldman. “It's the same phenomenon [of ascribing humanity], thinking that there's a person behind there, and if there's no actual person, they'll find a person. And that person may be you.” Even if you didn’t ask the AI to commit fraud, the consequences will likely be your responsibility.

Social media

Ethical Compass Advisors has advised many social media companies; in their experience, content moderation is a primary concern across the industry.

“If you're worried about [content]... whether it's misinformation or radicalization or bullying, you have to worry about all of those things in the context of large language models.” Feldman says history shows “the public and eventually the legislature start to hold the [host] responsible for what's said on the site, even if it's generated by users rather than by the company itself.”

Changes to the market for user-generated content present another concern. As AI produces content in response to existing content, it creates the possibility of “regularized feedback loops in which humans are not the primary participants,” according to Feldman. There’s tremendous uncertainty around the implications for content and interaction-driven algorithms, and ultimately, how people will interact with social media platforms in the future.

B2B SaaS

Large language models are excellent at generating basic software. This presents an opportunity for teams that don’t have extensive programming resources, but it’ll also have consequences throughout the industry as norms around software, modification, and intellectual property shift.

At the everyday level, though, we return to the issue of AI-driven feedback loops. Currently, as Feldman explained, you have “iterative interactions between what are usually human beings and other human beings who are going to use a product for a range of different purposes. As those interactions start to have an AI on one side of them, they raise questions: is the conversation proceeding the way it should? Are the actual aligned goals of the person deploying the AI working?”

If something goes wrong, we return again to attribution. The interactions AI has at the customer service level, at the sales level, etc., will be attributed to your company—to you. Intense quality control will be absolutely essential.

How can companies mitigate risk when using AI?

Self-assess

Be incredibly clear on why and how your company will engage with AI. How do you make judgments, what touchpoints do you use, and where are you vulnerable? How does AI fit in with or change those processes? Finally, how will you communicate about your use of AI, both internally and externally?

Preparing in advance and looking around corners will go a long way toward helping your team anticipate and mitigate risks.

Operate with a standard system

“The most important thing,” Berman says, “is that the system needs to be regularized… not only in terms of the structure, who in the company is responsible for this, but also how are they going to make decisions? What are the questions they're asking?”

A standard system gives you the ability to explain decisions in hindsight or make strategic adjustments as new information comes to light.

What if something goes wrong?

Ethical Compass Advisors guides clients through a few main steps when something goes wrong. “It’s not about covering yourself from a PR perspective,” they explain. “It’s about convincing people you’re going to do better next time.”

  1. Maintain transparency about what went wrong. Feldman advised, “No company is too small to have a story go public. Transparency upfront is a tremendous advantage.”
  2. Don’t blame the AI. Instead, admit to employing AI when you shouldn’t have, or to being too lax on quality control. Explain what happened and how it was caused by AI, but don’t attempt to duck responsibility.
  3. Explain what you’re going to do differently in the future—and why. Give reasons for your conduct, explain what went wrong, and what will change moving forward.
  4. Give life to your principles. Connect the changes you’re making to your company values, offering up a set of principles you’re willing to be measured against.

People are cynical, especially when it comes to corporate actors. It will take time, Feldman says, but “you can rebuild trust and actually achieve a substantial amount of legitimacy… [people] will believe what they see demonstrated over time. There's no shortcut.”

Learn more

Eager for more detailed answers? Interested in our panelists’ thoughts on current and future regulations around AI?

This webinar is packed with great insights, real-life examples, and more. You can watch the full recording anytime.