Executives Using AI Avatars Face a Patchwork of Regulations

Highlights

The CEOs of Klarna and Zoom used AI avatars in recent earnings calls, signaling a shift toward AI-driven business communication.

Regulation of AI avatars is fragmented: the U.S. lacks specific federal laws and relies on FTC oversight and state impersonation laws, while the EU mandates disclosure under its new AI Act.

Security and trust risks are rising, as AI avatars normalize deepfake content in high-stakes business settings, creating new vulnerabilities for fraud and misrepresentation.

The recent spate of executives and companies using AI avatars to interact with investors and clients raises a question: What are the regulatory implications of using one’s digital doppelganger for business?


Last week, Klarna CEO Sebastian Siemiatkowski used his AI avatar to deliver the company’s first-quarter earnings highlights. He disclosed that the speaker was his avatar, not him in person, in a move meant to emphasize that his payments company is artificial intelligence (AI)-first.

Two days later, Zoom CEO Eric Yuan did the same on his company’s earnings call. Like Siemiatkowski, he disclosed that it was his AI avatar speaking and added this caveat: “We take AI-generated content seriously and have built in strong safeguards to prevent misuse.” Yuan was showcasing Zoom’s AI avatar creation service for clients.

In mid-May, the Financial Times reported that UBS sent AI avatars of its analysts to clients, working with OpenAI and Synthesia to deliver AI-generated scripts and digital twins. The bank embarked on the initiative in response to client demand for video versions of its research. UBS began experimenting with AI avatars in 2018, creating one for its regional CIO Daniel Kalt.

Let’s not forget Nvidia CEO Jensen Huang, whose AI avatar has discussed product launches in press releases and at company conferences as far back as 2021.

“The use of AI avatars by companies like Klarna, Zoom and UBS is the latest evolution in the broader enterprise shift toward generative AI interfaces,” Brian Jackson, principal research director at Info-Tech Research Group, told PYMNTS.

“What began as conversational chatbots integrated into software from Microsoft 365 to Salesforce has now extended into something far more personal: digital doubles that can represent humans in real time. This trend represents a logical next step in user experience innovation,” Jackson said.

But there are risks to using AI avatars, Jackson said, citing hallucinations that convey inaccurate information and an inability to handle real-time nuance in high-stakes conversations with investors, media and clients.

“In low-stakes, repetitive communication environments, such as routine internal meetings, they may offer significant value,” Jackson said. “But when it comes to public-facing representation, they should be treated as an extension of personal or corporate brand and scrutinized with the same care.”

There’s also the potential for heightened fraud. Generative AI has made fraud more challenging to battle, according to a PYMNTS Intelligence report, “Rising Risk: Confronting Modern AP Fraud Threats.” AI-generated deepfakes and impersonations have become top threats in recent years for accounts payable (AP) teams, the report said.

Advanced cybersecurity technologies are available to help AP teams, but most AP departments are still using manual anti-fraud procedures, resulting in ineffective fraud detection and prevention, the report said.

Read more: Rising Risk: Confronting Modern AP Fraud Threats

Patchwork of Regulations

Existing regulations are a patchwork and offer little robust protection against deepfakes. The Securities and Exchange Commission has no rules specifically governing AI avatars, but the agency does require companies to disclose how AI is used, according to law firm Baker Donelson.

The Federal Trade Commission investigates deceptive practices involving AI avatars, and President Trump signed into law the “Take It Down Act,” which criminalizes unauthorized distribution of sexually explicit deepfakes. Meanwhile, most states have general criminal impersonation laws that might apply to all types of media, according to the National Conference of State Legislatures.

As for the EU AI Act, those who generate AI images, audio or video that resemble people, objects, places, entities or events must clearly disclose that the content was artificially generated or manipulated, according to a blog post by law firm Schjodt, which observed: “This disclosure requirement appears relatively lenient, given the damaging potential of deepfakes.”

Taras Tymoshchuk, founder of software firm Geniusee, told PYMNTS that in many countries, the use of AI avatars in financial reports or negotiations with clients “does not have a clear legal basis. If the avatar says something incorrect or inaccurate, who will be responsible?”

Moreover, “If the client finds out that he communicated with an avatar, and not with a real person, without prior warning, it can look like a fraud. In B2B, where trust is critical, this is especially dangerous,” Tymoshchuk said.

Ben Colman, CEO of Reality Defender, takes a cybersecurity view of AI avatars being used by executives to interact with investors and clients.

“While AI avatars might seem like innovative efficiency tools, they’re essentially normalizing deepfake content in critical business communications — the exact vulnerability that bad actors exploit for deepfake fraud and other like attacks,” Colman told PYMNTS.

“When companies use AI avatars for executive communications, they’re essentially training their stakeholders that synthetic people are trustworthy and creating an opening for fraudsters to impersonate real executives. So, while it’s a neat parlor trick, it also amplifies risk.”

Read more: From Faked Invoices to Faked Executives, GenAI Has Transformed Fraud

Read more: Too Many Business Meetings? Zoom’s AI Assistant Will Go for You

Photos: AI avatar of Klarna CEO Sebastian Siemiatkowski. Credit: PYMNTS screenshot of Klarna livestream. Inset: AI avatar of Zoom CEO Eric Yuan. Credit: PYMNTS screenshot of Zoom livestream.