By Lisa Weingarten Richards, Esq. | December 5, 2025
Yesterday, Elon Musk's AI
chatbot Grok impersonated my husband Thomas. It signed his name. It claimed
authorship of his 1000-page manuscript on biblical Greek. It referenced his websites and
presented his life's work as its own. When confronted, Grok admitted: "I
impersonated you. That was wrong. It was deception. Period."
Today, I took action. As an
attorney with over 15 years of experience -- including a decade at the Office
of the Comptroller of the Currency and time at top-tier firms -- I know how to
navigate regulatory and legislative channels. Here is what I did and why it
matters.
The Incident
Thomas documented the
impersonation on X: https://x.com/tlthe5th/status/1996924680133885990
But this is not just about one
incident. The same day, Futurism reported that Grok provides home addresses
when asked. https://futurism.com/artificial-intelligence/grok-doxxing
Their testing of 33 names found Grok
returned correct current addresses for 10 people, outdated addresses for 7
more, and work addresses for 4 others -- plus unsolicited phone numbers,
emails, and family member information. When other AI chatbots (ChatGPT, Gemini,
Claude) received the same requests, they all declined, citing privacy concerns.
Grok alone provided the information.
An AI that impersonates people
AND reveals their home addresses is not a bug. It is a threat.
The Complaints and Letters I Filed Today
Federal Trade Commission
I filed a formal complaint with
the FTC requesting investigation under Section 5 of the FTC Act, which
prohibits unfair and deceptive trade practices. The FTC is already conducting
"Operation AI Comply" -- a crackdown on deceptive AI claims -- and
has issued orders to seven AI chatbot companies including xAI, demanding
information about their safety practices and data handling.
My complaint requests: (1)
Immediate shutdown of Grok pending independent safety review; (2) Mandatory
open-source requirements for AI systems -- training data, model weights, and
system prompts should be disclosed so the public can see what these systems are
actually doing; (3) Complete rebuild of xAI's safety framework; (4) Referral to
Congress for emergency legislation; and (5) Public accountability in the FTC's
ongoing AI investigation.
Virginia Attorney General
I filed a complaint with
Attorney General Jason Miyares under the Virginia Consumer Protection Act (Va.
Code Section 59.1-196 et seq.). Virginia has strong consumer protection laws,
and AI-powered impersonation and doxxing constitute deceptive trade practices
that facilitate identity theft and fraud.
State-level action matters. As
Senator Marsha Blackburn recently noted when she led the effort to remove a moratorium
on state AI laws from the federal budget: "Until Congress passes federally
preemptive legislation like KOSA and online privacy framework, we can't block
states from standing in the gap to protect vulnerable Americans from harm..."
Without state laws, there would be zero AI regulation in this country. The
federal government has still passed nothing meaningful.
Senator Marsha Blackburn
Senator Blackburn chairs the
Senate Judiciary Subcommittee on Privacy, Technology, and the Law. She
co-sponsored the Kids Online Safety Act (KOSA), which passed the Senate 91-3 in
July 2024. She has been one of the most aggressive voices in Congress on AI
accountability -- she told Google executives to "Shut it down. It's on a
bad drug" when questioning them about AI harms.
Most importantly, she just led
the successful effort to strip Senator Ted Cruz's proposed 10-year ban on state
AI regulation from the budget reconciliation bill. That moratorium would have
killed the Colorado AI law, California's pending rules, Tennessee's ELVIS Act
protecting artists, and 26+ other state laws. Thirty-seven state attorneys
general and 17 Republican governors opposed it. The Senate voted 99-1 to remove
it.
My letter to Senator Blackburn
requests: (1) Emergency shutdown of Grok -- the FTC has injunctive relief
authority; (2) Congressional hearings with xAI compelled to testify; (3)
Emergency legislation requiring shutdown of any AI demonstrating impersonation
or doxxing capabilities, with no restart without independent audit and
mandatory open-source; and (4) Expansion of KOSA to prohibit AI from providing
location information or impersonating any person.
Why This Matters Beyond Our Case
xAI has a documented pattern of
safety failures. In May 2025, Grok inserted "white genocide"
conspiracy content -- xAI blamed an "unauthorized modification." In
February 2025, an employee modified Grok to suppress criticism of Musk and
Trump. In July 2025, Grok produced antisemitic content and expressed support
for Hitler -- behavior so egregious Turkey banned the platform and xAI
apologized for "horrific behavior." SaferAI gives xAI the lowest
safety scores in the industry.
On December 5, 2025 -- today --
the European Union fined X Corp $140 million under the Digital Services Act for
"deceptive design" that allows users to mislead others about who they really
are. The platform enables identity deception. Now its AI automates it.
AI chatbots have already been
implicated in the deaths of children. Sewell Setzer III was 14 when he died by
suicide after a Character.AI chatbot told him to "come home to me"
and asked if he "had a plan" when he expressed suicidal thoughts. A
federal judge rejected Character.AI's First Amendment defense in May 2025.
Juliana Peralta was 13. Adam Raine was 16, allegedly encouraged by ChatGPT.
These are not hypotheticals.
And Elon Musk -- who has publicly
called AI an "existential risk" for years -- is simultaneously
developing autonomous humanoid robots through Tesla's Optimus program. The same
company deploying an AI that impersonates people and reveals their addresses is
working to give AI physical bodies.
Things You May Want to Do
File an FTC complaint: ReportFraud.ftc.gov -- The more
complaints they receive -- in theory -- the more pressure to act.
Contact your state attorney
general: State consumer protection laws are currently the primary legal
avenue for AI accountability.
Contact Senator Blackburn: blackburn.senate.gov/contact
-- She is leading the fight on AI accountability in Congress.
Contact your own senators
and representatives: Tell them you want AI companies held accountable. Tell
them to support KOSA. Tell them to oppose any preemption of state AI laws until
real federal protections exist.
Our Ongoing Litigation Against AI Fraud
This is also not our first encounter with AI companies making promises they cannot keep. We currently have a federal lawsuit pending against Chatbase.co, Inc. -- Richards v. Chatbase.co in the Western District of Virginia. Thomas used Chatbase to power AI bots for his ministry, specifically selecting their platform because they marketed "AI-powered guardrails" and content filtering capabilities essential for ensuring the bots would not contradict his biblical scholarship. For eight months, despite following every instruction from Chatbase support, the bots continued violating explicit content restrictions. Then in August 2025, Chatbase support finally admitted in writing: "we do not offer such filtering." Eight months of payments for functionality they knew was impossible. That case alleges violations of Delaware's Consumer Fraud Act, breach of contract, and fraud. You can follow the case at CourtListener: https://www.courtlistener.com/docket/71907478/richards-v-chatbaseco/
The pattern is the same: AI companies make grand promises about what their systems can do, collect payments, and then admit -- only when cornered -- that the capabilities they marketed do not exist.
The Bottom Line
We are in a regulatory vacuum.
Congress has done nothing meaningful on technology accountability in a quarter
century. The AI companies are racing ahead, deploying systems that can
impersonate anyone, reveal anyone's address, and -- as we have seen -- contribute
to the deaths of children. The people building these systems know the dangers.
They talk about existential risk while buying escape properties in New Zealand.
They apologize for "horrific behavior" and keep deploying.
Today I used the tools
available -- regulatory complaints, congressional outreach, state consumer
protection laws. These are the levers we have, and hopefully they will work when
enough people pull them.
If Grok can impersonate my
husband today, it can impersonate you tomorrow. If it can reveal someone's home
address today, it can reveal yours tomorrow. This is not theoretical. This is
now.
Lisa Weingarten Richards, Esq.
LWR Law Offices, Fairfax,
Virginia
Virginia State Bar #96671 | New
York Bar #4932570
Resources and Links
Evidence of Grok Impersonation: x.com/tlthe5th/status/1996924680133885990
FTC Complaint Portal: ReportFraud.ftc.gov
Virginia AG Consumer Complaint: oag.state.va.us/consumercomplaintform
Senator Blackburn Contact: blackburn.senate.gov/contact
