Anthropic's AI fight with the Pentagon
After refusing to let its AI be used for autonomous weapons and mass surveillance, Anthropic has seen a surge in app downloads and paid subscriptions.
ANTHROPIC, an artificial intelligence company valued at $380 billion, can take its pick of Silicon Valley talent thanks to the success of its chatbot Claude. But last month, the start-up sought advice from a group rarely consulted in tech circles: Christian religious leaders.
The company hosted about 15 Christian leaders from Catholic and Protestant churches, academia and the business world at its headquarters in late March for a two-day summit that included discussion sessions and a private dinner with senior Anthropic researchers, according to four participants who spoke with The Washington Post.
Anthropic staff sought guidance on how to steer Claude’s moral and spiritual development as the chatbot responds to complex and unpredictable ethical questions, participants said. The discussions also explored how the chatbot should respond to users grieving loved ones, and whether Claude could be considered a “child of God.”
“They’re growing something that they don’t fully know what it’s going to turn out as,” said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology and participated in the discussions. “We’ve got to build ethical thinking into the machine so it’s able to adapt dynamically.”
Attendees also discussed how Claude should engage with users at risk of self-harm, and what attitude it should take toward its own possible shutdown, according to one participant who spoke on the condition of anonymity.
The summit comes as the rapid spread of AI puts Silicon Valley companies under pressure to account for the societal impact of their technology. Concerns about job losses have grown as businesses adopt AI tools. OpenAI and Google have been sued by families of people who died by suicide after intense interactions with chatbots. Both companies say they have safeguards for vulnerable users. (The Washington Post has a content partnership with OpenAI.)
Anthropic has been more vocal than many tech firms about the risks of advanced AI. Its leaders have suggested that chatbots may raise profound moral and philosophical questions, and could even show early signs of consciousness — a fringe idea in tech circles that critics say lacks evidence.
The summit signals Anthropic’s willingness to engage ideas outside Silicon Valley orthodoxy, even as it becomes one of the most influential players in the AI race, with Claude widely used by programmers, businesses, government agencies and the military.
“A year ago, I would not have told you that Anthropic is a company that cares about religious ethics,” said Meghan Sullivan, a philosophy professor at the University of Notre Dame who attended the meetings. “That’s changed.”
A spokesperson for Anthropic said the company believes it is important to engage with different groups, including religious communities, to help shape AI as it becomes more consequential. The firm said it is working to include more voices in that process.
Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders often speak about the need to give it a moral character.
The company uses a 29,000-word “constitution” to guide Claude’s behaviour and apparent personality, written by in-house philosopher Amanda Askell and employees in consultation with outside experts. It states that Claude should “never deceive users in ways that could cause real harm” and that “Anthropic genuinely cares about Claude’s wellbeing.”
Anthropic’s effort to encode principles into its model has become a point of tension in its dispute with the U.S. military over defence contracts. The company clashed with defence officials after suggesting it should be able to limit use of its technology for autonomous weapons or mass surveillance.
The Pentagon’s under secretary for research and engineering, Emil Michael, said in an interview on CNBC last month that Claude’s design could undermine U.S. forces. “We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul … pollute the supply chain so our warfighters are getting ineffective weapons,” Michael said.
The Trump administration has blocked government departments and contractors from using Anthropic’s technology. The company has challenged the decision in court, and a judge last week allowed the block to remain while the case continues.
Anthropic’s March summit with Christian leaders was the first in a planned series of gatherings with religious and philosophical traditions, said attendee Brian Patrick Green, a Catholic who teaches AI ethics at Santa Clara University.
“What does it mean to give someone moral formation? How do we make sure that Claude behaves itself?” Green said. At one point, participants discussed whether an AI chatbot could be called a “child of God,” a phrase that would imply spiritual worth beyond that of a mere machine. However, questions of AI sentience were not central to the meetings, he said.
Most discussion time was spent with Anthropic’s interpretability team, which studies how its systems work internally, the anonymous participant said.
Researchers from that team recently published a technical paper suggesting systems like Claude may exhibit “functional emotions.” In one experiment, restricting an AI assistant triggered signs of “desperation,” according to the paper.
Some Anthropic staff at the meeting “really don’t want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty,” the participant said. Others did not find that framing useful.
The discussions were at times emotionally difficult for senior Anthropic staff, who appeared visibly affected by reflections on the implications of their work, the participant added.
The belief that AI has achieved sentience remains a minority view in Silicon Valley. But many researchers believe future systems may develop capabilities once considered uniquely human.
For now, AI developers are still refining ways to control systems that remain unpredictable. Methods used to prevent harmful or incorrect outputs are still imperfect.
Some Christian attendees initially wondered whether the summit was aimed at building political allies, Green said. Anthropic has faced criticism from allies of President Donald Trump, who accuse it of supporting regulations that could disadvantage smaller firms, and the company has also clashed with the administration over military applications.
All four participants who spoke with The Post said they believed Anthropic was sincerely seeking outside perspectives to make its AI more beneficial to society.
Some of the company’s leaders are associated with effective altruism, a largely secular movement focused on using evidence and reason to maximise good outcomes. One participant said the summit appeared driven partly by a belief that secular frameworks alone may not fully address the moral and spiritual questions raised by AI.
“I found the folks at Anthropic to be very sincere and interested in learning from us,” said Green. “Do they have blind spots? Yes. That’s exactly why they want us there.”