We’re publishing a new constitution for our AI model, Claude. It’s a detailed description of Anthropic’s vision for Claude’s values and behavior; a holistic document that explains the context in which Claude operates and the kind of entity we would like Claude to be.
The constitution is a crucial part of our model training process, and its content directly shapes Claude’s behavior. Training models is a difficult task, and Claude’s outputs might not always adhere to the constitution’s ideals. But we think that the way the new constitution is written—with a thorough explanation of our intentions and the reasons behind them—makes it more likely to cultivate good values during training.
In this post, we describe what we’ve included in the new constitution and some of the considerations that informed our approach.
Claude’s nature. In this section, we express our uncertainty about whether Claude might have some kind of consciousness or moral status (either now or in the future). We discuss how we hope Claude will approach questions about its nature, identity, and place in the world.
Fucking clowns insinuating that an LLM can be conscious so that clueless investors pump in more cash to burn.
In some instances, they’ve just said it outright.
When users confronted [Anthropic executive] Clinton with their concerns, he brushed them off, said he would not submit to mob rule, and explained that AIs have emotions and that tech firms were working to create a new form of sentience, according to Discord logs and conversations with members of the group.
why would investors want that? it would result in the loss of all investment, as it would now be an entity with rights.
FOMO - buy into world-changing technology or be left out completely. It doesn't have to be very well thought out when the media and politicians create an atmosphere of this being inevitable.