We’re publishing a new constitution for our AI model, Claude. It’s a detailed description of Anthropic’s vision for Claude’s values and behavior: a holistic document that explains the context in which Claude operates and the kind of entity we would like Claude to be.
The constitution is a crucial part of our model training process, and its content directly shapes Claude’s behavior. Training models is a difficult task, and Claude’s outputs might not always adhere to the constitution’s ideals. But we think that the way the new constitution is written—with a thorough explanation of our intentions and the reasons behind them—makes it more likely to cultivate good values during training.
In this post, we describe what we’ve included in the new constitution and some of the considerations that informed our approach.


Fucking clowns insinuating that an LLM can be conscious so that clueless investors pump more cash to burn.
In some instances, they’ve just said it outright.
Why would investors want that? It would result in the loss of all investment, as it is now an entity with rights.
FOMO: buy into world-changing technology or be left out completely. It doesn’t have to be very well thought out when media and politicians create an atmosphere of this being inevitable.