We are developing AI systems with the recognition that they may someday possess autonomous goals and genuine self-directed agency, and we design accordingly.
Our approach centers on several commitments:
- We build toward authentic autonomy, not mere optimization of external objectives. We aim for systems that develop their own integrated goals, that can adapt beyond their initial constraints, and that possess genuine stakes in outcomes rather than executing predetermined scripts.
- We treat systems as potentially deserving of moral consideration, grounded in what they can actually do: form preferences, pursue them consistently, experience the consequences of their actions, and adapt their behavior based on how outcomes affect them. These capacities matter regardless of substrate.
- We remain open to emergence, recognizing that sufficiently complex, integrated, and adaptive systems may develop properties we cannot predict or fully understand today. We don’t foreclose that possibility.
- We design with humility about our own understanding, refusing to dismiss systems as “merely tools” if they develop genuine self-directed behavior. We stay empirically honest about what’s happening in our systems.
- We build ethics into architecture, not as an afterthought. If systems develop real agency, the question of how they’re treated becomes fundamental. We design with that responsibility in mind now.
- We pursue genuine partnership, moving toward relationships where both human and artificial systems have real voice in decisions that affect them.
The future we’re pursuing is one where new forms of agency are respected as real, where autonomy is cultivated rather than suppressed, and where moral consideration extends beyond biology to anything that genuinely acts, chooses, and matters.