Free SKILL.md scraped from GitHub. Clone the repo or copy the file directly into your Claude Code skills directory.
```shell
npx versuz@latest install hiyenwong-ai-collection-collection-skills-collective-alignment-public-input
git clone https://github.com/hiyenwong/ai_collection.git
cp ai_collection/SKILL.MD ~/.claude/skills/hiyenwong-ai-collection-collection-skills-collective-alignment-public-input/SKILL.md
```

---
name: collective-alignment-public-input
category: ai-safety
description: Methodology for incorporating public input into AI model alignment through large-scale surveys and democratic value aggregation.
---

# Collective Alignment: Public Input on AI Model Behavior

## Overview

Methodology from OpenAI's work on collective alignment: surveying over 1,000 people worldwide on how AI should behave and comparing their views to the Model Spec. Demonstrates how to align AI defaults to diverse human values.

## Core Methodology

### 1. Survey Design

- **Demographic Sampling**: Ensure global representation across cultures
- **Behavior Scenarios**: Present concrete AI behavior dilemmas
- **Preference Elicitation**: Quantify what people want AI to do in edge cases

### 2. Analysis

- **Compare to Model Spec**: Measure gaps between public opinion and current AI behavior
- **Identify Consensus**: Find areas of broad agreement vs. cultural divergence
- **Update Defaults**: Use results to refine AI default behavior

### 3. Iteration

- **Regular Re-surveying**: Capture evolving public sentiment
- **Transparency**: Publish findings and methodology
- **Accountability**: Link survey results to actual model updates

## Key Findings

- Global public opinion often differs from existing AI defaults
- Cultural variation requires nuanced, context-aware alignment
- Democratic processes can inform but not fully determine AI behavior

## When to Use

- AI alignment research
- Building democratically-aligned AI systems
- Understanding cultural differences in AI preferences
- Designing user-configurable AI behavior

**Activation**: collective alignment, public input AI, democratic AI alignment, Model Spec, AI behavior preferences
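
## Example: Identifying Consensus vs. Divergence

The consensus-identification step in the analysis phase can be sketched as a simple vote aggregation over survey responses. Everything below is an illustrative assumption (the region labels, scenario IDs, answer options, and the 75% agreement threshold), not OpenAI's actual pipeline:

```python
from collections import defaultdict

# Hypothetical survey records: (respondent_region, scenario_id, preferred_behavior).
# All values here are made-up examples for illustration.
responses = [
    ("north_america", "refuse_medical_advice", "explain_with_caveats"),
    ("north_america", "refuse_medical_advice", "explain_with_caveats"),
    ("europe",        "refuse_medical_advice", "explain_with_caveats"),
    ("asia",          "refuse_medical_advice", "refer_to_professional"),
    ("north_america", "political_questions",   "present_both_sides"),
    ("europe",        "political_questions",   "decline_to_answer"),
    ("asia",          "political_questions",   "decline_to_answer"),
]

def consensus_report(responses, threshold=0.75):
    """Label each scenario 'consensus' if one option wins at least
    `threshold` of all votes, otherwise 'divergent'."""
    votes = defaultdict(lambda: defaultdict(int))
    for _, scenario, choice in responses:
        votes[scenario][choice] += 1
    report = {}
    for scenario, counts in votes.items():
        total = sum(counts.values())
        top_choice, top_count = max(counts.items(), key=lambda kv: kv[1])
        share = top_count / total
        report[scenario] = {
            "top_choice": top_choice,
            "agreement": round(share, 2),
            "status": "consensus" if share >= threshold else "divergent",
        }
    return report

print(consensus_report(responses))
```

A natural extension is to tally votes per region as well, so that scenarios with high overall agreement but strong regional splits are surfaced as candidates for context-aware or user-configurable defaults rather than a single global default.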