---
name: human-ai-teaming-a-review-and-outlook
description: Framework for structuring human-AI teams — formulation, coordination, maintenance, and training — grounded in Team Situation Awareness theory
---
# Human-AI Teaming: A Review and Outlook
## Description
A comprehensive review of human-AI teaming that traces the evolution of AI agents from passive tools to active collaborators. Grounded in Team Situation Awareness theory, it identifies critical research gaps and proposes a structured framework covering the formulation, coordination, maintenance, and training of human-AI teams.
**Key Topics:**
- AI agents as active collaborators vs passive tools
- Shared mental models and trust-building
- Delegation strategies and responsibility distribution
- Team Situation Awareness theory
## Tools Used
- read: Load team configurations and protocols
- write: Document team strategies
- exec: Execute team coordination tasks
- browser: Access collaboration tools
- memory_search: Retrieve team patterns
## Instructions for Agents
### Key Research Gaps
1. **Value Alignment** - Difficulty aligning AI with human values
2. **Capability Utilization** - Underutilizing AI as team members
### Framework Components
1. **Formulation** - Team design and role assignment
2. **Coordination** - Interaction protocols and delegation
3. **Maintenance** - Trust-building and conflict resolution
4. **Training** - Skill adaptation and learning
## Overview
**Source:** arXiv:2504.05755v2
**Utility:** 0.93
**Scope:** Comprehensive review + research agenda
## Activation Keywords
- human AI teaming
- human AI collaboration
- AI team member
- human AI coordination
- shared mental models
---
## Evolution of AI in Teams
### From Tools to Teammates
| Stage | AI Role | Characteristics |
|-------|---------|-----------------|
| Tool | Passive | Responds to commands |
| Assistant | Semi-active | Proactive suggestions |
| Collaborator | Active | Autonomous operation |
| Teammate | Full partner | Shared responsibility |
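As an illustrative sketch (the enum and helper names are assumptions, not from the paper), the stages in the table can be modeled as an ordered enum so code can reason about how much agency an AI role carries:

```python
from enum import IntEnum

class AIRole(IntEnum):
    """Stages from the table above; a higher value means more agency."""
    TOOL = 1          # Passive: responds to commands
    ASSISTANT = 2     # Semi-active: proactive suggestions
    COLLABORATOR = 3  # Active: autonomous operation
    TEAMMATE = 4      # Full partner: shared responsibility

def shares_responsibility(role: AIRole) -> bool:
    """Only a full teammate shares responsibility for outcomes."""
    return role >= AIRole.TEAMMATE
```

Ordering the stages lets a team designer compare roles directly, e.g. `shares_responsibility(AIRole.ASSISTANT)` is `False` while `shares_responsibility(AIRole.TEAMMATE)` is `True`.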
### Paradigm Shift
```
Traditional: Human → Tool → Output
Modern: Human ⟷ AI Teammate → Shared Outcome
```
---
## Team Situation Awareness Framework
### Four Key Aspects
```python
class HumanAITeamingFramework:
    """Composes the four phases; each class is defined in the sections below."""
    def __init__(self):
        self.formulation = TeamFormulation()    # Team design
        self.coordination = TeamCoordination()  # Interaction
        self.maintenance = TeamMaintenance()    # Trust/conflict
        self.training = TeamTraining()          # Skill adaptation
```
### 1. Formulation
```python
class TeamFormulation:
    def design_team(self, task_requirements):
        # Determine team composition
        team = {
            'humans': self.identify_human_roles(task_requirements),
            'ai_agents': self.identify_ai_roles(task_requirements),
            'protocols': self.define_interaction_protocols()
        }
        # Assign responsibilities
        self.distribute_responsibilities(team)
        return team
```
### 2. Coordination
```python
class TeamCoordination:
    def __init__(self):
        self.delegation_strategies = []
        self.interaction_protocols = []

    def delegate_task(self, task, team):
        # Determine best agent for task
        best_agent = self.evaluate_capabilities(task, team)
        # Set up monitoring
        self.setup_monitoring(best_agent, task)
        return best_agent.assign(task)
```
### 3. Maintenance
```python
class TeamMaintenance:
    def build_trust(self, human, ai_agent):
        # Demonstrate reliability
        ai_agent.prove_reliability()
        # Enable transparency
        ai_agent.explain_decisions()
        # Allow override
        ai_agent.accept_human_override()

    def resolve_conflict(self, conflict):
        # Identify conflict type
        if conflict.type == 'value_mismatch':
            return self.align_values(conflict)
        elif conflict.type == 'disagreement':
            return self.mediate(conflict)
        # Unrecognized conflict types default to human review
        return self.escalate_to_human(conflict)
```
### 4. Training
```python
class TeamTraining:
    def adapt_skills(self, team, feedback):
        # Human learns AI capabilities
        self.train_human(team.humans, team.ai_agents)
        # AI adapts to human preferences
        self.adapt_ai(team.ai_agents, feedback)
        # Develop shared mental models
        self.build_shared_models(team)
```
---
## Shared Mental Models
### Types of Mental Models
| Type | Description | Importance |
|------|-------------|------------|
| Task Model | Understanding the task | High |
| Team Model | Understanding teammates | Critical |
| Process Model | How work gets done | High |
### Building Shared Models
```python
def build_shared_mental_models(human, ai_agent):
    # Share capabilities
    human.learn_ai_capabilities(ai_agent.capabilities)
    ai_agent.learn_human_preferences(human.preferences)
    # Align goals
    shared_goals = negotiate_goals(human.goals, ai_agent.objectives)
    # Establish communication protocols
    protocols = establish_communication_patterns()
    return SharedMentalModel(goals=shared_goals, protocols=protocols)
```
---
## Trust Building
### Trust Dimensions
| Dimension | Human Need | AI Behavior |
|-----------|------------|-------------|
| Competence | Can it do the job? | Performance reliability |
| Integrity | Will it act ethically? | Consistent values |
| Benevolence | Does it care? | Human-centric decisions |
### Trust Calibration
```python
class TrustCalibration:
    def calibrate(self, human_trust, ai_reliability):
        # Avoid overtrust
        if human_trust > ai_reliability:
            self.demonstrate_limits()
        # Avoid undertrust
        elif human_trust < ai_reliability:
            self.demonstrate_capabilities()
        # Well-calibrated trust tracks actual reliability
        return ai_reliability
```
---
## Delegation Strategies
### Delegation Framework
```python
HIGH = 0.8  # illustrative threshold for risk and capability scores

class DelegationFramework:
    def should_delegate(self, task, ai_agent):
        # Assess task characteristics
        task_assessment = {
            'complexity': task.complexity,
            'time_pressure': task.deadline,
            'risk': task.risk_level,
            'ai_capability': ai_agent.skill_match(task)
        }
        # Decision criteria
        if task_assessment['risk'] > HIGH:
            return False  # Human decides
        if task_assessment['ai_capability'] > HIGH:
            return True  # AI can handle
        return self.negotiate(task, ai_agent)
```
### Levels of Autonomy
| Level | Description | Human Role |
|-------|-------------|------------|
| 1 | Human executes | Supervisor |
| 2 | AI suggests | Approver |
| 3 | AI executes, human monitors | Monitor |
| 4 | AI autonomous, reports | Informed |
| 5 | Full AI autonomy | Unaware |
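The autonomy ladder above can be encoded as a simple lookup; a minimal sketch using the level numbers and role names from the table (the helper functions are illustrative assumptions):

```python
# Level -> (description, human role), taken from the table above.
AUTONOMY_LEVELS = {
    1: ("Human executes", "Supervisor"),
    2: ("AI suggests", "Approver"),
    3: ("AI executes, human monitors", "Monitor"),
    4: ("AI autonomous, reports", "Informed"),
    5: ("Full AI autonomy", "Unaware"),
}

def requires_human_approval(level: int) -> bool:
    """At levels 1-2 the human acts or approves before anything executes."""
    return level <= 2

def human_role(level: int) -> str:
    """Look up the human's role at a given autonomy level."""
    return AUTONOMY_LEVELS[level][1]
```

Such a table makes the delegation decision auditable: a system can log which autonomy level each task ran at and which human role was expected.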
---
## Conflict Resolution
### Conflict Types
```python
class ConflictResolver:
    def resolve(self, conflict):
        if conflict.type == 'information':
            # Different information
            return self.share_information(conflict.parties)
        elif conflict.type == 'values':
            # Value misalignment
            return self.align_values(conflict)
        elif conflict.type == 'process':
            # Disagreement on approach
            return self.negotiate_approach(conflict)
```
---
## Team Compositions
### Composition Types
| Type | Human:AI Ratio | Use Case |
|------|----------------|----------|
| Human-dominant | High | High-risk decisions |
| Balanced | Equal | Collaborative tasks |
| AI-dominant | Low | Data-intensive tasks |
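A minimal heuristic for picking a composition from the table, assuming normalized 0-1 scores for risk and data volume (the 0.7 threshold is an assumption for illustration, not prescribed by the paper):

```python
def choose_composition(risk: float, data_volume: float) -> str:
    """Map task characteristics to a composition type from the table.
    Both inputs are assumed normalized to [0, 1]."""
    if risk > 0.7:
        return "human-dominant"   # high-risk decisions stay with humans
    if data_volume > 0.7:
        return "AI-dominant"      # data-intensive tasks lean on AI
    return "balanced"             # default: collaborative tasks
```

Risk is checked first so that a task that is both high-risk and data-heavy still lands on the human-dominant side.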
---
## Evaluation Metrics
| Metric | Description |
|--------|-------------|
| Team Performance | Task success rate |
| Trust Level | Human confidence in AI |
| Coordination Cost | Time spent coordinating |
| Adaptation Speed | Learning curve |
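The metrics above could be rolled into a single report; a sketch where the field names and inputs are assumptions (the paper does not prescribe a computation):

```python
def evaluate_team(outcomes, coordination_minutes, trust_score):
    """Summarize a team session using the metrics in the table.

    outcomes: list of bools, one per task (success/failure)
    coordination_minutes: time spent coordinating
    trust_score: human confidence in the AI, 0..1
    """
    return {
        "team_performance": sum(outcomes) / len(outcomes),  # success rate
        "coordination_cost": coordination_minutes,           # time overhead
        "trust_level": trust_score,                          # human confidence
    }
```

Adaptation speed is omitted from the sketch since it requires measurements across multiple sessions rather than a single snapshot.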
---
## Best Practices
1. **Clear role definition** - Who does what
2. **Transparent AI** - Explainable decisions
3. **Override capability** - Human can intervene
4. **Feedback loops** - Continuous improvement
5. **Regular calibration** - Trust adjustments
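Practices 2-4 can be sketched as a thin wrapper that logs every decision and lets the human veto it before execution (the class and method names are hypothetical):

```python
class OverridableAgent:
    """Wrap an AI decision step with transparency and human override."""
    def __init__(self):
        self.decision_log = []  # feedback loop: record decisions for review

    def decide(self, task):
        # Placeholder decision step; a real agent would plan here
        return f"plan for {task}"

    def act(self, task, human_approves):
        decision = self.decide(task)
        self.decision_log.append(decision)  # transparent record
        if human_approves(decision):        # human can intervene
            return decision
        return None                         # overridden by the human
```

Routing every action through `human_approves` keeps the override guaranteed rather than optional, and the log supports later trust calibration.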
---
## Research Agenda
| Area | Open Questions |
|------|----------------|
| Formulation | Optimal team design? |
| Coordination | Best delegation strategies? |
| Maintenance | Trust repair mechanisms? |
| Training | Skill transfer methods? |
---
## Examples
### Example 1: Basic Application
**User:** I need to apply Human-AI Teaming: A Review and Outlook to my analysis.
**Agent:** I'll help you apply the human-ai-teaming-survey skill. First, let me understand your specific use case...
**Context:** Apply the methodology
### Example 2: Advanced Scenario
**User:** Walk me through a complex analysis scenario.
**Agent:** Based on the methodology, I'll guide you through the advanced application...
### Example 3: Key Considerations
**User:** What are the key considerations for human-ai-teaming-survey?
**Agent:** Let me search for the latest research and best practices...
## References
- Paper: https://arxiv.org/abs/2504.05755
- DOI: https://doi.org/10.48550/arXiv.2504.05755
---
**Created:** 2026-03-28
**Source:** arXiv:2504.05755v2 - "Unraveling Human-AI Teaming: A Review and Outlook"