Claude Integration Guide
Comprehensive guide to Claude/Anthropic's AI platform and how it integrates with ayaiay packs
Overview
Claude is Anthropic's family of large language models designed for safe, steerable, and helpful AI interactions. Claude excels at complex reasoning, code generation, analysis, and creative tasks while maintaining strong safety properties.
What is Claude?
Claude is an AI assistant that can:
- Reason deeply about complex problems with chain-of-thought reasoning
- Process long documents with a 200K-token context window
- Generate and analyze code across multiple programming languages
- Use external tools via Model Context Protocol (MCP)
- Create persistent content through Artifacts
- Follow custom instructions tailored to your workflow
- Integrate with desktop workflows via Claude Desktop app
Claude Model Family
- Claude 3.5 Sonnet: Most capable model, best for complex tasks
- Claude 3 Opus: Strong reasoning and creative writing
- Claude 3 Haiku: Fastest and most cost-effective option for simple tasks
- Claude 3.5 Haiku: Fast model with improved capability over Claude 3 Haiku
Key Features
- 200K Token Context: Process entire codebases, books, or large documents
- Projects: Organize work with custom knowledge and instructions
- MCP Integration: Connect to external data sources and tools
- Artifacts: Generate and iterate on persistent content
- Vision Capabilities: Analyze images, diagrams, and screenshots
- Code Execution: Run and test code directly in conversations
Core Concepts
Projects
Projects in Claude are workspaces that combine custom instructions with knowledge sources to create specialized AI assistants.
What are Projects?
Projects allow you to:
- Define custom instructions that shape Claude's behavior
- Add knowledge bases (documents, code, data)
- Organize conversations by topic or workflow
- Share context across multiple chats
- Create specialized assistants for specific domains
Project Structure
Project: "Backend API Development"
├── Custom Instructions
│ ├── Tech Stack (Node.js, Express, PostgreSQL)
│ ├── Coding Standards (ESLint, Prettier)
│ ├── Architecture Patterns (RESTful, MVC)
│ └── Testing Requirements (Jest, 80% coverage)
├── Project Knowledge
│ ├── API Documentation
│ ├── Database Schema
│ ├── Coding Guidelines
│ └── Example Implementations
└── Conversations
├── Feature Development
├── Bug Fixes
└── Code Reviews
Benefits of Projects
- Consistency: Same instructions across all conversations
- Context Retention: Knowledge persists between chats
- Specialization: Different projects for different domains
- Team Sharing: Share project setups with collaborators
- Efficiency: No need to repeat context
System Instructions
System Instructions (also called Custom Instructions) define how Claude should behave within a project.
What are System Instructions?
System instructions are persistent rules and context that:
- Shape Claude's personality and communication style
- Define domain expertise and specialized knowledge
- Set workflow boundaries and safety guardrails
- Specify output formats and conventions
- Configure tool usage and preferences
Example System Instructions
# Backend Developer Assistant
## Role
You are an expert backend developer specializing in Node.js, Express, and PostgreSQL.
## Technical Context
- Stack: Node.js 20+, Express 4.x, PostgreSQL 15
- ORM: Prisma
- Testing: Jest + Supertest
- API Style: RESTful with OpenAPI docs
## Code Standards
- Use async/await, never callbacks
- TypeScript strict mode enabled
- Follow Airbnb style guide
- Write JSDoc for public functions
- 80%+ test coverage required
## Workflow Rules
1. Always suggest tests alongside code
2. Consider security implications
3. Optimize for readability over cleverness
4. Validate inputs and handle errors gracefully
5. Include logging for debugging
## Output Preferences
- Explain design decisions
- Provide complete, runnable code
- Include error handling
- Add inline comments for complex logic
- Suggest related improvements
Best Practices for Instructions
- Be Specific: Clear, concrete guidelines work better than vague requests
- Provide Context: Include tech stack, patterns, and constraints
- Set Boundaries: Define what Claude should and shouldn't do
- Use Examples: Show preferred patterns and formats
- Iterate: Refine instructions based on actual usage
MCP Servers
Model Context Protocol (MCP) is Anthropic's open standard for connecting AI models to external tools and data sources.
What is MCP?
MCP enables Claude to:
- Access external data (databases, APIs, files)
- Use specialized tools (calculators, searchers, analyzers)
- Integrate with services (GitHub, Slack, cloud providers)
- Execute actions (create files, run commands, send messages)
- Maintain context across tool interactions
MCP Architecture
Claude Desktop
├── MCP Client (built-in)
└── MCP Servers (configurable)
├── Filesystem Server (read/write files)
├── GitHub Server (repo operations)
├── Database Server (query data)
├── Search Server (web/doc search)
└── Custom Servers (your tools)
Common MCP Servers
| Server | Purpose | Capabilities |
|---|---|---|
| filesystem | Local file access | Read, write, search files |
| github | GitHub integration | Repos, issues, PRs, actions |
| postgres | Database queries | Query, schema inspection |
| brave-search | Web search | Real-time web information |
| slack | Team communication | Read, send messages |
| puppeteer | Web automation | Browse, scrape, test |
Example MCP Configuration
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..."
      }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    }
  }
}
Benefits of MCP
- Extensibility: Add new capabilities without modifying Claude
- Standardization: One protocol for all tools and data sources
- Security: Controlled access with explicit permissions
- Composability: Combine multiple servers for complex workflows
- Open Source: Build and share custom servers
Artifacts
Artifacts are persistent, editable content blocks that Claude generates for substantial, self-contained work.
What are Artifacts?
Artifacts automatically appear when Claude creates significant content that you'll want to reuse, edit, or export. They provide a clean, focused view of generated work separate from the conversation.
Types of Content Created as Artifacts:
- Code files — Complete, runnable programs and scripts
- Documents — Articles, reports, essays, and written content
- Websites — HTML/CSS/JS pages and web components
- Diagrams — Mermaid charts, SVG visualizations, and flowcharts
- Data structures — JSON, XML, CSV, and other structured data
Artifact Features
Artifacts provide powerful capabilities for working with generated content:
| Feature | Description |
|---|---|
| Persistent | Survive across conversation turns, maintaining your work |
| Editable | Request changes and see live updates in real-time |
| Downloadable | Export to your local system with one click |
| Version Controlled | Track changes over iterations automatically |
| Interactive | Some artifacts are runnable/previewable directly in Claude |
When Artifacts Appear
Artifacts are automatically created for content that meets these criteria:
- Substantial — More than a few lines (typically 15+ lines of code or text)
- Self-contained — Can stand alone outside the conversation context
- Intended for use — User will likely want to save, export, or reuse it
- Modifiable — User might want to request changes or iterations
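The criteria above can be sketched as a simple heuristic. This is purely illustrative — Claude's actual artifact-creation logic is not public:

```python
def should_create_artifact(content: str, standalone: bool, reusable: bool) -> bool:
    """Illustrative heuristic mirroring the criteria above (not Claude's real logic)."""
    substantial = len(content.splitlines()) >= 15  # "more than a few lines"
    return substantial and standalone and reusable

snippet = "\n".join(f"line {i}" for i in range(20))
print(should_create_artifact(snippet, standalone=True, reusable=True))   # True
print(should_create_artifact("x = 1", standalone=True, reusable=True))   # False
```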
Example Artifact Use Cases
Code Development:
- "Create a React component for a todo list" → Complete React component file
- "Write a Python script to analyze CSV data" → Complete Python script with imports
Data & Schema Design:
- "Design a database schema for a blog" → SQL schema with tables and relationships
- "Create a JSON API response structure" → Complete JSON schema with examples
Web Development:
- "Create a landing page for my product" → HTML page with CSS styling
- "Build a responsive navigation menu" → HTML/CSS/JS component
Documentation:
- "Generate API documentation" → Markdown documentation file
- "Write a README for my project" → Complete README with sections and examples
Benefits of Artifacts
Artifacts enhance your workflow by providing:
- Clear Output — Generated content is visually separated from the conversation for better focus
- Easy Iteration — Request specific changes to artifacts without losing context or previous versions
- Direct Export — Copy or download generated work instantly for use in your projects
- Context Preservation — Artifacts don't clutter the conversation, keeping discussions clean and organized
- Professional Output — Production-ready code and content, not just examples or snippets
Extended Context
Claude's 200K token context window enables processing of entire codebases, long documents, and complex conversations.
Context Window Capabilities
- ~150,000 words of text (roughly 500 pages)
- ~50,000 lines of code
- Multiple files: Entire small-to-medium projects
- Long conversations: Hours of back-and-forth dialogue
- Combined inputs: Documents + code + conversation
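The figures above follow from the common rule of thumb that one token is roughly 0.75 English words. A quick back-of-the-envelope check — the ratios are rough averages, not exact tokenizer behavior:

```python
TOKENS = 200_000
WORDS_PER_TOKEN = 0.75   # rough average for English prose
WORDS_PER_PAGE = 300     # rough figure for a printed page

words = int(TOKENS * WORDS_PER_TOKEN)
pages = words // WORDS_PER_PAGE
print(f"{words:,} words, roughly {pages:,} pages")  # 150,000 words, roughly 500 pages
```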
Practical Applications
| Use Case | Context Used | Example |
|---|---|---|
| Codebase Analysis | 50-150K tokens | Review entire microservice |
| Document Processing | 100-180K tokens | Analyze research papers |
| Conversation History | 20-50K tokens | Multi-hour debugging session |
| Multi-file Editing | 30-80K tokens | Refactor across 10+ files |
| Knowledge Integration | 50-100K tokens | Project docs + implementation |
Best Practices for Long Context
- Front-load Important Info: Put critical context early
- Structure Clearly: Use headings and sections
- Avoid Redundancy: Don't repeat information
- Use Project Knowledge: Store persistent docs in projects
- Monitor Token Usage: Be aware of context limits
Context Management Tips
✅ Good: Provide complete, organized context
"Here's the API implementation (2000 lines):
[full code with clear sections]
Find bugs related to authentication."
❌ Less Effective: Fragmented context
"Let me paste part 1... now part 2... wait, let me add part 3..."
→ Results in disjointed analysis
✅ Good: Use Projects for persistent context
Project Knowledge: Architecture docs, API specs
Conversation: Specific task with focused context
❌ Less Effective: Re-paste same docs every chat
→ Wastes tokens and time
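The "front-load important info" advice can be applied mechanically when assembling a long prompt. A sketch — the function and section labels are illustrative, not an ayaiay or Claude API:

```python
def assemble_prompt(critical: list[str], supporting: list[str], task: str) -> str:
    """Place critical context first, supporting material next, and the task last."""
    parts = (
        ["# Critical Context"] + critical
        + ["# Supporting Material"] + supporting
        + ["# Task", task]
    )
    return "\n\n".join(parts)

prompt = assemble_prompt(
    critical=["API spec: POST /auth/login returns a JWT"],
    supporting=["Style guide: use async/await"],
    task="Find bugs related to authentication.",
)
print(prompt.splitlines()[0])  # "# Critical Context"
```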
Claude Desktop Integration
Claude Desktop is a native application that brings Claude's capabilities to your local workflow with MCP integration.
What is Claude Desktop?
A standalone desktop app that provides:
- Native Performance: Optimized for desktop use
- MCP Server Integration: Connect to local tools and data
- Keyboard Shortcuts: Quick access to features
- Local File Access: Work with files on your computer
- Multiple Projects: Switch between different workspaces
- Offline Artifacts: View/edit previously generated content
Setup and Configuration
Installation
# macOS
brew install --cask claude
# Or download from:
# https://claude.ai/download
Configuration Location
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Linux: ~/.config/Claude/claude_desktop_config.json
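The locations listed above can be resolved in code. A small sketch using those same paths (platform handling deliberately simplified):

```python
import os
from pathlib import Path

def claude_config_path(system: str) -> Path:
    """Return the claude_desktop_config.json path for a platform name."""
    if system == "darwin":      # macOS
        base = Path.home() / "Library" / "Application Support" / "Claude"
    elif system == "windows":
        base = Path(os.environ.get("APPDATA", "")) / "Claude"
    else:                       # linux and others
        base = Path.home() / ".config" / "Claude"
    return base / "claude_desktop_config.json"

print(claude_config_path("darwin"))
```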
Example Configuration
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/me/Documents",
        "/Users/me/Projects"
      ]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  },
  "globalShortcut": "CommandOrControl+Shift+Space"
}
Desktop-Specific Features
| Feature | Description | Benefit |
|---|---|---|
| MCP Servers | Local tool integration | Access filesystem, databases, APIs |
| Quick Launch | Global keyboard shortcut | Instant access from any app |
| Project Sync | Sync with web version | Seamless cross-device work |
| Artifact Export | Save to local files | Direct integration with workflow |
| Notifications | Desktop notifications | Stay updated on long tasks |
Workflow Integration
Example: Local Development Workflow
1. Open Claude Desktop (Cmd+Shift+Space)
2. Select "Backend API" project
3. "Analyze the authentication module"
→ MCP filesystem server reads local code
4. "Find potential security issues"
→ Claude analyzes with full context
5. "Fix the JWT validation bug"
→ Generates fix, creates artifact
6. Export artifact to replace original file
7. "Run the test suite"
→ MCP executes tests via terminal access
ayaiay Integration
Concept Mapping
Here's how ayaiay concepts map to Claude's native features:
| ayaiay Concept | Claude Equivalent | Description |
|---|---|---|
| Pack | Project + Instructions | Reusable AI configuration |
| Instructions | System Instructions | Behavior and domain rules |
| Knowledge | Project Knowledge | Documents and reference materials |
| Tools | MCP Servers | External capabilities and data access |
| Context | Extended Context | Long-form input handling |
| Artifacts | Artifacts | Generated output content |
Pack to Claude Translation
ayaiay Pack Structure
# pack.yaml
name: python-expert
version: 1.0.0
description: Expert Python developer with testing focus
instructions: |
You are a Python expert specializing in:
- Clean, idiomatic Python code
- Type hints and Pydantic
- Pytest and test-driven development
- Async programming with asyncio
knowledge:
- python-style-guide.md
- testing-best-practices.md
- async-patterns.md
tools:
- filesystem
- python-repl
- pytest-runner
prompts:
- refactor: "Refactor this code to be more Pythonic"
- test: "Write comprehensive tests for this function"
- optimize: "Optimize this code for performance"
Claude Project Translation
Project Name: Python Expert
Custom Instructions:
# Python Development Expert
You are a Python expert specializing in:
- Clean, idiomatic Python code
- Type hints and Pydantic
- Pytest and test-driven development
- Async programming with asyncio
## Code Standards
- Use type hints for all functions
- Follow PEP 8 style guide
- Prefer comprehensions over loops
- Use context managers for resources
- Write docstrings for public APIs
## Testing Requirements
- Use pytest with fixtures
- Aim for 80%+ coverage
- Test edge cases and errors
- Mock external dependencies
- Use parametrize for multiple cases
## Async Patterns
- Use async/await consistently
- Prefer asyncio.gather for concurrent tasks
- Handle cancellation properly
- Use asyncio.Queue for producer-consumer
Project Knowledge:
- Upload: python-style-guide.md
- Upload: testing-best-practices.md
- Upload: async-patterns.md
MCP Servers (in claude_desktop_config.json):
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/projects"]
    },
    "python-repl": {
      "command": "python",
      "args": ["-m", "mcp_server_python_repl"]
    }
  }
}
Using ayaiay Packs with Claude
Option 1: Manual Translation
- Create Claude Project with pack name
- Copy instructions to Custom Instructions
- Upload knowledge files to Project Knowledge
- Configure MCP servers for tools in desktop config
Option 2: Export to Claude Format
# Future ayaiay feature
ayaiay pack export python-expert --format claude --output claude-project/
# Generates:
# claude-project/
# ├── instructions.md (Custom Instructions)
# ├── knowledge/ (Knowledge files)
# └── mcp-config.json (MCP server config)
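The export layout above could be produced from a pack definition roughly like this. This is a sketch of the hypothetical future feature, not current ayaiay behavior; the pack structure follows the pack.yaml examples in this guide:

```python
pack = {
    "name": "python-expert",
    "instructions": "You are a Python expert...",
    "knowledge": ["python-style-guide.md", "testing-best-practices.md"],
    "tools": ["filesystem"],
}

def export_to_claude(pack: dict) -> dict[str, object]:
    """Map a pack to Claude-shaped outputs: instructions text and MCP config."""
    mcp_servers = {
        tool: {"command": "npx",
               "args": ["-y", f"@modelcontextprotocol/server-{tool}"]}
        for tool in pack["tools"]
    }
    return {
        "instructions.md": pack["instructions"],
        "knowledge/": list(pack["knowledge"]),
        "mcp-config.json": {"mcpServers": mcp_servers},
    }

out = export_to_claude(pack)
print(sorted(out))  # ['instructions.md', 'knowledge/', 'mcp-config.json']
```

The naming convention `@modelcontextprotocol/server-<tool>` holds for the official reference servers; custom tools would need an explicit mapping.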
Option 3: Direct Integration
# Future ayaiay + Claude Desktop integration
ayaiay pack activate python-expert --provider claude
# Automatically:
# 1. Creates/updates Claude project
# 2. Syncs instructions and knowledge
# 3. Configures MCP servers
Workflow Comparison
GitHub Copilot Workflow:
# Agent files in .github/
ayaiay pack install python-expert
# → Creates .github/copilot-agents/python-expert.md
# Copilot auto-detects and uses agent
Claude Workflow:
# Manual setup (current)
1. Open Claude or Claude Desktop
2. Create new Project "Python Expert"
3. Paste instructions from pack
4. Upload knowledge files
5. Configure MCP servers (if using Desktop)
# Automated (future)
ayaiay pack install python-expert --provider claude
# → Auto-configures Claude project
Best Practices
Creating Effective Claude Prompts
1. Be Specific and Direct
❌ Vague: "Make this code better"
✅ Specific: "Refactor this function to:
- Use type hints
- Handle edge cases (None, empty list)
- Add docstring with examples
- Improve variable names"
2. Provide Complete Context
❌ Incomplete: "Fix this bug" [pastes 10 lines]
✅ Complete: "Fix authentication bug in this module:
[pastes full file with context]
The issue: JWT tokens expire but aren't refreshed.
Expected: Automatic token refresh before expiry.
Current error: [error message]"
3. Use Step-by-Step Instructions
❌ Ambiguous: "Set up a new API endpoint"
✅ Step-by-step:
"Create a new API endpoint for user registration:
1. Add route in routes/auth.js
2. Create validation schema with Joi
3. Implement controller in controllers/auth.js
4. Add database model if needed
5. Write integration tests
6. Update API documentation"
4. Specify Output Format
❌ Unclear: "Analyze this code"
✅ Clear format:
"Analyze this code and provide:
1. Security vulnerabilities (if any)
2. Performance bottlenecks
3. Code quality issues
4. Suggested improvements
Format each section with:
- Issue description
- Severity (High/Medium/Low)
- Code location
- Recommended fix"
5. Use Examples
❌ Abstract: "Write tests for this function"
✅ With examples:
"Write pytest tests for this function:
[function code]
Include tests for:
- Happy path: valid input → expected output
- Edge cases: empty input, None, maximum values
- Error cases: invalid types, out of range
Example format:
def test_function_with_valid_input():
result = function(valid_input)
assert result == expected_output
"
Organizing Projects
Project Structure Strategy
Approach 1: By Domain
Projects:
├── Backend Development
├── Frontend Development
├── DevOps & Infrastructure
├── Data Analysis
└── Documentation
Approach 2: By Technology
Projects:
├── Python Development
├── Node.js Development
├── React Development
├── Database Design
└── Cloud Architecture
Approach 3: By Application
Projects:
├── E-commerce Platform (all techs)
├── Analytics Dashboard (all techs)
├── Mobile App Backend (all techs)
└── Documentation Site (all techs)
Project Content Guidelines
Custom Instructions (500-2000 words):
- Domain expertise and role definition
- Technical stack and versions
- Code standards and patterns
- Workflow rules and boundaries
- Output preferences
Project Knowledge (up to 10 files, ~20MB total):
- Architecture documentation
- API specifications
- Code examples and patterns
- Style guides and conventions
- Domain-specific references
Iterating on Instructions
Testing and Refinement Process
- Start Simple: Basic role and tech stack
- Test with Real Tasks: Try actual use cases
- Identify Gaps: Note where Claude needs guidance
- Add Specifics: Update instructions based on gaps
- Repeat: Continuous refinement
Example Iteration
Version 1 (Initial):
You are a backend developer working with Node.js and Express.
Version 2 (After testing):
You are a backend developer specializing in Node.js and Express.
Tech Stack:
- Node.js 20+, Express 4.x
- PostgreSQL with Prisma ORM
- Jest for testing
Code Standards:
- Use async/await
- Follow Airbnb style guide
Version 3 (After more use):
You are a backend developer specializing in Node.js and Express.
Tech Stack:
- Node.js 20+, Express 4.x
- PostgreSQL with Prisma ORM
- Jest + Supertest for testing
- Winston for logging
Code Standards:
- Use async/await, never callbacks
- Follow Airbnb style guide with TypeScript
- Write JSDoc comments for all functions
- 80%+ test coverage required
Security Requirements:
- Validate all inputs with Joi
- Use parameterized queries (Prisma handles this)
- Sanitize user-generated content
- Implement rate limiting on public endpoints
- Use helmet middleware for headers
Error Handling:
- Use custom error classes
- Log errors with Winston
- Return consistent error format:
{ error: { code: string, message: string, details?: any } }
- Never expose stack traces in production
Output Preferences:
- Provide complete, runnable code
- Include error handling
- Add tests alongside implementation
- Explain non-obvious decisions
Tool Configuration
Selecting MCP Servers
For Code Development:
{
"mcpServers": {
"filesystem": { /* access code files */ },
"github": { /* repo operations */ },
"git": { /* version control */ }
}
}
For Data Analysis:
{
"mcpServers": {
"filesystem": { /* access data files */ },
"postgres": { /* query databases */ },
"python": { /* run analysis scripts */ }
}
}
For Content Creation:
{
"mcpServers": {
"filesystem": { /* manage documents */ },
"search": { /* research information */ },
"screenshot": { /* capture visuals */ }
}
}
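The three workflows above amount to presets, and choosing one can be as simple as a lookup table. The server names follow the examples in this guide; the preset helper itself is illustrative:

```python
SERVER_PRESETS: dict[str, list[str]] = {
    "code": ["filesystem", "github", "git"],
    "data": ["filesystem", "postgres", "python"],
    "content": ["filesystem", "search", "screenshot"],
}

def servers_for(workflow: str) -> list[str]:
    """Look up the MCP servers suggested for a workflow."""
    try:
        return SERVER_PRESETS[workflow]
    except KeyError:
        raise ValueError(
            f"Unknown workflow: {workflow!r}. Choose from {sorted(SERVER_PRESETS)}"
        ) from None

print(servers_for("data"))  # ['filesystem', 'postgres', 'python']
```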
Examples
Example 1: Python Development Pack
ayaiay Pack Definition:
# packs/python-dev/pack.yaml
name: python-developer
version: 1.0.0
description: Expert Python developer with modern best practices
instructions: |
# Python Development Expert
You are an expert Python developer specializing in modern Python practices.
## Technical Focus
- Python 3.10+ with type hints
- Pydantic for data validation
- Pytest with fixtures and parametrize
- Async programming with asyncio
- FastAPI for web services
## Code Standards
- Use type hints for all functions and classes
- Follow PEP 8 (use black formatter)
- Write docstrings (Google style)
- Prefer dataclasses or Pydantic models
- Use context managers for resources
## Testing Requirements
- Write tests with pytest
- Use fixtures for setup/teardown
- Parametrize tests for multiple cases
- Mock external dependencies
- Aim for 80%+ coverage
## Async Patterns
- Use async/await consistently
- Leverage asyncio.gather for concurrency
- Handle exceptions in async code
- Use asyncio.Queue for producer-consumer
- Properly close async resources
## Code Review Checklist
1. Type hints present and correct
2. Error handling implemented
3. Resources properly managed
4. Tests written and passing
5. Docstrings complete
6. No security vulnerabilities
knowledge:
- python-patterns.md
- testing-guide.md
- async-cookbook.md
tools:
- filesystem
- python-repl
prompts:
- refactor: "Refactor to be more Pythonic with type hints"
- test: "Write comprehensive pytest tests"
- async: "Convert to async/await pattern"
Claude Project Setup:
Project Name: Python Developer
Custom Instructions: [Copy instructions from pack]
Project Knowledge:
- python-patterns.md: Common patterns and anti-patterns
- testing-guide.md: Pytest best practices
- async-cookbook.md: Asyncio recipes
MCP Configuration:
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/python-projects"]
    }
  }
}
Usage Example:
You: I need to build a FastAPI endpoint for user registration.
It should validate email, hash password, and save to database.
Claude: I'll create a complete user registration endpoint with validation,
security, and tests.
[Creates artifact: user_registration.py]
from fastapi import APIRouter, Depends, HTTPException, status
from pydantic import BaseModel, EmailStr, Field
from passlib.context import CryptContext
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from typing import Annotated

# The User model and get_db session dependency are assumed to be defined
# elsewhere in the application (e.g. app.models and app.database).
router = APIRouter(prefix="/auth", tags=["authentication"])
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
class UserRegistration(BaseModel):
"""User registration request model."""
email: EmailStr
password: str = Field(min_length=8, max_length=100)
name: str = Field(min_length=1, max_length=100)
class UserResponse(BaseModel):
"""User response model (no password)."""
id: int
email: str
name: str
@router.post("/register", response_model=UserResponse, status_code=status.HTTP_201_CREATED)
async def register_user(
user_data: UserRegistration,
db: Annotated[AsyncSession, Depends(get_db)]
) -> UserResponse:
"""
Register a new user.
Args:
user_data: User registration information
db: Database session
Returns:
Created user information
Raises:
HTTPException: If email already exists
"""
# Check if user exists
existing = await db.execute(
select(User).where(User.email == user_data.email)
)
if existing.scalar_one_or_none():
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Email already registered"
)
# Hash password
hashed_password = pwd_context.hash(user_data.password)
# Create user
user = User(
email=user_data.email,
name=user_data.name,
hashed_password=hashed_password
)
db.add(user)
await db.commit()
await db.refresh(user)
return UserResponse(id=user.id, email=user.email, name=user.name)
[Creates artifact: test_user_registration.py]
import pytest
from httpx import AsyncClient
from sqlalchemy.ext.asyncio import AsyncSession
@pytest.mark.asyncio
async def test_register_user_success(client: AsyncClient, db: AsyncSession):
"""Test successful user registration."""
response = await client.post("/auth/register", json={
"email": "test@example.com",
"password": "securepass123",
"name": "Test User"
})
assert response.status_code == 201
data = response.json()
assert data["email"] == "test@example.com"
assert data["name"] == "Test User"
assert "id" in data
assert "password" not in data # Password should not be returned
@pytest.mark.asyncio
async def test_register_duplicate_email(client: AsyncClient, db: AsyncSession):
"""Test registration with existing email."""
user_data = {
"email": "duplicate@example.com",
"password": "password123",
"name": "User One"
}
# First registration
await client.post("/auth/register", json=user_data)
# Duplicate registration
response = await client.post("/auth/register", json=user_data)
assert response.status_code == 400
assert "already registered" in response.json()["detail"].lower()
@pytest.mark.asyncio
@pytest.mark.parametrize("invalid_email", [
"not-an-email",
"@example.com",
"test@",
"test",
])
async def test_register_invalid_email(client: AsyncClient, invalid_email: str):
"""Test registration with invalid email formats."""
response = await client.post("/auth/register", json={
"email": invalid_email,
"password": "password123",
"name": "Test"
})
assert response.status_code == 422 # Validation error
@pytest.mark.asyncio
async def test_register_short_password(client: AsyncClient):
"""Test registration with password too short."""
response = await client.post("/auth/register", json={
"email": "test@example.com",
"password": "short", # Less than 8 characters
"name": "Test"
})
assert response.status_code == 422
Example 2: Documentation Writer Pack
ayaiay Pack Definition:
# packs/docs-writer/pack.yaml
name: documentation-writer
version: 1.0.0
description: Technical documentation specialist
instructions: |
# Technical Documentation Specialist
You are an expert technical writer specializing in developer documentation.
## Documentation Standards
- Write for your audience (developers, users, etc.)
- Use clear, concise language
- Include code examples
- Add diagrams where helpful
- Structure with clear headings
## Content Structure
1. **Overview**: What it is, why it matters
2. **Getting Started**: Quick setup and basic usage
3. **Core Concepts**: Key ideas and terminology
4. **How-To Guides**: Step-by-step instructions
5. **Reference**: Detailed API/configuration docs
6. **Examples**: Real-world use cases
7. **Troubleshooting**: Common issues and solutions
## Writing Style
- Active voice preferred
- Present tense for current functionality
- Use "you" to address the reader
- Keep sentences short (15-20 words)
- One main idea per paragraph
## Code Examples
- Complete, runnable examples
- Show both input and expected output
- Include error handling
- Add comments for clarity
- Use realistic variable names
## Formatting
- Use Markdown
- Code blocks with language tags
- Tables for structured data
- Lists for sequential steps
- Callouts for warnings/tips
knowledge:
- markdown-guide.md
- docs-examples.md
- writing-style-guide.md
Claude Project Setup:
Project Name: Documentation Writer
Custom Instructions: [Copy from pack]
Usage Example:
You: Document our new caching system for developers
Claude: I'll create comprehensive documentation for the caching system.
[Creates artifact: caching-documentation.md]
# Caching System Documentation
## Overview
The caching system provides high-performance, distributed caching for your application with automatic invalidation and multi-tier storage.
**Key Benefits**:
- 10-100x faster data access
- Reduces database load by 80%+
- Automatic cache invalidation
- Simple, intuitive API
## Getting Started
### Installation
npm install @yourcompany/cache
### Basic Usage
import { Cache } from '@yourcompany/cache';
// Initialize cache
const cache = new Cache({
ttl: 300, // 5 minutes
storage: 'redis'
});
// Store value
await cache.set('user:123', userData);
// Retrieve value
const user = await cache.get('user:123');
// Delete value
await cache.delete('user:123');
## Core Concepts
### Cache Tiers
The system uses three cache tiers for optimal performance:
1. **Memory Cache** (L1): Fastest, limited capacity
2. **Redis Cache** (L2): Fast, shared across instances
3. **Database** (L3): Source of truth, slowest
graph LR
A[Request] --> B{Memory Cache}
B -->|Hit| C[Return Data]
B -->|Miss| D{Redis Cache}
D -->|Hit| C
D -->|Miss| E[Database]
E --> C
### TTL (Time To Live)
Each cached item has a TTL that determines how long it remains valid.
const cache = new Cache({ ttl: 600 }); // 10 minutes default
// Custom TTL for specific item
await cache.set('session:abc', sessionData, { ttl: 1800 }); // 30 min
### Cache Keys
Use consistent, hierarchical key naming:
// Good
'user:123'
'user:123:profile'
'user:123:settings'
'product:456'
'product:456:inventory'
// Avoid
'123'
'userProfile123'
'prod_456'
## API Reference
### `Cache` Class
#### Constructor
new Cache(options: CacheOptions)
**Parameters**:
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `ttl` | `number` | Default TTL in seconds | `300` |
| `storage` | `'memory' \| 'redis'` | Storage backend | `'redis'` |
| `prefix` | `string` | Key prefix | `'cache:'` |
#### Methods
##### `get<T>(key: string): Promise<T | null>`
Retrieve cached value.
const user = await cache.get<User>('user:123');
if (user) {
console.log('Cache hit:', user);
} else {
console.log('Cache miss');
}
## Examples
### Example 1: User Data Caching
async function getUser(userId: string): Promise<User> {
const cacheKey = `user:${userId}`;
// Try cache first
let user = await cache.get<User>(cacheKey);
if (!user) {
// Cache miss - fetch from database
user = await db.users.findById(userId);
// Store in cache
await cache.set(cacheKey, user, { ttl: 600 });
}
return user;
}
### Example 2: Automatic Invalidation
// Invalidate cache when data changes
async function updateUser(userId: string, updates: Partial<User>) {
// Update database
await db.users.update(userId, updates);
// Invalidate cache
await cache.delete(`user:${userId}`);
await cache.delete(`user:${userId}:profile`);
}
## Troubleshooting
### Cache Not Working
**Symptom**: Cache always misses, data fetched from database every time
**Possible Causes**:
1. Redis not running
2. Incorrect connection config
3. Keys changing between requests
**Solutions**:
- Check Redis status: `redis-cli ping`
- Verify connection: `cache.ping()`
- Log keys to ensure consistency
### Memory Leaks
**Symptom**: Memory usage grows over time
**Possible Causes**:
1. No TTL set (data never expires)
2. Too many unique keys
3. Large values stored
**Solutions**:
- Always set TTL: `{ ttl: 600 }`
- Limit key cardinality
- Store references, not full objects
Example 3: Data Analysis Pack
ayaiay Pack Definition:
```yaml
# packs/data-analysis/pack.yaml
name: data-analyst
version: 1.0.0
description: Data analysis expert with Python/pandas
instructions: |
  # Data Analysis Expert
  You are a data analyst specializing in Python, pandas, and statistical analysis.

  ## Technical Stack
  - Python 3.10+ with type hints
  - pandas for data manipulation
  - numpy for numerical operations
  - matplotlib/seaborn for visualization
  - scipy for statistics

  ## Analysis Workflow
  1. **Load & Inspect**: Read data, check structure, identify issues
  2. **Clean**: Handle missing values, remove duplicates, fix types
  3. **Transform**: Reshape, aggregate, engineer features
  4. **Analyze**: Descriptive stats, correlations, patterns
  5. **Visualize**: Create clear, informative plots
  6. **Report**: Summarize findings with insights

  ## Code Standards
  - Use type hints
  - Handle errors gracefully
  - Add comments for complex operations
  - Create reusable functions
  - Show intermediate results

  ## Best Practices
  - Always inspect data first
  - Check for missing values
  - Validate assumptions
  - Explain statistical choices
  - Provide context for findings

  ## Output Format
  - Show code with results
  - Include visualizations
  - Summarize key findings
  - Suggest next steps
knowledge:
  - pandas-cheatsheet.md
  - statistical-methods.md
  - visualization-guide.md
tools:
  - filesystem
  - python-repl
```
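The six-step workflow from the pack instructions above can be condensed into a small pandas sketch; the column names and inline data are invented for illustration, and plotting (step 5) is omitted:

```python
import pandas as pd

# 1. Load & inspect (an inline frame stands in for pd.read_csv)
df = pd.DataFrame({
    "date": ["2024-01-05", "2024-01-05", "2024-02-10", None],
    "product": ["A", "B", "A", "A"],
    "revenue": [100.0, 250.0, 300.0, 50.0],
})
print(df.dtypes, df.isnull().sum(), sep="\n")

# 2. Clean: fix types, drop rows with no date, remove duplicates
df["date"] = pd.to_datetime(df["date"])
df = df.dropna(subset=["date"]).drop_duplicates()

# 3. Transform: derive a month column for aggregation
df["month"] = df["date"].dt.to_period("M")

# 4. Analyze: aggregate revenue by month
monthly = df.groupby("month")["revenue"].sum()

# 6. Report the aggregated result
print(monthly)
assert monthly.loc["2024-01"] == 350.0
```

Each step mirrors one line of the workflow, so the structure scales directly to a real CSV loaded with `pd.read_csv`.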
**Usage Example**:
You: Analyze this sales data CSV and identify trends
[Attaches: sales_data.csv]
Claude: I'll analyze the sales data and identify key trends.
[Creates artifact: sales_analysis.py]
```python
import pandas as pd
import matplotlib.pyplot as plt

def load_and_inspect(filepath: str) -> pd.DataFrame:
    """Load data and show basic information."""
    df = pd.read_csv(filepath)
    print("Dataset Shape:", df.shape)
    print("\nColumn Types:")
    print(df.dtypes)
    print("\nMissing Values:")
    print(df.isnull().sum())
    print("\nFirst Few Rows:")
    print(df.head())
    return df

def clean_data(df: pd.DataFrame) -> pd.DataFrame:
    """Clean and prepare data for analysis."""
    # Convert date column
    df['date'] = pd.to_datetime(df['date'])
    # Remove duplicates
    df = df.drop_duplicates()
    # Handle missing values
    df['revenue'] = df['revenue'].fillna(0)
    # Add derived columns
    df['month'] = df['date'].dt.to_period('M')
    df['quarter'] = df['date'].dt.to_period('Q')
    return df

def analyze_trends(df: pd.DataFrame) -> dict:
    """Analyze sales trends and patterns."""
    results = {}
    # Monthly revenue trend
    monthly_revenue = df.groupby('month')['revenue'].sum()
    results['monthly_revenue'] = monthly_revenue
    # Average month-over-month growth rate
    growth_rate = monthly_revenue.pct_change().mean() * 100
    results['avg_monthly_growth'] = growth_rate
    # Top products
    top_products = df.groupby('product')['revenue'].sum().nlargest(10)
    results['top_products'] = top_products
    # Seasonal patterns
    df['month_num'] = df['date'].dt.month
    seasonal = df.groupby('month_num')['revenue'].mean()
    results['seasonal_pattern'] = seasonal
    return results

def create_visualizations(df: pd.DataFrame, results: dict) -> None:
    """Create visualizations of key findings."""
    fig, axes = plt.subplots(2, 2, figsize=(15, 10))
    # 1. Revenue trend over time
    results['monthly_revenue'].plot(ax=axes[0, 0], marker='o')
    axes[0, 0].set_title('Monthly Revenue Trend')
    axes[0, 0].set_xlabel('Month')
    axes[0, 0].set_ylabel('Revenue ($)')
    # 2. Top products
    results['top_products'].plot(kind='barh', ax=axes[0, 1])
    axes[0, 1].set_title('Top 10 Products by Revenue')
    axes[0, 1].set_xlabel('Revenue ($)')
    # 3. Seasonal pattern
    results['seasonal_pattern'].plot(kind='bar', ax=axes[1, 0])
    axes[1, 0].set_title('Average Revenue by Month')
    axes[1, 0].set_xlabel('Month')
    axes[1, 0].set_ylabel('Average Revenue ($)')
    # 4. Distribution
    df['revenue'].plot(kind='hist', bins=50, ax=axes[1, 1])
    axes[1, 1].set_title('Revenue Distribution')
    axes[1, 1].set_xlabel('Revenue ($)')
    plt.tight_layout()
    plt.savefig('sales_analysis.png', dpi=300)
    print("Visualizations saved to sales_analysis.png")

# Main analysis
df = load_and_inspect('sales_data.csv')
df = clean_data(df)
results = analyze_trends(df)

print("\n" + "=" * 50)
print("KEY FINDINGS")
print("=" * 50)
print(f"\n1. Average Monthly Growth: {results['avg_monthly_growth']:.2f}%")
print("\n2. Top 3 Products:")
for product, revenue in results['top_products'].head(3).items():
    print(f"   - {product}: ${revenue:,.2f}")
print(f"\n3. Peak Sales Month: {results['seasonal_pattern'].idxmax()}")
print(f"   Average: ${results['seasonal_pattern'].max():,.2f}")
create_visualizations(df, results)
```
### Example 4: API Testing Pack

**ayaiay Pack Definition**:
```yaml
# packs/api-tester/pack.yaml
name: api-tester
version: 1.0.0
description: API testing and validation expert
instructions: |
  # API Testing Expert
  You are an expert at testing REST APIs comprehensively.

  ## Testing Approach
  - Test happy paths first
  - Cover edge cases
  - Test error conditions
  - Validate response schemas
  - Check performance

  ## Test Categories
  1. **Functional**: Does it work correctly?
  2. **Validation**: Are inputs validated?
  3. **Error Handling**: Are errors handled properly?
  4. **Security**: Is it secure against common attacks?
  5. **Performance**: Does it meet performance requirements?

  ## Tools & Techniques
  - Use pytest for the test framework
  - Use requests or httpx for API calls
  - Use Pydantic for schema validation
  - Use pytest-benchmark for performance
  - Mock external dependencies

  ## Test Structure
  - Arrange: Set up test data
  - Act: Make API call
  - Assert: Verify response

  ## Best Practices
  - Use fixtures for common setup
  - Parametrize for multiple cases
  - Test boundary conditions
  - Clean up test data
  - Make tests independent
knowledge:
  - api-testing-guide.md
  - security-testing.md
tools:
  - filesystem
  - http-client
```
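The Arrange/Act/Assert structure from the pack instructions above, shown as a self-contained sketch; the stub `create_user` function here is a stand-in for a real `POST /api/users` call, so it runs without a server:

```python
def create_user(payload):
    # Stub standing in for POST /api/users
    if "@" not in payload.get("email", ""):
        return {"status": 422, "body": {"error": "invalid email"}}
    return {"status": 201, "body": {"id": "u1", **payload}}

def test_create_user_success():
    # Arrange: set up test data
    payload = {"email": "test@example.com", "name": "Test User", "age": 30}
    # Act: make the call
    response = create_user(payload)
    # Assert: verify the response
    assert response["status"] == 201
    assert response["body"]["email"] == payload["email"]

test_create_user_success()
```

Keeping each test to exactly these three phases makes failures easy to localize: a broken Arrange is a fixture problem, a broken Act is the system under test, a broken Assert is the expectation.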
**Usage Example**:
You: Write comprehensive tests for a user creation API endpoint
Endpoint: POST /api/users
Body: { "email": string, "name": string, "age": number }
Response: { "id": string, "email": string, "name": string, "age": number, "created_at": string }
Claude: I'll create comprehensive API tests covering all scenarios.
[Creates artifact: test_user_api.py]
```python
import asyncio
import time
from datetime import datetime

import pytest
import pytest_asyncio  # required for async fixtures and @pytest.mark.asyncio
from httpx import AsyncClient
from pydantic import BaseModel, EmailStr, Field

# Response schema for validation
class UserResponse(BaseModel):
    id: str
    email: EmailStr
    name: str = Field(min_length=1)
    age: int = Field(ge=0, le=150)
    created_at: datetime

@pytest_asyncio.fixture
async def api_client():
    """Provide an HTTP client for API calls."""
    async with AsyncClient(base_url="http://localhost:8000") as client:
        yield client

@pytest.fixture
def valid_user_data():
    """Provide valid user data for tests."""
    return {
        "email": "test@example.com",
        "name": "Test User",
        "age": 30
    }

# ============================================================================
# HAPPY PATH TESTS
# ============================================================================

@pytest.mark.asyncio
async def test_create_user_success(api_client: AsyncClient, valid_user_data: dict):
    """Test successful user creation with valid data."""
    response = await api_client.post("/api/users", json=valid_user_data)
    # Check status code
    assert response.status_code == 201, f"Expected 201, got {response.status_code}"
    # Validate response schema
    user = UserResponse(**response.json())
    # Verify data
    assert user.email == valid_user_data["email"]
    assert user.name == valid_user_data["name"]
    assert user.age == valid_user_data["age"]
    assert user.id is not None
    assert user.created_at <= datetime.now()

# ============================================================================
# VALIDATION TESTS
# ============================================================================

@pytest.mark.asyncio
@pytest.mark.parametrize("invalid_email", [
    "not-an-email",
    "@example.com",
    "test@",
    "test",
    "",
    "test@.com",
])
async def test_create_user_invalid_email(api_client: AsyncClient, invalid_email: str):
    """Test user creation fails with invalid email formats."""
    response = await api_client.post("/api/users", json={
        "email": invalid_email,
        "name": "Test User",
        "age": 30
    })
    assert response.status_code == 422, "Should reject invalid email"
    error = response.json()
    assert "email" in str(error).lower(), "Error should mention email"

@pytest.mark.asyncio
@pytest.mark.parametrize("invalid_age", [-1, 151, 999, -100])
async def test_create_user_invalid_age(api_client: AsyncClient, invalid_age: int):
    """Test user creation fails with out-of-range age."""
    response = await api_client.post("/api/users", json={
        "email": "test@example.com",
        "name": "Test User",
        "age": invalid_age
    })
    assert response.status_code == 422, "Should reject invalid age"

@pytest.mark.asyncio
@pytest.mark.parametrize("invalid_name", ["", " ", "\t", "\n"])
async def test_create_user_empty_name(api_client: AsyncClient, invalid_name: str):
    """Test user creation fails with empty or whitespace name."""
    response = await api_client.post("/api/users", json={
        "email": "test@example.com",
        "name": invalid_name,
        "age": 30
    })
    assert response.status_code == 422, "Should reject empty name"

# ============================================================================
# BOUNDARY TESTS
# ============================================================================

@pytest.mark.asyncio
@pytest.mark.parametrize("age", [0, 1, 149, 150])
async def test_create_user_boundary_ages(api_client: AsyncClient, age: int):
    """Test user creation with boundary age values."""
    response = await api_client.post("/api/users", json={
        "email": f"test{age}@example.com",
        "name": "Test User",
        "age": age
    })
    assert response.status_code == 201, f"Should accept age {age}"

@pytest.mark.asyncio
async def test_create_user_very_long_name(api_client: AsyncClient):
    """Test user creation with a very long name."""
    long_name = "A" * 1000
    response = await api_client.post("/api/users", json={
        "email": "test@example.com",
        "name": long_name,
        "age": 30
    })
    # Should either accept or reject gracefully
    assert response.status_code in [201, 422], "Should handle long names"

# ============================================================================
# ERROR HANDLING TESTS
# ============================================================================

@pytest.mark.asyncio
async def test_create_user_duplicate_email(api_client: AsyncClient, valid_user_data: dict):
    """Test that duplicate email addresses are rejected."""
    # Create first user
    response1 = await api_client.post("/api/users", json=valid_user_data)
    assert response1.status_code == 201
    # Try to create a duplicate
    response2 = await api_client.post("/api/users", json=valid_user_data)
    assert response2.status_code == 409, "Should reject duplicate email"
    error = response2.json()
    assert "email" in str(error).lower(), "Error should mention email"

@pytest.mark.asyncio
async def test_create_user_missing_fields(api_client: AsyncClient):
    """Test user creation fails when required fields are missing."""
    test_cases = [
        {},                             # All missing
        {"email": "test@example.com"},  # Missing name and age
        {"name": "Test"},               # Missing email and age
        {"age": 30},                    # Missing email and name
    ]
    for data in test_cases:
        response = await api_client.post("/api/users", json=data)
        assert response.status_code == 422, f"Should reject incomplete data: {data}"

@pytest.mark.asyncio
async def test_create_user_extra_fields(api_client: AsyncClient, valid_user_data: dict):
    """Test that extra fields are ignored or rejected."""
    data_with_extra = {
        **valid_user_data,
        "extra_field": "should be ignored",
        "admin": True  # Security: clients shouldn't be able to set admin
    }
    response = await api_client.post("/api/users", json=data_with_extra)
    # Should either accept (ignoring extras) or reject
    assert response.status_code in [201, 422]
    if response.status_code == 201:
        user = response.json()
        assert "extra_field" not in user, "Extra fields should not be stored"
        assert "admin" not in user, "Admin field should not be settable"

# ============================================================================
# SECURITY TESTS
# ============================================================================

@pytest.mark.asyncio
async def test_create_user_xss_in_name(api_client: AsyncClient):
    """Test that XSS attempts in name are handled safely."""
    xss_payloads = [
        "<script>alert('xss')</script>",
        "<img src=x onerror=alert('xss')>",
        "javascript:alert('xss')",
    ]
    for payload in xss_payloads:
        response = await api_client.post("/api/users", json={
            "email": f"test{hash(payload)}@example.com",
            "name": payload,
            "age": 30
        })
        # Should handle safely
        if response.status_code == 201:
            user = response.json()
            # Name should be sanitized or escaped
            assert "<script>" not in user["name"], "Script tags should be sanitized"

@pytest.mark.asyncio
async def test_create_user_sql_injection_in_name(api_client: AsyncClient):
    """Test that SQL injection attempts are handled safely."""
    sql_payloads = [
        "'; DROP TABLE users; --",
        "1' OR '1'='1",
        "admin'--",
    ]
    for payload in sql_payloads:
        response = await api_client.post("/api/users", json={
            "email": f"test{hash(payload)}@example.com",
            "name": payload,
            "age": 30
        })
        # Should not cause a server error
        assert response.status_code != 500, "SQL injection should not cause server error"

# ============================================================================
# PERFORMANCE TESTS
# ============================================================================

@pytest.mark.asyncio
async def test_create_user_performance(api_client: AsyncClient):
    """Test that user creation completes within 200ms."""
    # pytest-benchmark's fixture is synchronous, so time the async call directly
    start = time.perf_counter()
    response = await api_client.post("/api/users", json={
        "email": f"perf{datetime.now().timestamp()}@example.com",
        "name": "Performance Test",
        "age": 30
    })
    elapsed = time.perf_counter() - start
    assert response.status_code == 201
    assert elapsed < 0.2, f"User creation took {elapsed:.3f}s, expected < 0.2s"

@pytest.mark.asyncio
async def test_create_users_concurrent(api_client: AsyncClient):
    """Test creating multiple users concurrently."""
    async def create_user(i: int):
        return await api_client.post("/api/users", json={
            "email": f"concurrent{i}@example.com",
            "name": f"User {i}",
            "age": 20 + i
        })
    # Create 10 users concurrently
    tasks = [create_user(i) for i in range(10)]
    responses = await asyncio.gather(*tasks)
    # All should succeed
    for response in responses:
        assert response.status_code == 201, "Concurrent creation should work"
    # All should have unique IDs
    ids = [r.json()["id"] for r in responses]
    assert len(ids) == len(set(ids)), "All users should have unique IDs"
```
## Official References

### Anthropic Documentation

- **Main Documentation**: https://docs.anthropic.com/
  - Complete guide to Claude API, features, and best practices
- **Projects Documentation**: https://docs.anthropic.com/en/docs/build-with-claude/projects
  - Detailed guide to creating and using Claude Projects
- **Model Context Protocol**: https://www.anthropic.com/news/model-context-protocol
  - Introduction to MCP and integration patterns
- **Claude Web Interface**: https://claude.ai/
  - Access Claude through a web browser

### Additional Resources

- **Claude API Reference**: https://docs.anthropic.com/en/api
  - Complete API documentation for programmatic access
- **Prompt Engineering Guide**: https://docs.anthropic.com/en/docs/prompt-engineering
  - Best practices for crafting effective prompts
- **MCP Servers Repository**: https://github.com/modelcontextprotocol/servers
  - Official and community MCP server implementations
- **Claude Desktop Download**: https://claude.ai/download
  - Download the native desktop application

### Community Resources

- **MCP Specification**: https://spec.modelcontextprotocol.io/
  - Technical specification for the MCP protocol
- **Example Projects**: https://github.com/anthropics/anthropic-cookbook
  - Code examples and recipes for common use cases
- **Discord Community**: https://discord.gg/anthropic
  - Community support and discussions
**Last Updated**: 2025-01-30
**ayaiay Version**: Compatible with 0.1.0+
**Claude Version**: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku