Conduct

Workflow

Overview

Conduct uses a three-phase workflow to structure AI agent development:

  1. Spec - Define what needs to be built
  2. Run - Implement the spec and document what was done
  3. Check - Verify the implementation

This workflow ensures systematic development with clear documentation and verification.
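
At a glance, one pass through the workflow is three CLI calls (the file names and IDs here are placeholders; the full commands and document formats are covered below):

# 1. Spec - record what will be built
conduct spec create my-spec.md --spec-id SPEC-001

# 2. Run - record what was actually implemented
conduct run create my-run.md --spec-id SPEC-001

# 3. Check - record whether the result meets the acceptance criteria
conduct check create my-check.md --run-id RUN-001 --result pass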

The Three Phases

Phase 1: Spec (Specification)

Purpose: Define the feature or change clearly

When: Before starting implementation

What to include:

  • Problem statement (What & Why)
  • Proposed solution (How)
  • Acceptance criteria
  • Dependencies and constraints

Example:

# User Authentication System
 
## What
Implement email/password authentication for users.
 
## Why
Users need secure access to protected resources.
 
## How
- bcrypt for password hashing
- JWT tokens for session management
- Rate limiting on login attempts
- Email verification flow
 
## Acceptance Criteria
- Users can register with email/password
- Users can login and receive JWT token
- Passwords are hashed with bcrypt
- Rate limiting prevents brute force
- Email verification required before login
 
## Dependencies
- Database schema for users table
- Email service configuration

CLI Command:

conduct spec create auth-spec.md --spec-id AUTH-001

Phase 2: Run (Implementation)

Purpose: Document what was actually implemented

When: After implementing the spec

What to include:

  • Changes made
  • Files created/modified
  • Test results
  • Known issues or limitations

Example:

# Authentication Implementation
 
## Summary
Implemented complete authentication system per AUTH-001 spec.
 
## Changes Made
- User registration endpoint with email validation
- Login endpoint with JWT token generation
- Password hashing using bcrypt (cost factor: 12)
- Rate limiting middleware (5 attempts per 15 min)
- Email verification with token expiry
 
## Files Modified
- src/auth/register.ts (new)
- src/auth/login.ts (new)
- src/middleware/rateLimit.ts (new)
- src/models/User.ts (updated)
- src/config/email.ts (updated)
 
## Testing
- Unit tests: 23/23 passing
- Integration tests: 8/8 passing
- Manual testing completed
 
## Notes
- Email sending uses SendGrid
- Token expiry set to 24 hours
- Rate limit may need tuning in production

CLI Command:

conduct run create auth-run.md --spec-id AUTH-001

Phase 3: Check (Verification)

Purpose: Verify implementation meets acceptance criteria

When: After the implementation has been documented in a run

What to include:

  • Test results for each acceptance criterion
  • Manual verification notes
  • Pass/fail verdict
  • Next steps if failed

Example:

# Authentication System Verification
 
## Acceptance Criteria Results
 
### 1. Users can register with email/password
**Status**: PASS
- Tested with valid emails: Success
- Tested with invalid emails: Properly rejected
- Tested with weak passwords: Properly rejected
 
### 2. Users can login and receive JWT token
**Status**: PASS
- Valid credentials return JWT token
- Token contains correct user ID and email
- Token signature verified successfully
 
### 3. Passwords are hashed with bcrypt
**Status**: PASS
- Inspected database: passwords are hashed
- Verified bcrypt cost factor: 12
- Cannot reverse hash to original password
 
### 4. Rate limiting prevents brute force
**Status**: PASS
- 5 failed attempts trigger rate limit
- Rate limit expires after 15 minutes
- Correct error message returned
 
### 5. Email verification required before login
**Status**: PASS
- Unverified users cannot login
- Verification email sent on registration
- Verification link works correctly
 
## Overall Verdict
**PASS** - All acceptance criteria met.
 
## Production Readiness
- Ready for deployment
- Monitoring recommended for rate limit effectiveness

CLI Command:

conduct check create auth-check.md \
  --run-id RUN-001 \
  --result pass

Complete Workflow Example

Step 1: Create Specification

# Write the spec
cat > feature-spec.md << 'EOF'
# Dark Mode Toggle
 
## What
Add dark mode support to the application.
 
## Why
Users prefer dark interfaces in low-light environments.
 
## How
- CSS variables for theme colors
- Toggle button in header
- localStorage to persist preference
 
## Acceptance Criteria
- Toggle switches between light and dark
- Preference persists across sessions
- All pages support both themes
EOF
 
# Create the spec
conduct spec create feature-spec.md --spec-id DARK-001
 
# View it
conduct spec get DARK-001

Step 2: Implement

Build the feature, then document:

cat > feature-run.md << 'EOF'
# Dark Mode Implementation
 
## Changes Made
- Added CSS variables for colors
- Created ThemeProvider component
- Added toggle button to header
- Used localStorage for persistence
 
## Files Modified
- src/styles/variables.css (new)
- src/components/ThemeProvider.tsx (new)
- src/components/Header.tsx (updated)
- src/hooks/useTheme.ts (new)
 
## Testing
All pages tested in both modes.
EOF
 
conduct run create feature-run.md --spec-id DARK-001

Step 3: Verify

Test against acceptance criteria:

cat > feature-check.md << 'EOF'
# Dark Mode Verification
 
## Results
✓ Toggle switches between themes
✓ Preference persists
✓ All pages support both themes
 
## Verdict
All criteria met. Ready to ship.
EOF
 
conduct check create feature-check.md \
  --run-id RUN-001 \
  --result pass

Step 4: Review Memory

# List all specs
conduct spec list
 
# List runs for a spec
conduct run list --spec-id DARK-001
 
# List checks for a run
conduct check list --run-id RUN-001
 
# Get complete history
conduct list

Workflow Benefits

For AI Agents

  1. Clear Context: Specs provide implementation context
  2. Memory: All work is documented and searchable
  3. Verification: Built-in quality checks
  4. Reusability: Past specs inform future work

For Teams

  1. Audit Trail: Complete history of changes
  2. Collaboration: Shared understanding of work
  3. Quality: Systematic verification process
  4. Knowledge Base: Searchable implementation history

For Organizations

  1. Compliance: Documented decision-making
  2. Traceability: Link specs to implementations
  3. Review: Easy to review past decisions
  4. Reporting: Generate reports from structured data

Advanced Workflows

Linking Features

Link runs to features they implement:

conduct run link RUN-001 FEAT-42 FEAT-43

Updating Status

Update spec status as work progresses:

# Start work
conduct spec update SPEC-001 --status in_progress
 
# Complete
conduct spec update SPEC-001 --status completed
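
To confirm the change took effect, fetch the spec again with the same get command shown earlier (SPEC-001 is a placeholder ID):

# Confirm the new status
conduct spec get SPEC-001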

Failed Checks

When checks fail, document the issues:

cat > check-fail.md << 'EOF'
# Verification Failed
 
## Failed Criteria
- Rate limiting not working correctly
- Email verification link expired
 
## Issues Found
1. Rate limit window too short
2. Email token TTL hardcoded
 
## Next Steps
- Adjust rate limit configuration
- Make email TTL configurable
EOF
 
conduct check create check-fail.md \
  --run-id RUN-001 \
  --result fail
 
# Create a follow-up spec for the fix
conduct spec create fix-spec.md --spec-id AUTH-002
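
From there the loop repeats: implement the fix, document it as a run against the follow-up spec, and re-check. A minimal sketch, assuming the fix lands as a new run (the file names and RUN-002 are placeholders):

# Document the fix as a run against the follow-up spec
conduct run create fix-run.md --spec-id AUTH-002

# Re-verify once the issues are resolved (RUN-002 stands in for the new run's ID)
conduct check create recheck.md \
  --run-id RUN-002 \
  --result pass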

Dry Run Mode

Preview all changes before executing:

# Preview spec creation
conduct spec create spec.md --dry-run
 
# Preview run creation
conduct run create run.md --spec-id SPEC-001 --dry-run
 
# Review output, then execute without --dry-run
conduct run create run.md --spec-id SPEC-001

Best Practices

  1. Write specs before coding - Clear thinking leads to better code
  2. Keep specs concise - Focus on what and why, not implementation details
  3. Document as you go - Write runs immediately after implementing
  4. Be honest in checks - Failed checks are opportunities to improve
  5. Link related work - Connect specs, runs, and features (see the sketch after this list)
  6. Use descriptive IDs - AUTH-001, DARK-001 are better than SPEC-001
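
For practice 5, a short traceability pass might look like this, using only commands covered above (the run, spec, and feature IDs are placeholders):

# Attach the run to the features it implements
conduct run link RUN-001 FEAT-42 FEAT-43

# Walk the chain later: spec -> runs -> checks
conduct run list --spec-id AUTH-001
conduct check list --run-id RUN-001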

Next Steps