# Troubleshooting Guide

This guide helps you resolve common issues when using flujo.
## Installation Issues

### 1. Package Installation Fails

**Symptoms:**

- `pip install` fails with dependency errors
- Version conflicts
- Missing system dependencies
**Solutions:**

1. Ensure Python 3.11+ is installed:

   ```bash
   python --version
   ```

2. Create a fresh virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate    # Linux/macOS
   # .\venv\Scripts\activate   # Windows
   ```

3. Upgrade pip:

   ```bash
   pip install --upgrade pip
   ```

4. Install with verbose output:

   ```bash
   pip install -v flujo
   ```

5. Check system dependencies:

   ```bash
   # For Ubuntu/Debian
   sudo apt-get update
   sudo apt-get install python3-dev
   ```
### 2. Development Installation Issues

**Symptoms:**

- `make pip-dev` fails
- Editable install doesn't work
- Missing development dependencies

**Solutions:**

1. Install development dependencies:

   ```bash
   pip install -e ".[dev]"
   ```

2. Check the Makefile:

   ```bash
   make -n pip-dev  # Show commands without executing
   ```

3. Install build tools:

   ```bash
   pip install build wheel
   ```
## Configuration Issues

### 1. API Key Problems

**Symptoms:**

- Authentication errors
- Rate limit errors
- Model not found errors

**Solutions:**

1. Verify API keys in `.env`:

   ```bash
   grep API_KEY .env
   ```

2. Check environment variables:

   ```python
   import os
   print(os.getenv("OPENAI_API_KEY"))
   ```

3. Test the API key directly:

   ```python
   from openai import OpenAI

   client = OpenAI()
   client.models.list()
   ```
### 2. Model Configuration

**Symptoms:**

- Model not available
- Wrong model version
- Performance issues

**Solutions:**

1. Check model availability:

   ```python
   from flujo import list_available_models

   print(list_available_models())
   ```

2. Verify the model configuration:

   ```python
   from flujo.recipes.factories import make_default_pipeline

   pipeline = make_default_pipeline(
       review_agent=review_agent,
       solution_agent=solution_agent,
       validator_agent=validator_agent,
       model="openai:gpt-4",
       temperature=0.7,
   )
   print(pipeline)
   ```
## Runtime Issues

### 1. Pipeline Errors

**Symptoms:**

- Pipeline fails to start
- Steps fail unexpectedly
- Wrong output format

**Solutions:**

1. Enable debug logging:

   ```python
   import logging

   logging.basicConfig(level=logging.DEBUG)
   ```

2. Check the step configuration:

   ```python
   from flujo import Step, Flujo

   pipeline = (
       Step.review(make_review_agent())
       >> Step.solution(make_solution_agent())
       >> Step.validate(make_validator_agent())
   )

   # Print pipeline structure
   print(pipeline.structure())
   ```

3. Test steps individually:

   ```python
   # Test the review step on its own
   result = make_review_agent().run("Test prompt")
   print(result)
   ```
### 2. Tool Errors

**Symptoms:**

- Tool execution fails
- Wrong tool output
- Timeout errors

**Solutions:**

1. Check the tool configuration:

   ```python
   from pydantic_ai import Tool

   tool = Tool(my_function)
   print(tool.config)
   ```

2. Test the tool directly:

   ```python
   result = tool.run("test input")
   print(result)
   ```

3. Enable tool debugging:

   ```python
   tool = Tool(my_function, debug=True)
   ```
### 3. Performance Issues

**Symptoms:**

- Slow execution
- High memory usage
- Timeout errors

**Solutions:**

1. Profile execution:

   ```python
   from flujo import enable_profiling

   with enable_profiling():
       result = orchestrator.run("prompt")
   ```

2. Check memory usage:

   ```python
   import os

   import psutil

   process = psutil.Process(os.getpid())
   print(process.memory_info().rss / 1024 / 1024)  # MB
   ```

3. Optimize the configuration:

   ```python
   from flujo.recipes.factories import make_default_pipeline

   pipeline = make_default_pipeline(
       review_agent=review_agent,
       solution_agent=solution_agent,
       validator_agent=validator_agent,
       model="openai:gpt-4",
       max_tokens=1000,  # Limit token usage
       timeout=30,       # Set a reasonable timeout
       cache=True,       # Enable caching
   )
   ```
### 4. Usage Governor Breach

**Symptoms:**

- Pipeline stops with `UsageLimitExceededError`
- Unexpected cost or token limits being hit
- Inconsistent cost calculations

**Solutions:**

1. Increase or remove the limits:

   ```python
   # Remove all limits
   runner = Flujo(pipeline, usage_limits=None)

   # Increase limits
   limits = UsageLimits(
       total_cost_usd_limit=5.0,  # Increase from $1.00 to $5.00
       total_tokens_limit=10000,  # Increase from 5000 to 10000
   )
   runner = Flujo(pipeline, usage_limits=limits)
   ```

2. Check your cost configuration:

   ```toml
   # flujo.toml - verify the pricing is correct
   [cost.providers.openai.gpt-4o]
   prompt_tokens_per_1k = 0.005
   completion_tokens_per_1k = 0.015
   ```

3. Debug cost calculations:

   ```python
   import logging

   logging.getLogger("flujo.cost").setLevel(logging.DEBUG)

   # Run the pipeline to see detailed cost calculation logs
   result = runner.run("Your prompt")
   ```

4. Reduce costs by using cheaper models:

   ```python
   # Use cheaper models for cost-sensitive operations
   cheap_agent = make_agent_async("openai:gpt-3.5-turbo", "Simple task", str)
   expensive_agent = make_agent_async("openai:gpt-4o", "Complex task", str)

   # Use the cheap agent for simple tasks
   pipeline = Step.solution(cheap_agent) >> Step.validate(expensive_agent)
   ```

5. Set step-level limits for fine-grained control:

   ```python
   # Set different limits for different steps
   solution_limits = UsageLimits(total_cost_usd_limit=0.10)
   validation_limits = UsageLimits(total_cost_usd_limit=0.05)

   pipeline = (
       Step.solution(agent, usage_limits=solution_limits)
       >> Step.validate(validator, usage_limits=validation_limits)
   )
   ```

6. Monitor costs in real time:

   ```python
   def log_step_costs(step_result):
       print(f"{step_result.name}: ${step_result.cost_usd:.4f} "
             f"({step_result.token_counts} tokens)")

   # Track costs as they occur
   for step_result in result.step_history:
       log_step_costs(step_result)
   ```
## Telemetry Issues

### 1. Metrics Collection

**Symptoms:**

- Missing metrics
- Wrong metric values
- Export failures

**Solutions:**

1. Check the telemetry configuration:

   ```python
   from flujo.infra import init_telemetry

   init_telemetry(
       enable_export=True,
       export_endpoint="http://localhost:4317",
   )
   ```

2. Verify metrics:

   ```python
   from flujo import get_metrics

   metrics = get_metrics()
   print(metrics)
   ```

### 2. Tracing Issues

**Symptoms:**

- Missing traces
- Incomplete traces
- Export errors

**Solutions:**

1. Enable tracing:

   ```python
   from flujo import enable_tracing

   with enable_tracing():
       result = orchestrator.run("prompt")
   ```

2. Check the trace export:

   ```python
   from flujo import get_traces

   traces = get_traces()
   print(traces)
   ```
## Common Error Messages

### 1. Authentication Errors

```text
AuthenticationError: Invalid API key
```

**Solutions:**

1. Check the API key format
2. Verify the key is active
3. Ensure the key has the correct permissions
### 2. Model Errors

```text
ModelError: Model not found
```

**Solutions:**

1. Verify the model name
2. Check model availability
3. Update to the latest version
### 3. Validation Errors

```text
ValidationError: Invalid input
```

**Solutions:**

1. Check the input format
2. Verify required fields
3. Review validation rules
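flujo builds on pydantic, so validation failures carry structured error details. Catching `ValidationError` and inspecting `e.errors()` pinpoints the failing field; the `TaskInput` model below is a hypothetical schema for illustration:

```python
from pydantic import BaseModel, ValidationError

class TaskInput(BaseModel):  # hypothetical input schema
    prompt: str
    max_tokens: int

try:
    TaskInput(prompt="hello")  # max_tokens is missing
except ValidationError as e:
    # Each entry names the offending field and the kind of failure
    for err in e.errors():
        print(err["loc"], err["msg"])
```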
### 4. Timeout Errors

```text
TimeoutError: Operation timed out
```

**Solutions:**

1. Increase the timeout
2. Check the network
3. Optimize the request
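If a built-in timeout setting is not available for a particular call, you can impose one yourself. Since agent calls are async, `asyncio.wait_for` wraps any awaitable with a deadline; the `slow_call` coroutine below stands in for a real agent or tool call:

```python
import asyncio

async def slow_call() -> str:
    # Stand-in for a slow agent or tool call
    await asyncio.sleep(0.2)
    return "done"

async def main() -> str:
    try:
        # Raises asyncio.TimeoutError if the deadline expires first
        return await asyncio.wait_for(slow_call(), timeout=1.0)
    except asyncio.TimeoutError:
        return "timed out"

result = asyncio.run(main())
print(result)  # "done" with the generous 1.0s deadline
```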
### 5. TypeError: Step '...' returned a Mock object

This error almost always occurs during unit testing, when a mock agent is not configured to return a concrete value.

**Solution:** Set a return value on your mock agent:

```python
from unittest.mock import AsyncMock

agent = AsyncMock()
agent.run.return_value = "expected"
```
## Getting Help

### 1. Debugging Tools

```python
# Enable debug mode
import logging
logging.basicConfig(level=logging.DEBUG)

# Enable profiling
from flujo import enable_profiling
with enable_profiling():
    result = orchestrator.run("prompt")

# Get detailed error info
from flujo import get_error_details
details = get_error_details(error)
print(details)
```
### 2. Support Resources

- Documentation
  - Installation Guide
  - Usage Guide
  - API Reference
  - SQLite Backend Guide - for persistence and observability issues
- Community
  - GitHub Issues
- Development
  - Contributing Guide
  - Development Guide
### 3. Reporting Issues

When reporting an issue, include:

- Environment details:

  ```bash
  python --version
  pip freeze
  ```

- Error details:

  ```python
  import traceback
  print(traceback.format_exc())
  ```

- Reproduction steps:
  - Minimal code example
  - Expected vs. actual behavior
  - Relevant logs
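The environment details above can be collected in one step with only the standard library; paste the output into your report:

```python
import platform
import sys

def environment_report() -> str:
    """Gather basic environment details for a bug report."""
    lines = [
        f"python: {platform.python_version()}",
        f"implementation: {platform.python_implementation()}",
        f"platform: {platform.platform()}",
        f"executable: {sys.executable}",
    ]
    return "\n".join(lines)

print(environment_report())
```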
## Next Steps

- Read the Usage Guide
- Check Advanced Topics
- Explore Use Cases
- Review the Adapter Step recipe for data-shaping tips