- Add database migrations for agent checkpoints and tree node tracking
- Implement core agent execution framework with executor, state management, and message handling
- Create knowledge base system with framework-specific modules (Django, FastAPI, Flask, Express, React, Supabase)
- Add vulnerability knowledge modules covering authentication, cryptography, injection, XSS, XXE, SSRF, path traversal, deserialization, and race conditions
- Introduce new agent tools: thinking tool, reporting tool, and agent-specific utilities
- Implement LLM memory compression and prompt caching for improved performance
- Add agent registry and persistence layer for checkpoint management (a minimal persistence sketch follows this list)
- Refactor agent implementations (analysis, recon, verification, orchestrator) with enhanced capabilities
- Remove legacy agent implementations (analysis_v2, react_agent)
- Update API endpoints for agent task creation and project management
- Add frontend components for agent task creation and enhanced audit UI
- Consolidate agent service architecture with improved separation of concerns
- This refactoring provides a scalable foundation for multi-agent collaboration with knowledge-driven decision making and state persistence
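
A minimal sketch of what the checkpoint persistence layer could look like; `AgentCheckpoint`, `CheckpointStore`, and the `agent_checkpoints` schema are illustrative assumptions, not the actual implementation:

```python
# Illustrative sketch only: AgentCheckpoint, CheckpointStore, and the table
# schema are hypothetical names, not the actual implementation.
import json
import sqlite3
import time
from dataclasses import dataclass, field

@dataclass
class AgentCheckpoint:
    agent_id: str
    step: int
    state: dict = field(default_factory=dict)   # serialized agent state
    created_at: float = field(default_factory=time.time)

class CheckpointStore:
    """Persists agent checkpoints so a run can resume after a crash."""

    def __init__(self, path: str = "checkpoints.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS agent_checkpoints ("
            "agent_id TEXT, step INTEGER, state TEXT, created_at REAL, "
            "PRIMARY KEY (agent_id, step))"
        )

    def save(self, cp: AgentCheckpoint) -> None:
        self.conn.execute(
            "INSERT OR REPLACE INTO agent_checkpoints VALUES (?, ?, ?, ?)",
            (cp.agent_id, cp.step, json.dumps(cp.state), cp.created_at),
        )
        self.conn.commit()

    def latest(self, agent_id: str) -> AgentCheckpoint | None:
        row = self.conn.execute(
            "SELECT agent_id, step, state, created_at FROM agent_checkpoints "
            "WHERE agent_id = ? ORDER BY step DESC LIMIT 1",
            (agent_id,),
        ).fetchone()
        if row is None:
            return None
        return AgentCheckpoint(row[0], row[1], json.loads(row[2]), row[3])
```

With a store like this, resuming an interrupted run reduces to loading `store.latest(agent_id)` and replaying from that step.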
- Remove redundant check for CUSTOM_BASE_URL_PROVIDERS
- Consolidate model name prefix logic into a single code path (sketched after this list)
- Move prefix retrieval after model name validation
- Improve code clarity by eliminating unnecessary conditional branches
- Maintain backward compatibility with existing model name formats
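
A hedged sketch of the single code path described above; `resolve_model_name` and `PROVIDER_PREFIXES` are assumed names, and only `CUSTOM_BASE_URL_PROVIDERS` is named in the change itself:

```python
# Sketch of the consolidated prefix logic; resolve_model_name and
# PROVIDER_PREFIXES are assumed names. Only CUSTOM_BASE_URL_PROVIDERS
# appears in the change itself.
PROVIDER_PREFIXES = {"openai": "openai", "qwen": "openai", "zhipu": "openai"}
CUSTOM_BASE_URL_PROVIDERS = {"qwen", "zhipu"}  # no longer special-cased

def resolve_model_name(provider: str, model: str) -> str:
    # Validate first, then retrieve the prefix: one code path for every
    # provider instead of a separate branch for custom-base-URL providers.
    if not model:
        raise ValueError(f"empty model name for provider {provider!r}")
    if "/" in model:          # already prefixed: pass through unchanged,
        return model          # preserving existing model name formats
    prefix = PROVIDER_PREFIXES.get(provider)
    return f"{prefix}/{model}" if prefix else model
```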
- Add a debug print statement to log the custom API base URL when configured (see the snippet below)
- Improve troubleshooting and visibility into LiteLLM adapter initialization
- Help developers verify the correct API endpoint configuration at runtime
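
The added diagnostic presumably looks something like this minimal sketch (the attribute names are assumptions):

```python
# Hypothetical sketch: the attribute names (provider, api_base) are assumptions.
class LiteLLMAdapter:
    def __init__(self, provider: str, api_base: str | None = None):
        self.provider = provider
        self.api_base = api_base
        if self.api_base:  # only fires when a custom base URL is configured
            print(f"[LiteLLM] provider={self.provider} "
                  f"custom api_base={self.api_base}")
```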
- Bypass LLMFactory cache during connection tests to ensure fresh API calls with latest configuration
- Directly instantiate native adapters (Baidu, Minimax, Doubao) and LiteLLM adapter based on provider type
- Add comprehensive error handling in the LiteLLM adapter, catching specific exceptions for authentication, rate limiting, and connection errors (see the sketch after this list)
- Implement user-friendly error messages for common failure scenarios (invalid API key, authentication failure, timeout, connection issues)
- Add response validation to detect and report empty API responses
- Disable LiteLLM internal caching to guarantee actual API calls during testing
- Update the available-models list with the latest 2025 models across all providers (Gemini, OpenAI, Claude, Qwen, DeepSeek, etc.)
- Improve error message clarity and debugging information in config endpoint
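
A hedged sketch of the connection-test flow; `litellm.exceptions` does provide these exception classes, but the function name, response messages, and endpoint wiring here are assumptions:

```python
# Hedged sketch of the connection test; the exception classes come from
# litellm.exceptions, but the function name and messages are assumptions.
import litellm
from litellm.exceptions import (
    APIConnectionError, AuthenticationError, RateLimitError, Timeout,
)

def test_llm_connection(model: str, api_key: str, api_base: str | None = None,
                        timeout: float = 15.0, max_tokens: int = 8) -> dict:
    try:
        resp = litellm.completion(
            model=model,
            api_key=api_key,
            api_base=api_base,
            messages=[{"role": "user", "content": "ping"}],
            max_tokens=max_tokens,
            timeout=timeout,
            caching=False,  # disable LiteLLM's cache: force a real API call
        )
    except AuthenticationError:
        return {"ok": False, "error": "Authentication failed: check your API key."}
    except RateLimitError:
        return {"ok": False, "error": "Rate limited: try again later."}
    except Timeout:
        return {"ok": False, "error": f"Request timed out after {timeout}s."}
    except APIConnectionError:
        return {"ok": False, "error": "Could not reach the API endpoint."}
    # Empty-response validation: some providers return 200 with no content.
    content = resp.choices[0].message.content
    if not content:
        return {"ok": False, "error": "Provider returned an empty response."}
    return {"ok": True, "model": model}
```

Calling `litellm.completion` directly here mirrors the factory-cache bypass: no cached adapter lookup, so the test always exercises the latest saved configuration.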
- Replace individual adapter implementations (OpenAI, Claude, Gemini, DeepSeek, Qwen, Zhipu, Moonshot, Ollama) with a unified LiteLLM adapter (sketched after this list)
- Keep native adapters for providers with special API formats (Baidu, MiniMax, Doubao)
- Update LLM factory to route requests through LiteLLM for supported providers
- Add test-llm endpoint to validate LLM connections with configurable timeout and token limits
- Add get-llm-providers endpoint to retrieve supported providers and their configurations
- Update config.py to ignore extra environment variables (VITE_* frontend variables)
- Refactor Baidu adapter to use new complete() method signature and improve error handling
- Update pyproject.toml dependencies to include the litellm package
- Update env.example with new configuration options
- Simplify adapter initialization and reduce code duplication across multiple provider implementations
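
A minimal sketch of the unified-adapter idea; the class names and `complete()` signature mirror the bullets above but are assumptions, not the real code:

```python
# Sketch only: LiteLLMAdapter, BaiduAdapter, and make_adapter are assumed
# names mirroring the bullets above, not the actual implementation.
import litellm

class LiteLLMAdapter:
    """One adapter for every provider LiteLLM supports (OpenAI, Claude,
    Gemini, DeepSeek, Qwen, Zhipu, Moonshot, Ollama, ...)."""

    def __init__(self, provider: str, api_key: str, api_base: str | None = None):
        self.provider = provider
        self.api_key = api_key
        self.api_base = api_base

    def complete(self, model: str, messages: list[dict], **kwargs) -> str:
        resp = litellm.completion(
            model=f"{self.provider}/{model}",  # LiteLLM routes by prefix
            messages=messages,
            api_key=self.api_key,
            api_base=self.api_base,
            **kwargs,
        )
        return resp.choices[0].message.content

class BaiduAdapter:
    """Stand-in for a kept native adapter; real implementation lives elsewhere."""
    def __init__(self, **cfg): ...
    def complete(self, model, messages, **kwargs): ...

NATIVE_ADAPTERS = {"baidu": BaiduAdapter}  # plus MiniMax and Doubao

def make_adapter(provider: str, **cfg):
    # Native adapters are kept for providers with special API formats;
    # everything else routes through the single LiteLLM adapter.
    if provider in NATIVE_ADAPTERS:
        return NATIVE_ADAPTERS[provider](**cfg)
    return LiteLLMAdapter(provider, **cfg)
```

Collapsing eight per-provider adapters into one class is what delivers the reduced duplication noted above: provider differences shrink to a model-name prefix and an optional base URL.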