- Introduce new Agent audit functionality for autonomous code security analysis and vulnerability verification.
- Add API endpoints for managing Agent tasks and configurations (see the sketch after this list).
- Implement UI components for Agent mode selection and embedding model configuration.
- Enhance the architecture with RAG (Retrieval-Augmented Generation) for improved code semantic search.
- Create a sandbox environment for secure execution of vulnerability tests.
- Update documentation to include details on the new Agent audit features and usage instructions.
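The task-management endpoints could look roughly like the sketch below, assuming a FastAPI backend; the route prefix, AgentTaskRequest model, and in-memory store are illustrative placeholders, not the project's actual API.

```python
# Hypothetical sketch of the Agent task endpoints (FastAPI assumed).
from uuid import uuid4

from fastapi import APIRouter, HTTPException
from pydantic import BaseModel

router = APIRouter(prefix="/api/agent")

class AgentTaskRequest(BaseModel):
    target_path: str        # code path to audit
    mode: str = "audit"     # e.g. "audit" or "verify"

_tasks: dict[str, dict] = {}  # in-memory store, for illustration only

@router.post("/tasks")
def create_task(req: AgentTaskRequest) -> dict:
    task_id = str(uuid4())
    _tasks[task_id] = {"id": task_id, "status": "pending", **req.model_dump()}
    return _tasks[task_id]

@router.get("/tasks/{task_id}")
def get_task(task_id: str) -> dict:
    if task_id not in _tasks:
        raise HTTPException(status_code=404, detail="task not found")
    return _tasks[task_id]
```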
- Remove redundant check for CUSTOM_BASE_URL_PROVIDERS
- Consolidate model name prefix logic into a single code path (sketched below)
- Move prefix retrieval after model name validation
- Improve code clarity by eliminating unnecessary conditional branches
- Maintain backward compatibility with existing model name formats
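A minimal sketch of the consolidated logic, assuming a hypothetical PROVIDER_PREFIXES mapping and build_model_name helper; the real names and mapping will differ.

```python
# Illustrative only: one code path that applies the provider prefix
# after the model name has been validated.
PROVIDER_PREFIXES = {"zhipu": "openai/", "moonshot": "openai/"}  # hypothetical mapping

def build_model_name(provider: str, model: str) -> str:
    if not model:
        raise ValueError("model name must not be empty")
    if "/" in model:
        # Already prefixed: keep backward compatibility with existing formats.
        return model
    # Prefix retrieval happens only after validation, in a single branch.
    return PROVIDER_PREFIXES.get(provider, "") + model
```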
- Add a debug print statement to log the custom API base URL when configured (see the sketch below)
- Improve troubleshooting and visibility into LiteLLM adapter initialization
- Help developers verify the correct API endpoint configuration at runtime
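The debug output might look roughly like this; the class shape and message text are assumptions for illustration only.

```python
# Sketch: surface the custom API base URL while the LiteLLM adapter initializes.
class LiteLLMAdapter:  # simplified stand-in, not the real adapter
    def __init__(self, api_key: str, api_base: str | None = None):
        self.api_key = api_key
        self.api_base = api_base
        if self.api_base:
            # Lets developers confirm at runtime which endpoint will be called.
            print(f"[LiteLLM] Using custom API base URL: {self.api_base}")
```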
- Bypass the LLMFactory cache during connection tests to ensure fresh API calls with the latest configuration
- Directly instantiate native adapters (Baidu, MiniMax, Doubao) and the LiteLLM adapter based on provider type
- Add comprehensive error handling in the LiteLLM adapter, with specific exception catching for authentication, rate limiting, and connection errors (sketched after this list)
- Implement user-friendly error messages for common failure scenarios (invalid API key, authentication failure, timeout, connection issues)
- Add response validation to detect and report empty API responses
- Disable LiteLLM internal caching to guarantee actual API calls during testing
- Update the available models list with the latest 2025 models across all providers (Gemini, OpenAI, Claude, Qwen, DeepSeek, etc.)
- Improve error message clarity and debugging information in config endpoint
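A rough sketch of the connection-test flow described above, built directly on litellm's completion call and exception types; the test_connection helper, its parameters, and the message wording are assumptions rather than the project's actual code.

```python
# Sketch: a fresh connection test that skips any adapter cache and maps
# common litellm failures to user-friendly messages.
import litellm

def test_connection(model: str, api_key: str, api_base: str | None = None,
                    timeout: int = 30, max_tokens: int = 16) -> dict:
    try:
        response = litellm.completion(
            model=model,
            messages=[{"role": "user", "content": "ping"}],
            api_key=api_key,
            api_base=api_base,
            timeout=timeout,
            max_tokens=max_tokens,
        )
        content = response.choices[0].message.content
        if not content:
            return {"ok": False, "error": "The API returned an empty response."}
        return {"ok": True}
    except litellm.exceptions.AuthenticationError:
        return {"ok": False, "error": "Authentication failed: check your API key."}
    except litellm.exceptions.RateLimitError:
        return {"ok": False, "error": "Rate limit reached: please try again later."}
    except litellm.exceptions.Timeout:
        return {"ok": False, "error": "The request timed out: check the network or raise the timeout."}
    except litellm.exceptions.APIConnectionError:
        return {"ok": False, "error": "Could not connect to the API endpoint."}
```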
- Replace individual adapter implementations (OpenAI, Claude, Gemini, DeepSeek, Qwen, Zhipu, Moonshot, Ollama) with a unified LiteLLM adapter
- Keep native adapters for providers with special API formats (Baidu, MiniMax, Doubao)
- Update the LLM factory to route requests through LiteLLM for supported providers (see the routing sketch after this list)
- Add test-llm endpoint to validate LLM connections with configurable timeout and token limits
- Add get-llm-providers endpoint to retrieve supported providers and their configurations
- Update config.py to ignore extra environment variables (VITE_* frontend variables), as sketched below
- Refactor Baidu adapter to use new complete() method signature and improve error handling
- Update pyproject.toml dependencies to include litellm package
- Update env.example with new configuration options
- Simplify adapter initialization and reduce code duplication across multiple provider implementations
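A simplified sketch of the routing described above: providers with special API formats keep dedicated adapters, while everything else shares the LiteLLM-backed implementation. All class names here are stand-ins, not the project's actual classes.

```python
# Illustrative factory routing between native adapters and the unified LiteLLM adapter.
class LiteLLMAdapter:
    def __init__(self, provider: str, api_key: str, model: str, api_base: str | None = None):
        self.provider, self.api_key, self.model, self.api_base = provider, api_key, model, api_base

class BaiduAdapter:
    def __init__(self, api_key: str, model: str):
        self.api_key, self.model = api_key, model

class MiniMaxAdapter(BaiduAdapter):
    pass

class DoubaoAdapter(BaiduAdapter):
    pass

# Providers with special API formats keep their native adapters.
NATIVE_ADAPTERS = {"baidu": BaiduAdapter, "minimax": MiniMaxAdapter, "doubao": DoubaoAdapter}

def create_adapter(provider: str, api_key: str, model: str, api_base: str | None = None):
    if provider in NATIVE_ADAPTERS:
        return NATIVE_ADAPTERS[provider](api_key=api_key, model=model)
    # OpenAI, Claude, Gemini, DeepSeek, Qwen, Zhipu, Moonshot, Ollama, ...
    # all route through the single LiteLLM-backed adapter.
    return LiteLLMAdapter(provider=provider, api_key=api_key, model=model, api_base=api_base)
```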
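If config.py is built on pydantic-settings (an assumption here), ignoring unknown environment variables such as the VITE_* frontend ones comes down to setting extra="ignore" on the settings model:

```python
# Sketch: tolerate unrelated VITE_* variables in the shared .env file
# (assumes config.py uses pydantic-settings).
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    # Backend settings are declared as usual; unknown variables no longer
    # raise a validation error when the settings object is created.
    llm_provider: str = "openai"
```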