- Add new versioned documentation for the RA.Aid project.
- Include installation instructions, quick-start guides, and Markdown feature examples.
- Add configuration files for Docusaurus setup.
- Introduce new images and logos for branding.
- Create a sidebar for better navigation in documentation.
- Implement a .gitignore file for the docs directory to exclude unnecessary files.
feat(docs): add SVG illustrations for Docusaurus documentation to enhance visual appeal
feat(docs): create tsconfig.json for improved TypeScript support in Docusaurus
fix(pyproject.toml): update dependencies to latest versions for better compatibility and features
fix(__main__.py): improve expert provider selection logic based on available API keys
feat(llm.py): implement a function to fetch available OpenAI models and select the expert model (see the sketch after this block)
fix(file_listing.py): add a hidden-files option to file listing and improve error handling
fix(deepseek_chat.py): add timeout and max_retries parameters to ChatDeepseekReasoner initialization
fix(version.py): bump version to 0.14.1 for release readiness
feat(models_params.py): add default_temperature to model parameters for consistency and configurability
refactor(interactive.py): enhance run_interactive_command to accept an expected runtime and improve output capture
fix(prompts.py): update instructions to clarify file modification methods
refactor(provider_strategy.py): streamline expert model selection logic for clarity and maintainability
chore(tool_configs.py): update tool imports to reflect changes in write_file functionality
refactor(agent.py): enhance LLM initialization to include temperature and improve error handling
feat(memory.py): normalize file paths in emit_related_files to prevent duplicates
feat(programmer.py): add get_aider_executable function to retrieve the aider executable path
test: add comprehensive tests for new features and refactor existing tests for clarity and coverage
feat(main.py): add --experimental-fallback-handler argument to enable fallback handler
fix(agent_utils.py): modify init_fallback_handler to check for experimental fallback handler flag
fix(config.py): increase DEFAULT_MAX_TOOL_FAILURES to allow more retries before failure
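
A minimal sketch of the key-based expert provider selection and OpenAI model discovery described above; the function names, the preferred-model list, and the `EXPERT_*` environment variables are illustrative assumptions rather than the project's actual implementation, and the snippet assumes the `openai>=1.0` SDK.

```python
import os

from openai import OpenAI  # assumes the openai>=1.0 SDK is installed


def select_expert_provider() -> str | None:
    """Pick an expert provider based on which API keys are present (illustrative)."""
    if os.environ.get("EXPERT_OPENAI_API_KEY") or os.environ.get("OPENAI_API_KEY"):
        return "openai"
    if os.environ.get("EXPERT_ANTHROPIC_API_KEY") or os.environ.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    return None


def select_expert_model(preferred: tuple[str, ...] = ("o1", "o1-preview", "gpt-4o")) -> str | None:
    """List the models available to this OpenAI key and return the first preferred match."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    available = {model.id for model in client.models.list()}
    for name in preferred:
        if name in available:
            return name
    return None
```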
* feat: add a web UI to run RA.Aid from a browser
* feat: start the web UI from the ra-aid command (see the sketch after this list)
* feat: update the README for the web UI
* feat: add an ADR for the web UI
* feat: mark the web UI as an alpha feature
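
The entries above do not show how the web UI hooks into the CLI; below is a hypothetical sketch of wiring a `--webui` flag through argparse. The flag names, the default port, and the `run_webui` helper are assumptions, not the actual RA.Aid interface.

```python
import argparse


def run_webui(port: int) -> None:
    """Placeholder for the alpha web UI server (hypothetical helper)."""
    raise NotImplementedError


def main(argv: list[str] | None = None) -> None:
    parser = argparse.ArgumentParser(prog="ra-aid")
    # Hypothetical flags; the real CLI options may differ.
    parser.add_argument("--webui", action="store_true",
                        help="start the browser-based UI (alpha feature)")
    parser.add_argument("--webui-port", type=int, default=8080,
                        help="port for the web UI server")
    args = parser.parse_args(argv)
    if args.webui:
        run_webui(port=args.webui_port)
```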
* feat: add research and planner provider/model options so research and planning tasks can be configured independently (see the sketch after this list)
refactor: create get_effective_model_config function to streamline provider/model resolution logic
test: add unit tests for effective model configuration and environment validation for research and planner providers
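
A sketch of how per-task research and planner options might resolve against the top-level provider and model; the flag names and the `effective_config` helper shown here are assumptions for illustration (the project's own resolution helper is reworked and later removed in the entries that follow).

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="ra-aid")
    parser.add_argument("--provider", default="anthropic")
    parser.add_argument("--model")
    # Hypothetical per-task overrides; omitted values fall back to --provider/--model.
    parser.add_argument("--research-provider")
    parser.add_argument("--research-model")
    parser.add_argument("--planner-provider")
    parser.add_argument("--planner-model")
    return parser


def effective_config(args: argparse.Namespace, task: str) -> tuple[str, str | None]:
    """Resolve the provider/model pair for a task, falling back to the defaults."""
    provider = getattr(args, f"{task}_provider") or args.provider
    model = getattr(args, f"{task}_model") or args.model
    return provider, model


# Example: resolve the research configuration.
args = build_parser().parse_args(["--provider", "openai", "--research-model", "o1"])
print(effective_config(args, "research"))  # ('openai', 'o1')
```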
* refactor(agent_utils.py): remove get_effective_model_config function to simplify code and improve readability
style(agent_utils.py): format debug log statements for better readability
fix(agent_utils.py): update run_agent functions to directly use config without effective model config
feat(agent_utils.py): enhance logging for command execution in programmer.py
test(tests): remove tests related to get_effective_model_config function as it has been removed
* chore(tests): remove outdated tests for research and planner agent configurations to clean up the test suite and improve maintainability
* style(tests): apply consistent formatting and spacing in test_provider_integration.py for improved readability and maintainability
* chore: Add DeepSeek provider environment variable support in env.py
* feat: Add DeepSeek provider validation strategy in provider_strategy.py
* feat: Add support for DEEPSEEK provider in initialize_llm function
* feat: Create ChatDeepseekReasoner for custom handling of R1 models
* feat: Configure custom OpenAI client for DeepSeek API integration (see the sketch after this list)
* chore: Remove unused json import from deepseek_chat.py
* refactor: Simplify invocation_params and update acompletion_with_retry method
* feat: Override _generate to ensure message alternation in DeepseekReasoner
* feat: Add support for ChatDeepseekReasoner in LLM initialization
* feat: Use custom ChatDeepseekReasoner for DeepSeek models in OpenRouter
* fix: Remove redundant condition for DeepSeek model initialization
* feat: Add DeepSeek support for expert model initialization in llm.py
* feat: Add DeepSeek model handling for OpenRouter in expert LLM initialization
* fix: Update model name checks for DeepSeek and OpenRouter providers
* refactor: Extract common logic for LLM initialization into reusable methods
* test: Add unit tests for DeepSeek and OpenRouter functionality
* test: Refactor tests to match updated LLM initialization and helpers
* fix: Import missing helper functions to resolve NameError in tests
* fix: Resolve NameError and improve environment variable fallback logic
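
The DeepSeek API is OpenAI-compatible, and R1 ("reasoner") models expect strictly alternating user/assistant turns. Below is a hedged sketch of both ideas: a client pointed at DeepSeek's base URL and a helper that merges consecutive same-role messages. It is not the project's ChatDeepseekReasoner class, and the merging strategy is only one possible approach.

```python
import os

from openai import OpenAI  # DeepSeek exposes an OpenAI-compatible API


def make_deepseek_client() -> OpenAI:
    """Build an OpenAI client that talks to the DeepSeek endpoint."""
    return OpenAI(
        api_key=os.environ["DEEPSEEK_API_KEY"],
        base_url="https://api.deepseek.com",
    )


def enforce_alternation(messages: list[dict]) -> list[dict]:
    """Merge consecutive messages with the same role so user/assistant turns alternate."""
    merged: list[dict] = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append(dict(msg))
    return merged


# Example call against an R1-style model (model name is illustrative):
# client = make_deepseek_client()
# response = client.chat.completions.create(
#     model="deepseek-reasoner",
#     messages=enforce_alternation(history),
# )
```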
* feat(readme): add DeepSeek API key requirements to documentation for better clarity on environment variables
feat(main.py): include DeepSeek as a supported provider in argument parsing for enhanced functionality
feat(deepseek_chat.py): implement ChatDeepseekReasoner class for handling DeepSeek reasoning models
feat(llm.py): add DeepSeek client creation logic to support DeepSeek models in the application
feat(models_tokens.py): define token limits for DeepSeek models to manage resource allocation
fix(provider_strategy.py): correct validation logic for DeepSeek environment variables to ensure proper configuration (see the sketch after this block)
chore(memory.py): refactor the global memory structure for better readability and maintainability
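
A minimal sketch of the environment validation and key-fallback logic for the DeepSeek provider described above; the expert-key variable name and the error wording are assumptions.

```python
import os
import sys


def validate_deepseek_environment() -> None:
    """Fail fast with a readable error when the DeepSeek key is missing (illustrative)."""
    if not os.environ.get("DEEPSEEK_API_KEY"):
        sys.exit("DEEPSEEK_API_KEY is not set; export it before selecting the DeepSeek provider.")


def expert_deepseek_key() -> str | None:
    """Prefer an expert-specific key and fall back to the base key (hypothetical variable name)."""
    return os.environ.get("EXPERT_DEEPSEEK_API_KEY") or os.environ.get("DEEPSEEK_API_KEY")
```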
* test: Add unit tests for argument parsing in __main__.py
* test: Update tests to remove invalid argument and improve error handling
* test: Fix test_missing_message to handle missing argument cases correctly
* test: Fix test_missing_message to reflect argument parsing behavior
* test: Combine recursion limit tests and verify global config updates
* fix: Include recursion_limit in config for recursion limit tests
* test: Mock dependencies and validate recursion limit in global config
* test: Remove commented-out code and clean up test_main.py
* test: Remove self-evident comments and improve test assertions in test_main.py
* fix: Mock user input and handle temperature in global config tests
* fix: Fix test failures by correcting mock targets and handling temperature
* test: Update temperature validation to check argument passing to initialize_llm (see the sketch after this list)
* fix: Correct mock for ask_human and access kwargs in temperature test
* fix: Patch the entire ask_human function in test_chat_mode_implies_hil
* docs: Add recursion limit option to README documentation
* docs: Update README.md with all available command line arguments
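
A hedged sketch of the temperature-forwarding test described above, using unittest.mock patching; the module path `ra_aid.__main__`, the `main` entry point, the CLI flags, and the patched names are assumptions about the project layout.

```python
from unittest.mock import patch

from ra_aid.__main__ import main  # assumed entry point


@patch("ra_aid.__main__.ask_human")
@patch("ra_aid.__main__.initialize_llm")
def test_temperature_passed_to_initialize_llm(mock_init_llm, mock_ask_human):
    mock_ask_human.return_value = "exit"
    main(["-m", "hello", "--temperature", "0.3"])
    # The parsed temperature should be forwarded to initialize_llm as a keyword argument.
    _, kwargs = mock_init_llm.call_args
    assert kwargs.get("temperature") == 0.3
```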
* feat(config): add DEFAULT_RECURSION_LIMIT constant to set default recursion depth
feat(main.py): add --recursion-limit argument to configure the maximum recursion depth for agent operations (see the sketch after this block)
fix(main.py): validate that the recursion limit is positive before processing
refactor(main.py): use args.recursion_limit in the agent configuration instead of a hardcoded value
refactor(agent_utils.py): update the agent configuration to use the recursion limit from global memory or the default value
refactor(run_research_agent): clean up comments and improve readability
refactor(run_web_research_agent): clean up comments and improve readability
refactor(run_planning_agent): clean up comments and improve readability
refactor(run_task_implementation_agent): clean up comments and improve readability
delete(test_main.py): remove obsolete test for chat mode and HIL configuration
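
A sketch of how the recursion-limit plumbing described above could look: a default constant, a positive-value check at parse time, and use of the configured value when building the agent config. The constant's value and the config key name are assumptions.

```python
import argparse

DEFAULT_RECURSION_LIMIT = 100  # assumed default; the real constant lives in config.py


def positive_int(value: str) -> int:
    """argparse type that rejects zero and negative recursion limits."""
    number = int(value)
    if number <= 0:
        raise argparse.ArgumentTypeError("recursion limit must be positive")
    return number


parser = argparse.ArgumentParser(prog="ra-aid")
parser.add_argument("--recursion-limit", type=positive_int,
                    default=DEFAULT_RECURSION_LIMIT,
                    help="maximum recursion depth for agent operations")
args = parser.parse_args(["--recursion-limit", "50"])

# The agent config then uses the parsed value instead of a hardcoded one.
agent_config = {"recursion_limit": args.recursion_limit}
print(agent_config)  # {'recursion_limit': 50}
```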
- Add comprehensive usage documentation and examples to README
- Add warning about shell command execution risks
- Improve CLI with --message and --research-only flags
- Add environment validation for API keys (see the sketch below)
- Replace requirements.txt with pyproject.toml
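
A minimal sketch of the API-key environment validation mentioned above; the provider-to-key mapping shown here is an assumption, and the real checks live in the project's environment validation module.

```python
import os
import sys

# Assumed provider-to-key mapping; the real requirements may differ.
REQUIRED_KEYS = {
    "anthropic": ["ANTHROPIC_API_KEY"],
    "openai": ["OPENAI_API_KEY"],
    "openrouter": ["OPENROUTER_API_KEY"],
    "deepseek": ["DEEPSEEK_API_KEY"],
}


def validate_environment(provider: str) -> None:
    """Exit with a readable error if the selected provider's key is missing."""
    missing = [key for key in REQUIRED_KEYS.get(provider, []) if not os.environ.get(key)]
    if missing:
        sys.exit(f"Missing required environment variables for {provider}: {', '.join(missing)}")
```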