This change introduces a new command-line option, `--test-cmd-timeout`, that lets users set a timeout (default: 300 seconds) for test command execution, preventing indefinite hangs during testing.
The codebase has been updated to use this timeout in the relevant areas, ensuring consistent behavior across the application.
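As a rough illustration of the behavior described above, a test command can be bounded with a `subprocess` timeout; the function below is a minimal sketch, not RA.Aid's actual implementation:

```python
import shlex
import subprocess

DEFAULT_TEST_CMD_TIMEOUT = 300  # seconds, matching the documented default


def run_test_cmd(test_cmd: str, timeout: int = DEFAULT_TEST_CMD_TIMEOUT) -> bool:
    """Run the configured test command, treating a timeout as a failure."""
    try:
        result = subprocess.run(shlex.split(test_cmd), timeout=timeout)
    except subprocess.TimeoutExpired:
        # The command exceeded --test-cmd-timeout; abort instead of hanging indefinitely.
        return False
    return result.returncode == 0
```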
* FEAT webui to run RA.Aid from a browser
* FEAT starting webui from ra-aid cmd
* FEAT updating readme
* FEAT adding ADR for webui
* FEAT marking webui as alpha feature
* feat: add research and planner provider/model options to enhance configurability for research and planning tasks
refactor: create get_effective_model_config function to streamline provider/model resolution logic
test: add unit tests for effective model configuration and environment validation for research and planner providers
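A minimal sketch of the provider/model resolution these options enable, assuming role-specific settings fall back to the main provider and model when unset (the key names are illustrative, and the `get_effective_model_config` helper itself was later removed, as noted below):

```python
def resolve_model(config: dict, role: str) -> tuple[str, str]:
    """Return the (provider, model) pair for a role such as "research" or "planner"."""
    provider = config.get(f"{role}_provider") or config["provider"]
    model = config.get(f"{role}_model") or config["model"]
    return provider, model


# Example: research falls back to the main provider/model when no override is given.
config = {
    "provider": "openai",
    "model": "gpt-4o",
    "planner_provider": "anthropic",
    "planner_model": "claude-3-5-sonnet-20241022",
}
assert resolve_model(config, "research") == ("openai", "gpt-4o")
assert resolve_model(config, "planner") == ("anthropic", "claude-3-5-sonnet-20241022")
```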
* refactor(agent_utils.py): remove get_effective_model_config function to simplify code and improve readability
style(agent_utils.py): format debug log statements for better readability
fix(agent_utils.py): update run_agent functions to directly use config without effective model config
feat(agent_utils.py): enhance logging for command execution in programmer.py
test(tests): remove tests related to get_effective_model_config function as it has been removed
* chore(tests): remove outdated tests for research and planner agent configurations to clean up the test suite and improve maintainability
* style(tests): apply consistent formatting and spacing in test_provider_integration.py for improved readability and maintainability
* chore: Add DeepSeek provider environment variable support in env.py
* feat: Add DeepSeek provider validation strategy in provider_strategy.py
* feat: Add support for DEEPSEEK provider in initialize_llm function
* feat: Create ChatDeepseekReasoner for custom handling of R1 models
* feat: Configure custom OpenAI client for DeepSeek API integration
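DeepSeek exposes an OpenAI-compatible API, so the custom client mentioned above can be pointed at the DeepSeek endpoint; the base URL and model name below are assumptions for illustration, not necessarily what `deepseek_chat.py` uses:

```python
import os

from openai import OpenAI

# Assumed endpoint and model name; verify against DeepSeek's documentation.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Summarize the plan in one sentence."}],
)
print(response.choices[0].message.content)
```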
* chore: Remove unused json import from deepseek_chat.py
* refactor: Simplify invocation_params and update acompletion_with_retry method
* feat: Override _generate to ensure message alternation in DeepseekReasoner
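One common way to satisfy an alternation requirement is to merge consecutive same-role messages before sending them; this sketch illustrates the idea and is not the project's actual `_generate` override:

```python
def enforce_alternation(messages: list[dict]) -> list[dict]:
    """Merge consecutive messages with the same role so user/assistant turns strictly alternate."""
    merged: list[dict] = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append(dict(msg))
    return merged
```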
* feat: Add support for ChatDeepseekReasoner in LLM initialization
* feat: Use custom ChatDeepseekReasoner for DeepSeek models in OpenRouter
* fix: Remove redundant condition for DeepSeek model initialization
* feat: Add DeepSeek support for expert model initialization in llm.py
* feat: Add DeepSeek model handling for OpenRouter in expert LLM initialization
* fix: Update model name checks for DeepSeek and OpenRouter providers
* refactor: Extract common logic for LLM initialization into reusable methods
* test: Add unit tests for DeepSeek and OpenRouter functionality
* test: Refactor tests to match updated LLM initialization and helpers
* fix: Import missing helper functions to resolve NameError in tests
* fix: Resolve NameError and improve environment variable fallback logic
* feat(readme): add DeepSeek API key requirements to documentation for better clarity on environment variables
feat(main.py): include DeepSeek as a supported provider in argument parsing for enhanced functionality
feat(deepseek_chat.py): implement ChatDeepseekReasoner class for handling DeepSeek reasoning models
feat(llm.py): add DeepSeek client creation logic to support DeepSeek models in the application
feat(models_tokens.py): define token limits for DeepSeek models to manage resource allocation
fix(provider_strategy.py): correct validation logic for DeepSeek environment variables to ensure proper configuration
chore(memory.py): refactor global memory structure for better readability and maintainability in the codebase
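The validation strategy mentioned above could look roughly like the sketch below; the class and result types are illustrative, and `DEEPSEEK_API_KEY` is assumed to be the required variable per the README note:

```python
import os
from dataclasses import dataclass, field


@dataclass
class ValidationResult:
    valid: bool
    missing: list[str] = field(default_factory=list)


class DeepSeekValidationStrategy:
    """Illustrative provider validation strategy; the real provider_strategy.py may differ."""

    required_vars = ("DEEPSEEK_API_KEY",)

    def validate(self) -> ValidationResult:
        missing = [var for var in self.required_vars if not os.environ.get(var)]
        return ValidationResult(valid=not missing, missing=missing)
```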
* test: Add unit tests for argument parsing in __main__.py
* test: Update tests to remove invalid argument and improve error handling
* test: Fix test_missing_message to handle missing argument cases correctly
* test: Fix test_missing_message to reflect argument parsing behavior
* test: Combine recursion limit tests and verify global config updates
* fix: Include recursion_limit in config for recursion limit tests
* test: Mock dependencies and validate recursion limit in global config
* test: Remove commented-out code and clean up test_main.py
* test: Remove self-evident comments and improve test assertions in test_main.py
* fix: Mock user input and handle temperature in global config tests
* fix: Fix test failures by correcting mock targets and handling temperature
* test: Update temperature validation to check argument passing to initialize_llm
* fix: Correct mock for ask_human and access kwargs in temperature test
* fix: Patch the entire ask_human function in test_chat_mode_implies_hil
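The mock-and-inspect pattern used in these tests looks roughly like the following; the module path, `main` signature, and flag names are assumptions for illustration only:

```python
from unittest.mock import patch

from ra_aid.__main__ import main  # assumed entry point accepting an argv list


def test_temperature_is_passed_to_initialize_llm():
    # Patch the LLM factory and the agent runner so no real calls are made.
    with patch("ra_aid.__main__.initialize_llm") as mock_init, \
            patch("ra_aid.__main__.run_research_agent"):
        main(["--message", "check temperature", "--temperature", "0.2"])
    _, kwargs = mock_init.call_args
    assert kwargs.get("temperature") == 0.2
```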
* docs: Add recursion limit option to README documentation
* docs: Update README.md with all available command line arguments
* feat(config): add DEFAULT_RECURSION_LIMIT constant to set default recursion depth
feat(main.py): add --recursion-limit argument to configure maximum recursion depth for agent operations
fix(main.py): validate that recursion limit is positive before processing
refactor(main.py): use args.recursion_limit in agent configuration instead of hardcoded value
refactor(agent_utils.py): update agent configuration to use recursion limit from global memory or default value
refactor(run_research_agent): clean up comments and improve readability
refactor(run_web_research_agent): clean up comments and improve readability
refactor(run_planning_agent): clean up comments and improve readability
refactor(run_task_implementation_agent): clean up comments and improve readability
delete(test_main.py): remove obsolete test for chat mode and HIL configuration
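A minimal sketch of the `--recursion-limit` wiring described in this change; the default value shown is illustrative, with the real one defined as `DEFAULT_RECURSION_LIMIT` in the config module:

```python
import argparse

DEFAULT_RECURSION_LIMIT = 100  # illustrative value only


def positive_int(value: str) -> int:
    """argparse type that rejects non-positive recursion limits."""
    ivalue = int(value)
    if ivalue <= 0:
        raise argparse.ArgumentTypeError("recursion limit must be a positive integer")
    return ivalue


parser = argparse.ArgumentParser(prog="ra-aid")
parser.add_argument(
    "--recursion-limit",
    type=positive_int,
    default=DEFAULT_RECURSION_LIMIT,
    help="Maximum recursion depth for agent operations",
)
args = parser.parse_args(["--recursion-limit", "50"])
assert args.recursion_limit == 50
```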
- Add comprehensive usage documentation and examples to README
- Add warning about shell command execution risks
- Improve CLI with --message and --research-only flags
- Add environment validation for API keys
- Replace requirements.txt with pyproject.toml
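The environment validation mentioned above can be as simple as failing fast when the chosen provider's API key is missing; the provider-to-variable mapping below is illustrative, not the project's exact logic:

```python
import os
import sys

# Illustrative mapping; the actual validation may cover more providers and variables.
REQUIRED_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
}


def validate_environment(provider: str) -> None:
    """Exit with a clear message if the selected provider's API key is not set."""
    key = REQUIRED_KEYS.get(provider)
    if key and not os.environ.get(key):
        sys.exit(f"Error: {key} must be set to use the {provider} provider.")
```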