Compare commits

...

81 Commits

Author SHA1 Message Date
AI Christianson 18dd8a7c06 get rid of pointless fn 2025-03-16 12:53:00 -04:00
AI Christianson 80e8a712ac verbose console logging by default for server 2025-03-16 10:05:53 -04:00
AI Christianson 3c0319d50f server config 2025-03-15 22:45:15 -04:00
AI Christianson 8d44ba0824 expert/web enabled based on config 2025-03-15 22:16:50 -04:00
AI Christianson 1dc9326154 get model from config 2025-03-15 22:02:05 -04:00
AI Christianson c848c04ee3 only migrate in main 2025-03-15 21:48:51 -04:00
AI Christianson fee23fcc21 add /v1/spawn-agent 2025-03-15 21:35:43 -04:00
AI Christianson 510e1016f8 make it so we have only one server entrypoint 2025-03-15 16:34:49 -04:00
AI Christianson 64a04e2535 make 1818 the default port 2025-03-15 16:24:34 -04:00
AI Christianson c18c4dbd22 session API endpoint 2025-03-15 16:12:17 -04:00
AI Christianson 77cfbdeca7 webui -> server 2025-03-15 15:14:56 -04:00
AI Christianson e0aab1021b use pydantic models 2025-03-15 14:29:42 -04:00
Ariel Frischer 5d07a7f7b8
Merge pull request #137 from ariel-frischer/use-correct-37-sonnet-state-modifier
Use correct state_modifier when using openrouter claude 3.7
2025-03-15 09:54:27 -07:00
Ariel Frischer 6c159d39d4 feat(agent_utils.py): add get_model_name_from_chat_model function to improve model handling
refactor(build_agent_kwargs): simplify state modifier logic by using model name instead of model attribute
2025-03-15 09:48:52 -07:00
Andrew I. Christianson cde8eee4fa
Merge pull request #136 from ariel-frischer/fix-undefined-model-2
Fix undefined model.model when using openrouter sonnet 3.7
2025-03-15 12:41:09 -04:00
Ariel Frischer f1274b3164 refactor(anthropic_token_limiter.py): update model parameter type in state_modifier to BaseChatModel for better compatibility
feat(anthropic_token_limiter.py): add get_model_name_from_chat_model function to extract model name from BaseChatModel instances
style(anthropic_token_limiter.py): format code for better readability and consistency in function definitions and logging messages
2025-03-15 09:37:26 -07:00
Andrew I. Christianson 9225ec3f2a
Merge pull request #135 from ariel-frischer/fix-undefined-model
fix(agent_utils.py): add check for model attribute to prevent errors …
2025-03-15 12:25:37 -04:00
Ariel Frischer bef504d756 fix(agent_utils.py): add check for model attribute to prevent errors when model does not have 'model' attribute 2025-03-15 09:23:01 -07:00
AI Christianson 75636f0477 webui -> server 2025-03-15 10:02:05 -04:00
Andrew I. Christianson a3dfb81840
Merge pull request #133 from andrewdkennedy1/detect-shell-env
Update shell.py for native windows support
2025-03-14 20:32:11 -04:00
Andrew 05eb50bd97
Update shell.py
adding windows support so shell commands run native without wsl
2025-03-14 16:37:32 -07:00
AI Christianson 46dd75a7e3 fixed session panel 2025-03-14 18:02:21 -04:00
AI Christianson e692f383c4 logos 2025-03-14 17:46:33 -04:00
AI Christianson 6e5f58e18d move theme toggle to right side 2025-03-14 17:37:35 -04:00
AI Christianson 7671312435 get rid of Sessions heading 2025-03-14 17:29:32 -04:00
AI Christianson f7aaccec76 ux 2025-03-14 17:28:21 -04:00
AI Christianson f1277aadf1 session panel spacing 2025-03-14 17:08:14 -04:00
Andrew I. Christianson aaf09c5df6
Merge pull request #132 from ariel-frischer/fix-token-limiter-2
Fix Sonnet 3.7 Token Limiter - Adjust Effective Max Input Tokens
2025-03-14 16:42:39 -04:00
AI Christianson 997c5e7ea7 make session list take up full width 2025-03-14 16:33:04 -04:00
Ariel Frischer 92faf8fc2d feat(anthropic_token_limiter): add get_provider_and_model_for_agent_type function to streamline provider and model retrieval based on agent type
fix(anthropic_token_limiter): refactor get_model_token_limit to use the new get_provider_and_model_for_agent_type function for cleaner code
test(anthropic_token_limiter): add unit tests for get_provider_and_model_for_agent_type and adjust_claude_37_token_limit functions to ensure correctness and coverage
2025-03-14 13:31:51 -07:00
AI Christianson 7d85dc2b05 click overlap event issue 2025-03-14 16:27:10 -04:00
Ariel Frischer 29c9cac4f4 feat(main.py): reorganize litellm configuration to improve clarity and maintainability
feat(agent_utils.py): add model detection utilities for Claude 3.7 models
fix(agent_utils.py): update get_model_token_limit to handle Claude 3.7 token limits correctly
test(model_detection.py): add unit tests for model detection utilities
chore(agent_utils.py): remove deprecated is_anthropic_claude function and related tests
style(agent_utils.py): format code for better readability and consistency
2025-03-14 13:10:44 -07:00
Andrew I. Christianson fe3adbd241
Merge pull request #131 from therality/master
Remove get_aider_executable and associated test
2025-03-14 15:40:39 -04:00
Will 5445a5c4a9 Removing get_aider_executable test as no longer relevant 2025-03-14 15:35:34 -04:00
Will 39ed523288 Removing get_aidr_executable as no longer a depedency 2025-03-14 15:29:11 -04:00
Andrew I. Christianson 0fe019bc9a
Merge pull request #130 from therality/master
Adding prompt-toolkit as dependency
2025-03-14 15:26:25 -04:00
Will 3f28ea80aa
Merge branch 'ai-christianson:master' into master 2025-03-14 15:25:35 -04:00
AI Christianson 0c40fa72c3 style/hmr 2025-03-14 15:09:22 -04:00
AI Christianson 07c6c2e5b5 fix hot reload on dev server 2025-03-14 10:25:22 -04:00
AI Christianson fe3984329d make sure session list hides when open and window expanded 2025-03-14 10:16:23 -04:00
AI Christianson 0a46e3c92b FAB color 2025-03-14 10:11:42 -04:00
AI Christianson 8a507f245e floating action button for sessions panel 2025-03-14 10:08:17 -04:00
AI Christianson af16879dd6 ui styling 2025-03-14 09:15:11 -04:00
AI Christianson f29658fee8 ui styling 2025-03-14 08:54:24 -04:00
Will 996608e4e3 Adding prompt-toolkit as dependency 2025-03-13 21:37:04 -04:00
AI Christianson 262c9f7d77 fix dark colors 2025-03-13 20:02:50 -04:00
AI Christianson d5d250b215 fix dark theme 2025-03-13 19:53:00 -04:00
AI Christianson 9f24c6bef9 remove junk 2025-03-13 18:47:36 -04:00
AI Christianson 1ced6ece4c agent ui components 2025-03-13 18:25:21 -04:00
AI Christianson a9c7f92687 style 2025-03-13 16:48:29 -04:00
AI Christianson 4685550605 integrate shadcn 2025-03-13 15:19:11 -04:00
AI Christianson a2129641ae Revert "shadcn integration"
This reverts commit 9d585f38b5.
2025-03-13 14:13:04 -04:00
AI Christianson 9d585f38b5 shadcn integration 2025-03-13 13:51:35 -04:00
AI Christianson fa66066c07 set up frontend/ infra 2025-03-13 12:18:54 -04:00
AI Christianson c511cefc67 add check for fallback handler 2025-03-13 08:48:51 -04:00
AI Christianson f08e9455b6 version bump 2025-03-13 07:17:26 -04:00
AI Christianson be0b566edb fix ERROR - Error getting expert guidance for planning: module 'ra_aid.agent_utils' has no attribute 'process_thinking_content' 2025-03-13 07:11:32 -04:00
AI Christianson be415ca968 fix config param error 2025-03-13 07:02:25 -04:00
AI Christianson 715d5f483d version bump 2025-03-13 07:02:25 -04:00
Andrew I. Christianson 85cabe4d37
Merge pull request #126 from nahsra/patch-1
Fix dev dependencies instructions
2025-03-12 19:37:48 -04:00
Arshan Dabirsiaghi 80cafa9a40
fix dev dependencies instructions 2025-03-12 19:15:54 -04:00
AI Christianson 26b1dbe966 reasoning assistance docs 2025-03-12 17:09:27 -04:00
Andrew I. Christianson a9656552a9
Merge pull request #124 from ariel-frischer/fix-token-limiter
Fix Sonnet 3.7 Token Limiter API Errors
2025-03-12 14:59:33 -04:00
Ariel Frischer b6f0f6a577 fix(llm.py): remove unnecessary thinking_kwargs from ChatOpenAI parameters to streamline client creation 2025-03-12 11:50:32 -07:00
Ariel Frischer 77a256317a feat: add session and trajectory models to track application state and events
- Introduce a new `Session` model to store information about each program run, including command line arguments and environment details.
- Implement a `Trajectory` model to log significant events and errors during execution, enhancing debugging and monitoring capabilities.
- Update various repository classes to support session and trajectory management, allowing for better tracking of user interactions and system behavior.
- Modify existing functions to record relevant events in the trajectory, ensuring comprehensive logging of application activities.
- Enhance error handling by logging errors to the trajectory, providing insights into failures and system performance.

feat(vsc): add initial setup for VS Code extension "ra-aid" with essential files and configurations
chore(vsc): create tasks.json for managing build and watch tasks in VS Code
chore(vsc): add .vscodeignore to exclude unnecessary files from the extension package
docs(vsc): create CHANGELOG.md to document changes and updates for the extension
docs(vsc): add README.md with instructions and information about the extension
feat(vsc): include esbuild.js for building and bundling the extension
chore(vsc): add eslint.config.mjs for TypeScript linting configuration
chore(vsc): create package.json with dependencies and scripts for the extension
feat(vsc): implement extension logic in src/extension.ts with webview support
test(vsc): add initial test suite in extension.test.ts for extension functionality
chore(vsc): create tsconfig.json for TypeScript compiler options
docs(vsc): add vsc-extension-quickstart.md for guidance on extension development
2025-03-12 11:47:21 -07:00
Ariel Frischer fdd73f149c feat(agent_utils.py): add support for sonnet_35_state_modifier for Claude 3.5 models to enhance token management
chore(anthropic_message_utils.py): remove debug print statements to clean up code and improve readability
chore(anthropic_token_limiter.py): remove debug print statements and replace with logging for better monitoring
test(test_anthropic_token_limiter.py): update tests to verify correct behavior of sonnet_35_state_modifier without patching internal logic
2025-03-12 11:16:54 -07:00
AI Christianson 12d27952d5 add --show-cost flag 2025-03-12 10:21:06 -04:00
AI Christianson 826c53e01a improve prompts 2025-03-12 08:24:41 -04:00
Ariel Frischer 7cfbcb5a2e chore(anthropic_token_limiter.py): comment out max_input_tokens and related debug prints to clean up code and reduce clutter during execution 2025-03-12 00:12:39 -07:00
Ariel Frischer d15d249929 fix(test_agent_utils.py): add name parameter to mock_react calls to ensure consistency in agent creation tests 2025-03-11 23:55:43 -07:00
Ariel Frischer 8d2d273c6b refactor(tests): move token limit tests from test_agent_utils.py to test_anthropic_token_limiter.py for better organization and clarity 2025-03-11 23:53:37 -07:00
Ariel Frischer e42f281f94 chore(anthropic_message_utils.py): remove unused fix_anthropic_message_content function to clean up codebase
chore(anthropic_token_limiter.py): remove import of fix_anthropic_message_content as it is no longer needed
test: add unit tests for has_tool_use and is_tool_pair functions to ensure correct functionality
test: enhance test coverage for anthropic_trim_messages with tool use scenarios to validate message handling
2025-03-11 23:48:08 -07:00
Ariel Frischer 376d486db8 refactor(anthropic_message_utils.py): clean up whitespace and improve code readability by removing unnecessary blank lines and aligning code formatting
fix(anthropic_message_utils.py): add warning in docstring for anthropic_trim_messages function to indicate incomplete implementation and clarify behavior
fix(anthropic_message_utils.py): ensure consistent formatting in conditional statements and improve readability of logical checks
2025-03-11 23:38:31 -07:00
Ariel Frischer a3284c9d7e feat(anthropic_token_limiter.py): add dataclass import for future use and improve code readability by restructuring import statements 2025-03-11 23:37:20 -07:00
Ariel Frischer ee73c85b02 feat(anthropic_message_utils.py): add utilities for handling Anthropic-specific message formats and trimming to improve message processing
fix(agent_utils.py): remove debug print statement for max_input_tokens to clean up code
refactor(anthropic_token_limiter.py): update state_modifier to use anthropic_trim_messages for better token management and maintain message structure
2025-03-11 23:24:57 -07:00
Ariel Frischer 09ba1ee0b9 refactor(anthropic_token_limiter.py): rename messages_to_dict to message_to_dict for consistency and clarity
feat(anthropic_token_limiter.py): add convert_message_to_litellm_format function to standardize message format for litellm
fix(anthropic_token_limiter.py): update wrapped_token_counter to handle only BaseMessage objects and improve token counting logic
chore(anthropic_token_limiter.py): add debug print statements to track token counts before and after trimming messages
2025-03-11 21:26:57 -07:00
AI Christianson c8fbd942ac session model 2025-03-11 20:11:14 -04:00
Ariel Frischer 5c9a1e81d2 feat(main.py): refactor imports for better organization and readability
feat(main.py): add DEFAULT_MODEL constant to centralize model configuration
feat(main.py): enhance logging and error handling for better debugging
feat(main.py): implement state_modifier for managing token limits in agent state
feat(anthropic_token_limiter.py): create utilities for handling token limits with Anthropic models
feat(output.py): add print_messages_compact function for debugging message output
test(anthropic_token_limiter.py): add unit tests for token limit utilities and state management
2025-03-11 14:03:18 -07:00
AI Christianson 376fe18b83 activity panel 2025-03-11 14:55:43 -04:00
AI Christianson 89ee1d96ef vsc icon 2025-03-11 14:15:37 -04:00
AI Christianson 750c0d893b vscode extension 2025-03-11 13:32:46 -04:00
142 changed files with 27394 additions and 1356 deletions

.gitignore (vendored): 5 lines changed

@@ -14,3 +14,8 @@ __pycache__/
.envrc
appmap.log
*.swp
/vsc/node_modules
/vsc/dist
node_modules/
/frontend/common/dist
/frontend/web/dist/

CHANGELOG.md

@@ -5,7 +5,14 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.17.1] 2025-03-13
### Fixed
- Fixed bug with `process_thinking_content` function by moving it from `agent_utils` to `ra_aid.text.processing` module
- Fixed config parameter handling in research request functions
- Updated development setup instructions in README to use `pip install -e ".[dev]"` instead of `pip install -r requirements-dev.txt`
## [0.17.0] 2025-03-12
### Added
- Added support for think tags in models with the new extract_think_tag function
@@ -13,9 +20,28 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Added model parameters for think tag support
- Added comprehensive testing for think tag functionality
- Added `--show-thoughts` flag to show thoughts of thinking models
- Added `--show-cost` flag to display cost information during agent operations
- Enhanced cost tracking with AnthropicCallbackHandler for monitoring token usage and costs
- Added Session and Trajectory models to track application state and agent actions
- Added comprehensive environment inventory system for collecting and providing system information to agents
- Added repository implementations for Session and Trajectory models
- Added support for reasoning assistance in research phase
- Added new config parameters for managing cost display and reasoning assistance
### Changed
- Updated langchain/langgraph deps
- Improved trajectory tracking for better debugging and analysis
- Enhanced prompts throughout the system for better performance
- Improved token management with better handling of thinking tokens in Claude models
- Updated project information inclusion in prompts
- Reorganized agent code with better extraction of core functionality
- Refactored anthropic token limiting for better control over token usage
### Fixed
- Fixed binary file detection
- Fixed environment inventory sorting
- Fixed token limiter functionality
- Various test improvements and fixes
## [0.16.1] 2025-03-07

MANIFEST.in

@@ -1,4 +1,4 @@
include LICENSE
include README.md
include CHANGELOG.md
recursive-include ra_aid/webui/static *
recursive-include ra_aid/server/static *

README.md
@@ -226,9 +226,9 @@ More information is available in our [Usage Examples](https://docs.ra-aid.ai/cat
 - `--max-test-cmd-retries`: Maximum number of test command retry attempts (default: 3)
 - `--test-cmd-timeout`: Timeout in seconds for test command execution (default: 300)
 - `--version`: Show program version number and exit
-- `--webui`: Launch the web interface (alpha feature)
-- `--webui-host`: Host to listen on for web interface (default: 0.0.0.0) (alpha feature)
-- `--webui-port`: Port to listen on for web interface (default: 8080) (alpha feature)
+- `--server`: Launch the server with web interface (alpha feature)
+- `--server-host`: Host to listen on for server (default: 0.0.0.0) (alpha feature)
+- `--server-port`: Port to listen on for server (default: 1818) (alpha feature)
 ### Example Tasks
@@ -305,30 +305,30 @@ Make sure to set your TAVILY_API_KEY environment variable to enable this feature
 Enable with `--chat` to transform ra-aid into an interactive assistant that guides you through research and implementation tasks. Have a natural conversation about what you want to build, explore options together, and dispatch work - all while maintaining context of your discussion. Perfect for when you want to think through problems collaboratively rather than just executing commands.
-### Web Interface
+### Server with Web Interface
-RA.Aid includes a modern web interface that provides:
+RA.Aid includes a modern server with web interface that provides:
 - Beautiful dark-themed chat interface
 - Real-time streaming of command output
 - Request history with quick resubmission
 - Responsive design that works on all devices
-To launch the web interface:
+To launch the server with web interface:
 ```bash
-# Start with default settings (0.0.0.0:8080)
-ra-aid --webui
+# Start with default settings (0.0.0.0:1818)
+ra-aid --server
 # Specify custom host and port
-ra-aid --webui --webui-host 127.0.0.1 --webui-port 3000
+ra-aid --server --server-host 127.0.0.1 --server-port 3000
 ```
-Command line options for web interface:
-- `--webui`: Launch the web interface
-- `--webui-host`: Host to listen on (default: 0.0.0.0)
-- `--webui-port`: Port to listen on (default: 8080)
+Command line options for server with web interface:
+- `--server`: Launch the server with web interface
+- `--server-host`: Host to listen on (default: 0.0.0.0)
+- `--server-port`: Port to listen on (default: 1818)
-After starting the server, open your web browser to the displayed URL (e.g., http://localhost:8080). The interface provides:
+After starting the server, open your web browser to the displayed URL (e.g., http://localhost:1818). The interface provides:
 - Left sidebar showing request history
 - Main chat area with real-time output
 - Input box for typing requests
@@ -541,7 +541,7 @@ source venv/bin/activate # On Windows use `venv\Scripts\activate`
 3. Install development dependencies:
 ```bash
-pip install -r requirements-dev.txt
+pip install -e ".[dev]"
 ```
 4. Run tests:

components.json (new file): 16 lines changed

@@ -0,0 +1,16 @@
{
"$schema": "https://ui.shadcn.com/schema.json",
"style": "new-york",
"rsc": false,
"tsx": true,
"tailwind": {
"config": "frontend/common/tailwind.config.js",
"css": "frontend/common/src/styles/global.css",
"baseColor": "zinc",
"cssVariables": true
},
"aliases": {
"components": "@ra-aid/common/components",
"utils": "@ra-aid/common/utils"
}
}

@@ -0,0 +1,96 @@
# Reasoning Assistance
## Overview
Reasoning Assistance is a feature in RA.Aid that helps weaker models make better decisions about tool usage and task planning. It leverages a stronger model (typically your expert model) to provide strategic guidance to the main agent model at the beginning of each agent stage.
This feature is particularly useful when working with less capable models that may struggle with complex reasoning, tool selection, or planning. By providing expert guidance upfront, these models can perform more effectively and produce better results.
## How It Works
When reasoning assistance is enabled, RA.Aid performs the following steps at the beginning of each agent stage (research, planning, implementation):
1. Makes a one-off call to the expert model with a specialized prompt that includes:
- A description of the current task and stage
- The complete list of available tools
- Instructions to provide strategic guidance on approaching the task
2. Incorporates the expert model's response into the main agent's prompt.
3. The main agent then proceeds with execution, guided by the expert's recommendations on which tools to use and how to approach the task.
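The steps above can be sketched roughly in Python. This is a minimal illustration of the flow, not RA.Aid's actual implementation: the names `get_expert_guidance`, `build_agent_prompt`, and the `expert_model.invoke` interface are all assumptions for the example.

```python
# Hypothetical sketch of the reasoning-assistance flow; names are
# illustrative and do not correspond to RA.Aid's real API.

def get_expert_guidance(expert_model, stage: str, task: str, tools: list[str]) -> str:
    """One-off call to the stronger 'expert' model for strategic guidance."""
    prompt = (
        f"You are advising a weaker agent entering its {stage} stage.\n"
        f"Task: {task}\n"
        f"Available tools: {', '.join(tools)}\n"
        "Give strategic guidance on which tools to use and how to approach the task."
    )
    return expert_model.invoke(prompt)


def build_agent_prompt(base_prompt: str, guidance: str) -> str:
    """Incorporate the expert's response into the main agent's prompt."""
    if not guidance:
        return base_prompt
    return f"{base_prompt}\n\n<expert guidance>\n{guidance}\n</expert guidance>"
```

The main agent then runs with the augmented prompt, so the extra cost is a single expert-model call per stage.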
## Configuration
### Command Line Flags
You can enable or disable reasoning assistance using these command-line flags:
```bash
# Enable reasoning assistance
ra-aid -m "Your task description" --reasoning-assistance
# Disable reasoning assistance (overrides model defaults)
ra-aid -m "Your task description" --no-reasoning-assistance
```
## Examples
### Using Reasoning Assistance with Weaker Models
```bash
# Use qwen-qwq-32b as the expert model to provide guidance
ra-aid --model qwen-32b-coder-instruct --expert-model qwen-qwq-32b --reasoning-assistance -m "Create a simple web server in Python"
```
### Disabling Reasoning Assistance for Strong Models
Reasoning assistance has different defaults depending on which model is used. If you would like to explicitly disable reasoning assistance, use the `--no-reasoning-assistance` flag.
```bash
# Use Claude 3 Opus without reasoning assistance
ra-aid -m "Create a simple web server in Python" --model claude-3-opus-20240229 --no-reasoning-assistance
```
## Benefits and Use Cases
Reasoning assistance provides several advantages:
1. **Better Tool Selection**: Helps models choose the right tools for specific tasks
2. **Improved Planning**: Provides strategic guidance on how to approach complex problems
3. **Reduced Errors**: Decreases the likelihood of tool misuse or inefficient approaches
4. **Model Flexibility**: Allows using weaker models more effectively by augmenting their reasoning capabilities
5. **Consistency**: Ensures more consistent behavior across different models
Common use cases include:
- Working with open-source models that have less robust tool use capabilities
- Tackling complex tasks that require careful planning and tool sequencing
- Ensuring consistent behavior when switching between different models
## Best Practices
For optimal results with reasoning assistance:
1. **Use Strong Expert Models**: The quality of reasoning assistance depends on the expert model's capabilities. Use the strongest model available for the expert role.
2. **Enable for Weaker Models**: Enable reasoning assistance by default for models known to struggle with tool selection or complex reasoning.
3. **Disable for Strong Models**: Models like Claude 3 Opus or GPT-4 typically don't need reasoning assistance and might perform better without it.
4. **Custom Tasks**: For highly specialized or unusual tasks, manually enabling reasoning assistance can be beneficial even for stronger models.
5. **Review Generated Guidance**: If debugging issues, examine the expert guidance provided to understand how it's influencing the agent's behavior.
## Troubleshooting
Common issues and solutions:
| Issue | Possible Solution |
|-------|-------------------|
| Reasoning assistance seems to make no difference | Verify that the `--reasoning-assistance` flag is set, and check the logs to confirm the expert model is being called |
| Expert model provides irrelevant or incorrect agent guidance | Try using a stronger expert model with `--expert-model` flag |
| Agent ignores expert guidance | Some models may not correctly follow the guidance format; try a different agent model |
| Slow performance | Reasoning assistance requires an additional model call at the start of each stage; disable it for simpler tasks if speed is critical |
| Conflicting approach with custom instructions | If you're providing specific instructions that conflict with reasoning assistance, use `--no-reasoning-assistance` |
If problems persist, check if the expert model and agent model are compatible, and consider adjusting the temperature setting to control randomness in both models.

@@ -0,0 +1,11 @@
import * as React from "react";
import { type VariantProps } from "class-variance-authority";
declare const buttonVariants: (props?: ({
variant?: "default" | "destructive" | "outline" | "secondary" | "ghost" | "link" | null | undefined;
size?: "default" | "sm" | "lg" | "icon" | null | undefined;
} & import("class-variance-authority/dist/types").ClassProp) | undefined) => string;
export interface ButtonProps extends React.ButtonHTMLAttributes<HTMLButtonElement>, VariantProps<typeof buttonVariants> {
asChild?: boolean;
}
declare const Button: React.ForwardRefExoticComponent<ButtonProps & React.RefAttributes<HTMLButtonElement>>;
export { Button, buttonVariants };

@@ -0,0 +1,44 @@
var __rest = (this && this.__rest) || function (s, e) {
var t = {};
for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)
t[p] = s[p];
if (s != null && typeof Object.getOwnPropertySymbols === "function")
for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {
if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))
t[p[i]] = s[p[i]];
}
return t;
};
import * as React from "react";
import { Slot } from "@radix-ui/react-slot";
import { cva } from "class-variance-authority";
import { cn } from "../../utils";
const buttonVariants = cva("inline-flex items-center justify-center whitespace-nowrap rounded-md text-sm font-medium transition-colors focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:pointer-events-none disabled:opacity-50", {
variants: {
variant: {
default: "bg-primary text-primary-foreground shadow hover:bg-primary/90",
destructive: "bg-destructive text-destructive-foreground shadow-sm hover:bg-destructive/90",
outline: "border border-input bg-background shadow-sm hover:bg-accent hover:text-accent-foreground",
secondary: "bg-secondary text-secondary-foreground shadow-sm hover:bg-secondary/80",
ghost: "hover:bg-accent hover:text-accent-foreground",
link: "text-primary underline-offset-4 hover:underline",
},
size: {
default: "h-9 px-4 py-2",
sm: "h-8 rounded-md px-3 text-xs",
lg: "h-10 rounded-md px-8",
icon: "h-9 w-9",
},
},
defaultVariants: {
variant: "default",
size: "default",
},
});
const Button = React.forwardRef((_a, ref) => {
var { className, variant, size, asChild = false } = _a, props = __rest(_a, ["className", "variant", "size", "asChild"]);
const Comp = asChild ? Slot : "button";
return (React.createElement(Comp, Object.assign({ className: cn(buttonVariants({ variant, size, className })), ref: ref }, props)));
});
Button.displayName = "Button";
export { Button, buttonVariants };

@@ -0,0 +1,8 @@
import * as React from "react";
declare const Card: React.ForwardRefExoticComponent<React.HTMLAttributes<HTMLDivElement> & React.RefAttributes<HTMLDivElement>>;
declare const CardHeader: React.ForwardRefExoticComponent<React.HTMLAttributes<HTMLDivElement> & React.RefAttributes<HTMLDivElement>>;
declare const CardTitle: React.ForwardRefExoticComponent<React.HTMLAttributes<HTMLHeadingElement> & React.RefAttributes<HTMLParagraphElement>>;
declare const CardDescription: React.ForwardRefExoticComponent<React.HTMLAttributes<HTMLParagraphElement> & React.RefAttributes<HTMLParagraphElement>>;
declare const CardContent: React.ForwardRefExoticComponent<React.HTMLAttributes<HTMLDivElement> & React.RefAttributes<HTMLDivElement>>;
declare const CardFooter: React.ForwardRefExoticComponent<React.HTMLAttributes<HTMLDivElement> & React.RefAttributes<HTMLDivElement>>;
export { Card, CardHeader, CardFooter, CardTitle, CardDescription, CardContent };

@@ -0,0 +1,44 @@
var __rest = (this && this.__rest) || function (s, e) {
var t = {};
for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)
t[p] = s[p];
if (s != null && typeof Object.getOwnPropertySymbols === "function")
for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {
if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))
t[p[i]] = s[p[i]];
}
return t;
};
import * as React from "react";
import { cn } from "../../utils";
const Card = React.forwardRef((_a, ref) => {
var { className } = _a, props = __rest(_a, ["className"]);
return (React.createElement("div", Object.assign({ ref: ref, className: cn("rounded-xl border bg-card text-card-foreground shadow", className) }, props)));
});
Card.displayName = "Card";
const CardHeader = React.forwardRef((_a, ref) => {
var { className } = _a, props = __rest(_a, ["className"]);
return (React.createElement("div", Object.assign({ ref: ref, className: cn("flex flex-col space-y-1.5 p-6", className) }, props)));
});
CardHeader.displayName = "CardHeader";
const CardTitle = React.forwardRef((_a, ref) => {
var { className } = _a, props = __rest(_a, ["className"]);
return (React.createElement("h3", Object.assign({ ref: ref, className: cn("font-semibold leading-none tracking-tight", className) }, props)));
});
CardTitle.displayName = "CardTitle";
const CardDescription = React.forwardRef((_a, ref) => {
var { className } = _a, props = __rest(_a, ["className"]);
return (React.createElement("p", Object.assign({ ref: ref, className: cn("text-sm text-muted-foreground", className) }, props)));
});
CardDescription.displayName = "CardDescription";
const CardContent = React.forwardRef((_a, ref) => {
var { className } = _a, props = __rest(_a, ["className"]);
return (React.createElement("div", Object.assign({ ref: ref, className: cn("p-6 pt-0", className) }, props)));
});
CardContent.displayName = "CardContent";
const CardFooter = React.forwardRef((_a, ref) => {
var { className } = _a, props = __rest(_a, ["className"]);
return (React.createElement("div", Object.assign({ ref: ref, className: cn("flex items-center p-6 pt-0", className) }, props)));
});
CardFooter.displayName = "CardFooter";
export { Card, CardHeader, CardFooter, CardTitle, CardDescription, CardContent };

@@ -0,0 +1,9 @@
export * from './button';
export * from './card';
export * from './collapsible';
export * from './floating-action-button';
export * from './input';
export * from './layout';
export * from './sheet';
export * from './switch';
export * from './scroll-area';

@@ -0,0 +1,9 @@
export * from './button';
export * from './card';
export * from './collapsible';
export * from './floating-action-button';
export * from './input';
export * from './layout';
export * from './sheet';
export * from './switch';
export * from './scroll-area';

@@ -0,0 +1,5 @@
import * as React from "react";
export interface InputProps extends React.InputHTMLAttributes<HTMLInputElement> {
}
declare const Input: React.ForwardRefExoticComponent<InputProps & React.RefAttributes<HTMLInputElement>>;
export { Input };

@@ -0,0 +1,19 @@
var __rest = (this && this.__rest) || function (s, e) {
var t = {};
for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)
t[p] = s[p];
if (s != null && typeof Object.getOwnPropertySymbols === "function")
for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {
if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))
t[p[i]] = s[p[i]];
}
return t;
};
import * as React from "react";
import { cn } from "../../utils";
const Input = React.forwardRef((_a, ref) => {
var { className, type } = _a, props = __rest(_a, ["className", "type"]);
return (React.createElement("input", Object.assign({ type: type, className: cn("flex h-9 w-full rounded-md border border-input bg-background px-3 py-1 text-sm shadow-sm transition-colors file:border-0 file:bg-transparent file:text-sm file:font-medium placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:cursor-not-allowed disabled:opacity-50", className), ref: ref }, props)));
});
Input.displayName = "Input";
export { Input };


@ -0,0 +1,4 @@
import * as React from "react";
import * as SwitchPrimitives from "@radix-ui/react-switch";
declare const Switch: React.ForwardRefExoticComponent<Omit<SwitchPrimitives.SwitchProps & React.RefAttributes<HTMLButtonElement>, "ref"> & React.RefAttributes<HTMLButtonElement>>;
export { Switch };


@ -0,0 +1,21 @@
var __rest = (this && this.__rest) || function (s, e) {
var t = {};
for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)
t[p] = s[p];
if (s != null && typeof Object.getOwnPropertySymbols === "function")
for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {
if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))
t[p[i]] = s[p[i]];
}
return t;
};
import * as React from "react";
import * as SwitchPrimitives from "@radix-ui/react-switch";
import { cn } from "../../utils";
const Switch = React.forwardRef((_a, ref) => {
var { className } = _a, props = __rest(_a, ["className"]);
return (React.createElement(SwitchPrimitives.Root, Object.assign({ className: cn("peer inline-flex h-5 w-9 shrink-0 cursor-pointer items-center rounded-full border-2 border-transparent shadow-sm transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 focus-visible:ring-offset-background disabled:cursor-not-allowed disabled:opacity-50 data-[state=checked]:bg-primary data-[state=unchecked]:bg-input", className) }, props, { ref: ref }),
React.createElement(SwitchPrimitives.Thumb, { className: cn("pointer-events-none block h-4 w-4 rounded-full bg-background shadow-lg ring-0 transition-transform data-[state=checked]:translate-x-4 data-[state=unchecked]:translate-x-0") })));
});
Switch.displayName = SwitchPrimitives.Root.displayName;
export { Switch };

frontend/common/dist/index.d.ts vendored Normal file (11 lines)

@ -0,0 +1,11 @@
import './styles/global.css';
export * from './utils/types';
export * from './utils';
export * from './components/ui';
export * from './components/TimelineStep';
export * from './components/TimelineFeed';
export * from './components/SessionDrawer';
export * from './components/SessionSidebar';
export * from './components/DefaultAgentScreen';
export declare const hello: () => void;
export { getSampleAgentSteps, getSampleAgentSessions } from './utils/sample-data';

frontend/common/dist/index.js vendored Normal file (22 lines)

@ -0,0 +1,22 @@
// Entry point for @ra-aid/common package
import './styles/global.css';
// Export types first to avoid circular references
export * from './utils/types';
// Export utility functions
export * from './utils';
// Export UI components
export * from './components/ui';
// Export timeline components
export * from './components/TimelineStep';
export * from './components/TimelineFeed';
// Export session navigation components
export * from './components/SessionDrawer';
export * from './components/SessionSidebar';
// Export main screens
export * from './components/DefaultAgentScreen';
// Export the hello function (temporary example)
export const hello = () => {
console.log("Hello from @ra-aid/common");
};
// Directly export sample data functions
export { getSampleAgentSteps, getSampleAgentSessions } from './utils/sample-data';

frontend/common/dist/styles/global.css vendored Normal file (1572 lines)

File diff suppressed because it is too large.

frontend/common/dist/utils.d.ts vendored Normal file (7 lines)

@ -0,0 +1,7 @@
import { type ClassValue } from "clsx";
/**
* Merges class names with Tailwind CSS classes
* Combines clsx for conditional logic and tailwind-merge for handling conflicting tailwind classes
*/
export declare function cn(...inputs: ClassValue[]): string;
export * from './utils';

frontend/common/dist/utils.js vendored Normal file (11 lines)

@ -0,0 +1,11 @@
import { clsx } from "clsx";
import { twMerge } from "tailwind-merge";
/**
* Merges class names with Tailwind CSS classes
* Combines clsx for conditional logic and tailwind-merge for handling conflicting tailwind classes
*/
export function cn(...inputs) {
return twMerge(clsx(inputs));
}
// Re-export everything from utils directory
export * from './utils';
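The `cn` utility above chains `clsx` (conditional class assembly) with `twMerge` (last-one-wins resolution of conflicting Tailwind classes). A simplified, self-contained sketch of that two-stage idea; the hand-rolled merge only knows about a single conflict group (padding) and is not the real tailwind-merge algorithm:

```typescript
// Simplified sketch of what cn() does: conditional joining plus
// last-one-wins conflict resolution. Illustration only; the real
// implementation delegates to clsx and tailwind-merge.
type ClassInput = string | false | null | undefined | Record<string, boolean>;

function joinClasses(...inputs: ClassInput[]): string {
  // clsx-style: strings pass through, objects include keys whose value is truthy
  const parts: string[] = [];
  for (const input of inputs) {
    if (!input) continue;
    if (typeof input === "string") parts.push(input);
    else for (const [k, v] of Object.entries(input)) if (v) parts.push(k);
  }
  return parts.join(" ");
}

function mergeConflicts(classes: string): string {
  // tailwind-merge-style: classes in the same group (here only "p-*"
  // padding, as a crude example) collapse so the last one wins.
  const seen = new Map<string, string>();
  for (const cls of classes.split(/\s+/).filter(Boolean)) {
    const group = cls.startsWith("p-") ? "padding" : cls;
    seen.set(group, cls);
  }
  return [...seen.values()].join(" ");
}

const cnSketch = (...inputs: ClassInput[]) =>
  mergeConflicts(joinClasses(...inputs));
```

For example, `cnSketch("p-2 text-sm", { "font-bold": true }, "p-4")` keeps `text-sm` and `font-bold`, and `p-4` displaces `p-2`.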

frontend/common/package-lock.json generated Normal file (3155 lines)

File diff suppressed because it is too large.


@ -0,0 +1,43 @@
{
"name": "@ra-aid/common",
"version": "1.0.0",
"private": true,
"main": "src/index.ts",
"types": "src/index.ts",
"scripts": {
"build": "tsc && postcss src/styles/global.css -o dist/styles/global.css",
"dev": "tsc --watch",
"watch:css": "postcss src/styles/global.css -o dist/styles/global.css --watch",
"watch": "concurrently \"npm run dev\" \"npm run watch:css\"",
"prepare": "npm run build"
},
"dependencies": {
"@radix-ui/react-collapsible": "^1.1.3",
"@radix-ui/react-dialog": "^1.0.5",
"@radix-ui/react-label": "^2.0.2",
"@radix-ui/react-popover": "^1.0.7",
"@radix-ui/react-scroll-area": "^1.2.3",
"@radix-ui/react-select": "^2.0.0",
"@radix-ui/react-slot": "^1.0.2",
"@radix-ui/react-switch": "^1.1.3",
"class-variance-authority": "^0.7.0",
"clsx": "^2.1.0",
"lucide-react": "^0.363.0",
"tailwind-merge": "^2.2.0",
"tailwindcss-animate": "^1.0.7"
},
"devDependencies": {
"@types/react": "^18.2.64",
"@types/react-dom": "^18.2.21",
"autoprefixer": "^10.4.17",
"concurrently": "^8.2.2",
"postcss": "^8.4.35",
"postcss-cli": "^10.1.0",
"tailwindcss": "^3.4.1",
"typescript": "^5.0.0"
},
"peerDependencies": {
"react": ">=18.0.0",
"react-dom": ">=18.0.0"
}
}


@ -0,0 +1,6 @@
module.exports = {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
}

Binary file added, not shown (23 KiB).

Binary file added, not shown (25 KiB).


@ -0,0 +1,258 @@
import React, { useState, useEffect } from 'react';
import { createPortal } from 'react-dom';
import { PanelLeft } from 'lucide-react';
import {
Button,
Layout
} from './ui';
import { SessionDrawer } from './SessionDrawer';
import { SessionList } from './SessionList';
import { TimelineFeed } from './TimelineFeed';
import { getSampleAgentSessions, getSampleAgentSteps } from '../utils/sample-data';
import logoBlack from '../assets/logo-black-transparent.png';
import logoWhite from '../assets/logo-white-transparent.gif';
/**
* DefaultAgentScreen component
*
* Main application screen for displaying agent sessions and their steps.
* Handles state management, responsive design, and UI interactions.
*/
export const DefaultAgentScreen: React.FC = () => {
// State for drawer open/close
const [isDrawerOpen, setIsDrawerOpen] = useState(false);
// State for selected session
const [selectedSessionId, setSelectedSessionId] = useState<string | null>(null);
// State for theme (dark is default)
const [isDarkTheme, setIsDarkTheme] = useState(true);
// Get sample data
const sessions = getSampleAgentSessions();
const allSteps = getSampleAgentSteps();
// Set up theme on component mount
useEffect(() => {
const isDark = setupTheme();
setIsDarkTheme(isDark);
}, []);
// Set initial selected session if none selected
useEffect(() => {
if (!selectedSessionId && sessions.length > 0) {
setSelectedSessionId(sessions[0].id);
}
}, [sessions, selectedSessionId]);
// Close drawer when window resizes to desktop width
useEffect(() => {
const handleResize = () => {
// Check if we're at desktop size (corresponds to md: breakpoint in Tailwind)
if (window.innerWidth >= 768 && isDrawerOpen) {
setIsDrawerOpen(false);
}
};
// Add event listener
window.addEventListener('resize', handleResize);
// Clean up event listener on component unmount
return () => window.removeEventListener('resize', handleResize);
}, [isDrawerOpen]);
// Filter steps for selected session
const selectedSessionSteps = selectedSessionId
? allSteps.filter(step => sessions.find(s => s.id === selectedSessionId)?.steps.some(s => s.id === step.id))
: [];
// Handle session selection
const handleSessionSelect = (sessionId: string) => {
setSelectedSessionId(sessionId);
setIsDrawerOpen(false); // Close drawer on selection (mobile)
};
// Toggle theme function
const toggleTheme = () => {
const newIsDark = !isDarkTheme;
setIsDarkTheme(newIsDark);
// Update document element class
if (newIsDark) {
document.documentElement.classList.add('dark');
} else {
document.documentElement.classList.remove('dark');
}
// Save to localStorage
localStorage.setItem('theme', newIsDark ? 'dark' : 'light');
};
// Render header content
const headerContent = (
<div className="w-full flex items-center justify-between h-full px-4">
<div className="flex-initial">
{/* Use the appropriate logo based on theme */}
<img
src={isDarkTheme ? logoWhite : logoBlack}
alt="RA.Aid Logo"
className="h-8"
/>
</div>
<div className="flex-initial ml-auto">
{/* Theme toggle button */}
<Button
variant="ghost"
size="icon"
onClick={toggleTheme}
aria-label={isDarkTheme ? "Switch to light mode" : "Switch to dark mode"}
>
{isDarkTheme ? (
// Sun icon for light mode toggle
<svg
xmlns="http://www.w3.org/2000/svg"
width="20"
height="20"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
strokeLinejoin="round"
>
<circle cx="12" cy="12" r="5" />
<line x1="12" y1="1" x2="12" y2="3" />
<line x1="12" y1="21" x2="12" y2="23" />
<line x1="4.22" y1="4.22" x2="5.64" y2="5.64" />
<line x1="18.36" y1="18.36" x2="19.78" y2="19.78" />
<line x1="1" y1="12" x2="3" y2="12" />
<line x1="21" y1="12" x2="23" y2="12" />
<line x1="4.22" y1="19.78" x2="5.64" y2="18.36" />
<line x1="18.36" y1="5.64" x2="19.78" y2="4.22" />
</svg>
) : (
// Moon icon for dark mode toggle
<svg
xmlns="http://www.w3.org/2000/svg"
width="20"
height="20"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
strokeLinejoin="round"
>
<path d="M21 12.79A9 9 0 1 1 11.21 3 7 7 0 0 0 21 12.79z" />
</svg>
)}
</Button>
</div>
</div>
);
// Sidebar content with sessions list
const sidebarContent = (
<div className="h-full flex flex-col px-4 py-3">
<SessionList
sessions={sessions}
onSelectSession={handleSessionSelect}
currentSessionId={selectedSessionId || undefined}
className="flex-1 pr-1 -mr-1"
/>
</div>
);
// Render drawer
const drawerContent = (
<SessionDrawer
sessions={sessions}
currentSessionId={selectedSessionId || undefined}
onSelectSession={handleSessionSelect}
isOpen={isDrawerOpen}
onClose={() => setIsDrawerOpen(false)}
/>
);
// Render main content
const mainContent = (
selectedSessionId ? (
<>
<h2 className="text-xl font-semibold mb-4">
Session: {sessions.find(s => s.id === selectedSessionId)?.name || 'Unknown'}
</h2>
<TimelineFeed
steps={selectedSessionSteps}
/>
</>
) : (
<div className="flex items-center justify-center h-full">
<p className="text-muted-foreground">Select a session to view details</p>
</div>
)
);
// Floating action button component that uses Portal to render at document body level
const FloatingActionButton = ({ onClick }: { onClick: () => void }) => {
// Only render the portal on the client side, not during SSR
const [mounted, setMounted] = useState(false);
useEffect(() => {
setMounted(true);
return () => setMounted(false);
}, []);
const button = (
<Button
variant="default"
size="icon"
onClick={onClick}
aria-label="Toggle sessions panel"
className="h-14 w-14 rounded-full shadow-xl bg-zinc-800 hover:bg-zinc-700 text-zinc-100 flex items-center justify-center border-2 border-zinc-700 dark:border-zinc-600"
>
<PanelLeft className="h-6 w-6" />
</Button>
);
const container = (
<div className="fixed bottom-6 right-6 z-[9999] md:hidden" style={{ pointerEvents: 'auto' }}>
{button}
</div>
);
// Return null during SSR, or the portal on the client
return mounted ? createPortal(container, document.body) : null;
};
return (
<>
<Layout
header={headerContent}
sidebar={sidebarContent}
drawer={drawerContent}
>
{mainContent}
</Layout>
<FloatingActionButton onClick={() => setIsDrawerOpen(true)} />
</>
);
};
// Helper function for theme setup
const setupTheme = () => {
// Check if theme preference is stored in localStorage
const storedTheme = localStorage.getItem('theme');
// Default to dark mode unless explicitly set to light
const isDark = storedTheme ? storedTheme === 'dark' : true;
// Apply theme to document
if (isDark) {
document.documentElement.classList.add('dark');
} else {
document.documentElement.classList.remove('dark');
}
return isDark;
};
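The `setupTheme` helper above defaults to dark when no preference is stored, and otherwise treats only a stored `'dark'` as dark. That decision, isolated as a pure function (a sketch; the real helper also toggles the `dark` class on `document.documentElement`):

```typescript
// Mirrors the ternary in setupTheme: a missing preference defaults to
// dark, and any stored value other than "dark" turns dark mode off.
function resolveIsDark(storedTheme: string | null): boolean {
  return storedTheme ? storedTheme === "dark" : true;
}
```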


@ -0,0 +1,47 @@
import React from 'react';
import {
Sheet,
SheetContent,
SheetHeader,
SheetTitle,
SheetClose
} from './ui/sheet';
import { AgentSession } from '../utils/types';
import { getSampleAgentSessions } from '../utils/sample-data';
import { SessionList } from './SessionList';
interface SessionDrawerProps {
onSelectSession?: (sessionId: string) => void;
currentSessionId?: string;
sessions?: AgentSession[];
isOpen?: boolean;
onClose?: () => void;
}
export const SessionDrawer: React.FC<SessionDrawerProps> = ({
onSelectSession,
currentSessionId,
sessions = getSampleAgentSessions(),
isOpen = false,
onClose
}) => {
return (
<Sheet open={isOpen} onOpenChange={onClose}>
<SheetContent
side="left"
className="w-full sm:max-w-md border-r border-border p-4"
>
<SheetHeader className="px-2">
<SheetTitle>Sessions</SheetTitle>
</SheetHeader>
<SessionList
sessions={sessions}
currentSessionId={currentSessionId}
onSelectSession={onSelectSession}
className="h-[calc(100vh-9rem)] mt-4"
wrapperComponent={SheetClose}
/>
</SheetContent>
</Sheet>
);
};


@ -0,0 +1,93 @@
import React from 'react';
import { ScrollArea } from './ui/scroll-area';
import { AgentSession } from '../utils/types';
import { getSampleAgentSessions } from '../utils/sample-data';
interface SessionListProps {
onSelectSession?: (sessionId: string) => void;
currentSessionId?: string;
sessions?: AgentSession[];
className?: string;
wrapperComponent?: React.ElementType;
closeAction?: React.ReactNode;
}
export const SessionList: React.FC<SessionListProps> = ({
onSelectSession,
currentSessionId,
sessions = getSampleAgentSessions(),
className = '',
wrapperComponent: WrapperComponent = 'button',
closeAction
}) => {
// Get status color
const getStatusColor = (status: string) => {
switch (status) {
case 'active':
return 'bg-blue-500';
case 'completed':
return 'bg-green-500';
case 'error':
return 'bg-red-500';
default:
return 'bg-gray-500';
}
};
// Format timestamp
const formatDate = (date: Date) => {
return date.toLocaleDateString([], {
month: 'short',
day: 'numeric',
hour: '2-digit',
minute: '2-digit'
});
};
return (
<ScrollArea className={className}>
<div className="space-y-1.5 pt-1.5 pb-2">
{sessions.map((session) => {
const buttonContent = (
<>
<div className={`w-2.5 h-2.5 rounded-full ${getStatusColor(session.status)} mt-1.5 mr-3 flex-shrink-0`} />
<div className="flex-1 min-w-0 pr-1">
<div className="font-medium text-sm break-words">{session.name}</div>
<div className="text-xs text-muted-foreground mt-1 break-words">
{session.steps.length} steps {formatDate(session.updated)}
</div>
<div className="text-xs text-muted-foreground mt-0.5 break-words">
<span className="capitalize">{session.status}</span>
</div>
</div>
</>
);
return React.createElement(
WrapperComponent,
{
key: session.id,
onClick: () => onSelectSession?.(session.id),
className: `w-full flex items-start px-3 py-2.5 text-left rounded-md transition-colors hover:bg-accent/50 ${
currentSessionId === session.id ? 'bg-accent' : ''
}`
},
closeAction ? (
<>
{buttonContent}
<div className="ml-2 flex-shrink-0 self-center">
{React.cloneElement(closeAction as React.ReactElement, {
onClick: (e: React.MouseEvent) => {
e.stopPropagation();
onSelectSession?.(session.id);
}
})}
</div>
</>
) : buttonContent
);
})}
</div>
</ScrollArea>
);
};


@ -0,0 +1,32 @@
import React from 'react';
import { AgentSession } from '../utils/types';
import { getSampleAgentSessions } from '../utils/sample-data';
import { SessionList } from './SessionList';
interface SessionSidebarProps {
onSelectSession?: (sessionId: string) => void;
currentSessionId?: string;
sessions?: AgentSession[];
className?: string;
}
export const SessionSidebar: React.FC<SessionSidebarProps> = ({
onSelectSession,
currentSessionId,
sessions = getSampleAgentSessions(),
className = ''
}) => {
return (
<div className={`flex flex-col h-full ${className}`}>
<div className="p-4 border-b border-border">
<h3 className="font-medium text-lg">Sessions</h3>
</div>
<SessionList
sessions={sessions}
currentSessionId={currentSessionId}
onSelectSession={onSelectSession}
className="flex-1"
/>
</div>
);
};


@ -0,0 +1,45 @@
import React, { useMemo } from 'react';
import { TimelineStep } from './TimelineStep';
import { AgentStep } from '../utils/types';
interface TimelineFeedProps {
steps: AgentStep[];
maxHeight?: string;
}
export const TimelineFeed: React.FC<TimelineFeedProps> = ({
steps,
maxHeight
}) => {
// Sort steps with newest first (desc order)
const sortedSteps = useMemo(() => {
return [...steps].sort((a, b) => {
return b.timestamp.getTime() - a.timestamp.getTime();
});
}, [steps]);
return (
<div className="w-full rounded-md bg-background">
<div
className="px-3 py-3 space-y-4 overflow-auto"
style={{ maxHeight: maxHeight || undefined }}
>
{sortedSteps.length > 0 ? (
sortedSteps.map((step) => (
<TimelineStep key={step.id} step={step} />
))
) : (
<div className="text-center text-muted-foreground py-12 border border-dashed border-border rounded-md">
<svg xmlns="http://www.w3.org/2000/svg" className="h-8 w-8 mx-auto mb-2 text-muted-foreground/50" fill="none" viewBox="0 0 24 24" stroke="currentColor">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M12 8v4l3 3m6-3a9 9 0 11-18 0 9 9 0 0118 0z" />
</svg>
<p>No steps to display</p>
</div>
)}
</div>
</div>
);
};
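TimelineFeed always renders newest-first: the `useMemo` above copies the array and sorts by timestamp descending. The same ordering as a standalone function, with a stripped-down step shape for illustration:

```typescript
// Newest-first ordering as used by TimelineFeed: larger (more recent)
// timestamps sort earlier.
interface StepLike {
  id: string;
  timestamp: Date;
}

function sortNewestFirst(steps: StepLike[]): StepLike[] {
  // Copy before sorting so the caller's array is left untouched,
  // matching the [...steps] spread in the component.
  return [...steps].sort(
    (a, b) => b.timestamp.getTime() - a.timestamp.getTime()
  );
}
```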


@ -0,0 +1,99 @@
import React from 'react';
import { Collapsible, CollapsibleContent, CollapsibleTrigger } from './ui/collapsible';
import { AgentStep } from '../utils/types';
interface TimelineStepProps {
step: AgentStep;
}
export const TimelineStep: React.FC<TimelineStepProps> = ({ step }) => {
// Get status color
const getStatusColor = (status: string) => {
switch (status) {
case 'completed':
return 'bg-green-500';
case 'in-progress':
return 'bg-blue-500';
case 'error':
return 'bg-red-500';
case 'pending':
return 'bg-yellow-500';
default:
return 'bg-gray-500';
}
};
// Get icon based on step type
const getTypeIcon = (type: string) => {
switch (type) {
case 'tool-execution':
return '🛠️';
case 'thinking':
return '💭';
case 'planning':
return '📝';
case 'implementation':
return '💻';
case 'user-input':
return '👤';
default:
return '▶️';
}
};
// Format timestamp
const formatTime = (timestamp: Date) => {
return timestamp.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });
};
return (
<Collapsible className="w-full mb-5 border border-border rounded-md overflow-hidden shadow-sm hover:shadow-md transition-all duration-200">
<CollapsibleTrigger className="w-full flex items-center justify-between p-4 text-left hover:bg-accent/30 cursor-pointer group">
<div className="flex items-center space-x-3 min-w-0 flex-1 pr-3">
<div className={`flex-shrink-0 w-3 h-3 rounded-full ${getStatusColor(step.status)} ring-1 ring-ring/20`} />
<div className="flex-shrink-0 text-lg group-hover:scale-110 transition-transform">{getTypeIcon(step.type)}</div>
<div className="min-w-0 flex-1">
<div className="font-medium text-foreground break-words">{step.title}</div>
<div className="text-sm text-muted-foreground line-clamp-2">
{step.type === 'tool-execution' ? 'Run tool' : step.content.substring(0, 60)}
{step.content.length > 60 ? '...' : ''}
</div>
</div>
</div>
<div className="text-xs text-muted-foreground flex flex-col items-end flex-shrink-0 min-w-[70px] text-right">
<span className="font-medium">{formatTime(step.timestamp)}</span>
{step.duration && (
<span className="mt-1 px-2 py-0.5 bg-secondary/50 rounded-full">
{(step.duration / 1000).toFixed(1)}s
</span>
)}
</div>
</CollapsibleTrigger>
<CollapsibleContent>
<div className="p-5 bg-card/50 border-t border-border">
<div className="text-sm break-words text-foreground leading-relaxed">
{step.content}
</div>
{step.duration && (
<div className="mt-4 pt-3 border-t border-border/50">
<div className="text-xs text-muted-foreground flex items-center">
<svg
xmlns="http://www.w3.org/2000/svg"
className="h-3.5 w-3.5 mr-1"
fill="none"
viewBox="0 0 24 24"
stroke="currentColor"
strokeWidth={2}
>
<circle cx="12" cy="12" r="10" />
<polyline points="12 6 12 12 16 14" />
</svg>
Duration: {(step.duration / 1000).toFixed(1)} seconds
</div>
</div>
)}
</div>
</CollapsibleContent>
</Collapsible>
);
};
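Both the collapsed badge and the expanded footer above format `step.duration` (milliseconds) as seconds with one decimal place. Extracted as a small helper (a sketch; the name is illustrative, not part of the component):

```typescript
// Milliseconds-to-seconds label as rendered in TimelineStep:
// one decimal place, "s" suffix.
function formatDurationMs(durationMs: number): string {
  return `${(durationMs / 1000).toFixed(1)}s`;
}
```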


@ -0,0 +1,57 @@
import * as React from "react";
import { Slot } from "@radix-ui/react-slot";
import { cva, type VariantProps } from "class-variance-authority";
import { cn } from "../../utils";
const buttonVariants = cva(
"inline-flex items-center justify-center whitespace-nowrap rounded-md text-sm font-medium transition-colors focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:pointer-events-none disabled:opacity-50",
{
variants: {
variant: {
default:
"bg-primary text-primary-foreground shadow hover:bg-primary/90",
destructive:
"bg-destructive text-destructive-foreground shadow-sm hover:bg-destructive/90",
outline:
"border border-input bg-background shadow-sm hover:bg-accent hover:text-accent-foreground",
secondary:
"bg-secondary text-secondary-foreground shadow-sm hover:bg-secondary/80",
ghost: "hover:bg-accent hover:text-accent-foreground",
link: "text-primary underline-offset-4 hover:underline",
},
size: {
default: "h-9 px-4 py-2",
sm: "h-8 rounded-md px-3 text-xs",
lg: "h-10 rounded-md px-8",
icon: "h-9 w-9",
},
},
defaultVariants: {
variant: "default",
size: "default",
},
}
);
export interface ButtonProps
extends React.ButtonHTMLAttributes<HTMLButtonElement>,
VariantProps<typeof buttonVariants> {
asChild?: boolean;
}
const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
({ className, variant, size, asChild = false, ...props }, ref) => {
const Comp = asChild ? Slot : "button";
return (
<Comp
className={cn(buttonVariants({ variant, size, className }))}
ref={ref}
{...props}
/>
);
}
);
Button.displayName = "Button";
export { Button, buttonVariants };
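`buttonVariants` above is a `cva()` definition: base classes plus one class string per variant axis, with `defaultVariants` filling in omitted props. A hand-rolled resolver showing that lookup over a slice of the variants above (illustration only; the real class-variance-authority API also supports compound variants and merges a passed `className`):

```typescript
// Simplified cva-style resolver: base classes + one class string per
// variant axis, with defaults applied when a prop is omitted.
// Not the class-variance-authority implementation.
type VariantConfig = Record<string, Record<string, string>>;

function makeVariants(
  base: string,
  variants: VariantConfig,
  defaults: Record<string, string>
) {
  return (props: Partial<Record<string, string>> = {}): string => {
    const parts = [base];
    for (const axis of Object.keys(variants)) {
      const key = props[axis] ?? defaults[axis];
      const cls = variants[axis][key];
      if (cls) parts.push(cls);
    }
    return parts.join(" ");
  };
}

// Mirrors a slice of the buttonVariants definition above
const buttonSketch = makeVariants(
  "inline-flex items-center",
  {
    variant: { default: "bg-primary", ghost: "hover:bg-accent" },
    size: { default: "h-9 px-4", icon: "h-9 w-9" },
  },
  { variant: "default", size: "default" }
);
```

Calling `buttonSketch()` with no props resolves both axes through `defaultVariants`, just as `<Button />` does.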


@ -0,0 +1,76 @@
import * as React from "react";
import { cn } from "../../utils";
const Card = React.forwardRef<
HTMLDivElement,
React.HTMLAttributes<HTMLDivElement>
>(({ className, ...props }, ref) => (
<div
ref={ref}
className={cn(
"rounded-xl border bg-card text-card-foreground shadow",
className
)}
{...props}
/>
));
Card.displayName = "Card";
const CardHeader = React.forwardRef<
HTMLDivElement,
React.HTMLAttributes<HTMLDivElement>
>(({ className, ...props }, ref) => (
<div
ref={ref}
className={cn("flex flex-col space-y-1.5 p-6", className)}
{...props}
/>
));
CardHeader.displayName = "CardHeader";
const CardTitle = React.forwardRef<
HTMLParagraphElement,
React.HTMLAttributes<HTMLHeadingElement>
>(({ className, ...props }, ref) => (
<h3
ref={ref}
className={cn("font-semibold leading-none tracking-tight", className)}
{...props}
/>
));
CardTitle.displayName = "CardTitle";
const CardDescription = React.forwardRef<
HTMLParagraphElement,
React.HTMLAttributes<HTMLParagraphElement>
>(({ className, ...props }, ref) => (
<p
ref={ref}
className={cn("text-sm text-muted-foreground", className)}
{...props}
/>
));
CardDescription.displayName = "CardDescription";
const CardContent = React.forwardRef<
HTMLDivElement,
React.HTMLAttributes<HTMLDivElement>
>(({ className, ...props }, ref) => (
<div ref={ref} className={cn("p-6 pt-0", className)} {...props} />
));
CardContent.displayName = "CardContent";
const CardFooter = React.forwardRef<
HTMLDivElement,
React.HTMLAttributes<HTMLDivElement>
>(({ className, ...props }, ref) => (
<div
ref={ref}
className={cn("flex items-center p-6 pt-0", className)}
{...props}
/>
));
CardFooter.displayName = "CardFooter";
export { Card, CardHeader, CardFooter, CardTitle, CardDescription, CardContent };


@ -0,0 +1,27 @@
import * as React from "react"
import * as CollapsiblePrimitive from "@radix-ui/react-collapsible"
import { cn } from "../../utils"
const Collapsible = CollapsiblePrimitive.Root
const CollapsibleTrigger = CollapsiblePrimitive.Trigger
const CollapsibleContent = React.forwardRef<
React.ElementRef<typeof CollapsiblePrimitive.Content>,
React.ComponentPropsWithoutRef<typeof CollapsiblePrimitive.Content>
>(({ className, children, ...props }, ref) => (
<CollapsiblePrimitive.Content
ref={ref}
className={cn(
"overflow-hidden data-[state=closed]:animate-accordion-up data-[state=open]:animate-accordion-down",
className
)}
{...props}
>
{children}
</CollapsiblePrimitive.Content>
))
CollapsibleContent.displayName = "CollapsibleContent"
export { Collapsible, CollapsibleTrigger, CollapsibleContent }


@ -0,0 +1,36 @@
import React, { ReactNode } from 'react';
import { Button } from './button';
export interface FloatingActionButtonProps {
icon: ReactNode;
onClick: () => void;
ariaLabel?: string;
className?: string;
variant?: 'default' | 'destructive' | 'outline' | 'secondary' | 'ghost' | 'link';
}
/**
* FloatingActionButton component
*
* A button typically used for primary actions on mobile layouts
* Designed to be used with the Layout component's floatingAction prop
*/
export const FloatingActionButton: React.FC<FloatingActionButtonProps> = ({
icon,
onClick,
ariaLabel = 'Action button',
className = '',
variant = 'default'
}) => {
return (
<Button
variant={variant}
size="icon"
onClick={onClick}
aria-label={ariaLabel}
className={`h-14 w-14 rounded-full shadow-xl bg-blue-600 hover:bg-blue-700 text-white flex items-center justify-center border-2 border-white dark:border-gray-800 ${className}`}
>
{icon}
</Button>
);
};


@ -0,0 +1,9 @@
export * from './button';
export * from './card';
export * from './collapsible';
export * from './floating-action-button';
export * from './input';
export * from './layout';
export * from './sheet';
export * from './switch';
export * from './scroll-area';


@ -0,0 +1,25 @@
import * as React from "react";
import { cn } from "../../utils";
export interface InputProps
extends React.InputHTMLAttributes<HTMLInputElement> {}
const Input = React.forwardRef<HTMLInputElement, InputProps>(
({ className, type, ...props }, ref) => {
return (
<input
type={type}
className={cn(
"flex h-9 w-full rounded-md border border-input bg-background px-3 py-1 text-sm shadow-sm transition-colors file:border-0 file:bg-transparent file:text-sm file:font-medium placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:cursor-not-allowed disabled:opacity-50",
className
)}
ref={ref}
{...props}
/>
);
}
);
Input.displayName = "Input";
export { Input };


@ -0,0 +1,56 @@
import React from 'react';
/**
* Layout component using Tailwind Grid utilities
* This component creates a responsive layout with:
* - Sticky header at the top (z-index 30)
* - Sidebar on desktop (hidden on mobile)
* - Main content area with proper positioning
* - Optional floating action button for mobile navigation
*/
export interface LayoutProps {
header: React.ReactNode;
sidebar?: React.ReactNode;
drawer?: React.ReactNode;
children: React.ReactNode;
floatingAction?: React.ReactNode;
}
export const Layout: React.FC<LayoutProps> = ({
header,
sidebar,
drawer,
children,
floatingAction
}) => {
return (
<div className="grid min-h-screen grid-cols-1 grid-rows-[64px_1fr] md:grid-cols-[280px_1fr] lg:grid-cols-[320px_1fr] xl:grid-cols-[350px_1fr] bg-background text-foreground relative">
{/* Header - always visible, spans full width */}
<header className="sticky top-0 z-30 h-16 flex items-center bg-background border-b border-border col-span-full">
{header}
</header>
{/* Sidebar - hidden on mobile, visible on tablet/desktop */}
{sidebar && (
<aside className="hidden md:block fixed top-16 bottom-0 w-[280px] lg:w-[320px] xl:w-[350px] overflow-y-auto z-20 bg-background border-r border-border">
{sidebar}
</aside>
)}
{/* Main content area */}
<main className="overflow-y-auto p-4 row-start-2 col-start-1 md:col-start-2 md:h-[calc(100vh-64px)]">
{children}
</main>
{/* Mobile drawer - rendered outside grid */}
{drawer}
{/* Floating action button for mobile */}
{floatingAction && (
<div className="fixed bottom-6 right-6 z-50 md:hidden">
{floatingAction}
</div>
)}
</div>
);
};


@ -0,0 +1,47 @@
import * as React from "react"
import * as ScrollAreaPrimitive from "@radix-ui/react-scroll-area"
import { cn } from "../../utils"
const ScrollArea = React.forwardRef<
React.ElementRef<typeof ScrollAreaPrimitive.Root>,
React.ComponentPropsWithoutRef<typeof ScrollAreaPrimitive.Root>
>(({ className, children, ...props }, ref) => (
<ScrollAreaPrimitive.Root
ref={ref}
className={cn("relative overflow-hidden", className)}
{...props}
>
<ScrollAreaPrimitive.Viewport className="h-full w-full rounded-[inherit]">
{children}
</ScrollAreaPrimitive.Viewport>
<ScrollBar />
<ScrollBar orientation="horizontal" />
<ScrollAreaPrimitive.Corner />
</ScrollAreaPrimitive.Root>
))
ScrollArea.displayName = ScrollAreaPrimitive.Root.displayName
const ScrollBar = React.forwardRef<
React.ElementRef<typeof ScrollAreaPrimitive.ScrollAreaScrollbar>,
React.ComponentPropsWithoutRef<typeof ScrollAreaPrimitive.ScrollAreaScrollbar>
>(({ className, orientation = "vertical", ...props }, ref) => (
<ScrollAreaPrimitive.ScrollAreaScrollbar
ref={ref}
orientation={orientation}
className={cn(
"flex touch-none select-none transition-colors",
orientation === "vertical" &&
"h-full w-2.5 border-l border-l-transparent p-[1px]",
orientation === "horizontal" &&
"h-2.5 border-t border-t-transparent p-[1px]",
className
)}
{...props}
>
<ScrollAreaPrimitive.ScrollAreaThumb className="relative flex-1 rounded-full bg-border" />
</ScrollAreaPrimitive.ScrollAreaScrollbar>
))
ScrollBar.displayName = ScrollAreaPrimitive.ScrollAreaScrollbar.displayName
export { ScrollArea, ScrollBar }


@ -0,0 +1,134 @@
import * as React from "react"
import * as SheetPrimitive from "@radix-ui/react-dialog"
import { cva, type VariantProps } from "class-variance-authority"
import { X } from "lucide-react"
import { cn } from "../../utils"
const Sheet = SheetPrimitive.Root
const SheetTrigger = SheetPrimitive.Trigger
const SheetClose = SheetPrimitive.Close
const SheetPortal = SheetPrimitive.Portal
const SheetOverlay = React.forwardRef<
React.ElementRef<typeof SheetPrimitive.Overlay>,
React.ComponentPropsWithoutRef<typeof SheetPrimitive.Overlay>
>(({ className, ...props }, ref) => (
<SheetPrimitive.Overlay
className={cn(
"fixed inset-0 z-[70] bg-background/80 backdrop-blur-sm data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0",
className
)}
{...props}
ref={ref}
/>
))
SheetOverlay.displayName = SheetPrimitive.Overlay.displayName
const sheetVariants = cva(
"fixed z-[70] gap-4 bg-background p-6 shadow-lg transition ease-in-out data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:duration-300 data-[state=open]:duration-500",
{
variants: {
side: {
top: "inset-x-0 top-0 border-b data-[state=closed]:slide-out-to-top data-[state=open]:slide-in-from-top",
right: "inset-y-0 right-0 h-full w-3/4 border-l data-[state=closed]:slide-out-to-right data-[state=open]:slide-in-from-right sm:max-w-sm",
bottom: "inset-x-0 bottom-0 border-t data-[state=closed]:slide-out-to-bottom data-[state=open]:slide-in-from-bottom",
left: "inset-y-0 left-0 h-full w-full border-r data-[state=closed]:slide-out-to-left data-[state=open]:slide-in-from-left sm:max-w-sm",
},
},
defaultVariants: {
side: "right",
},
}
)
interface SheetContentProps
extends React.ComponentPropsWithoutRef<typeof SheetPrimitive.Content>,
VariantProps<typeof sheetVariants> {}
const SheetContent = React.forwardRef<
React.ElementRef<typeof SheetPrimitive.Content>,
SheetContentProps
>(({ side = "right", className, children, ...props }, ref) => (
<SheetPortal>
<SheetOverlay />
<SheetPrimitive.Content
ref={ref}
className={cn(sheetVariants({ side }), className)}
{...props}
>
{children}
<SheetPrimitive.Close className="absolute right-4 top-4 rounded-sm opacity-70 ring-offset-background transition-opacity hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:pointer-events-none data-[state=open]:bg-secondary">
<X className="h-4 w-4" />
<span className="sr-only">Close</span>
</SheetPrimitive.Close>
</SheetPrimitive.Content>
</SheetPortal>
))
SheetContent.displayName = SheetPrimitive.Content.displayName
const SheetHeader = ({
className,
...props
}: React.HTMLAttributes<HTMLDivElement>) => (
<div
className={cn(
"flex flex-col space-y-2 text-center sm:text-left",
className
)}
{...props}
/>
)
SheetHeader.displayName = "SheetHeader"
const SheetFooter = ({
className,
...props
}: React.HTMLAttributes<HTMLDivElement>) => (
<div
className={cn(
"flex flex-col-reverse sm:flex-row sm:justify-end sm:space-x-2",
className
)}
{...props}
/>
)
SheetFooter.displayName = "SheetFooter"
const SheetTitle = React.forwardRef<
React.ElementRef<typeof SheetPrimitive.Title>,
React.ComponentPropsWithoutRef<typeof SheetPrimitive.Title>
>(({ className, ...props }, ref) => (
<SheetPrimitive.Title
ref={ref}
className={cn("text-lg font-semibold text-foreground", className)}
{...props}
/>
))
SheetTitle.displayName = SheetPrimitive.Title.displayName
const SheetDescription = React.forwardRef<
React.ElementRef<typeof SheetPrimitive.Description>,
React.ComponentPropsWithoutRef<typeof SheetPrimitive.Description>
>(({ className, ...props }, ref) => (
<SheetPrimitive.Description
ref={ref}
className={cn("text-sm text-muted-foreground", className)}
{...props}
/>
))
SheetDescription.displayName = SheetPrimitive.Description.displayName
export {
Sheet,
SheetTrigger,
SheetClose,
SheetContent,
SheetHeader,
SheetFooter,
SheetTitle,
SheetDescription,
}

@@ -0,0 +1,27 @@
import * as React from "react";
import * as SwitchPrimitives from "@radix-ui/react-switch";
import { cn } from "../../utils";
const Switch = React.forwardRef<
React.ElementRef<typeof SwitchPrimitives.Root>,
React.ComponentPropsWithoutRef<typeof SwitchPrimitives.Root>
>(({ className, ...props }, ref) => (
<SwitchPrimitives.Root
className={cn(
"peer inline-flex h-5 w-9 shrink-0 cursor-pointer items-center rounded-full border-2 border-transparent shadow-sm transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 focus-visible:ring-offset-background disabled:cursor-not-allowed disabled:opacity-50 data-[state=checked]:bg-primary data-[state=unchecked]:bg-input",
className
)}
{...props}
ref={ref}
>
<SwitchPrimitives.Thumb
className={cn(
"pointer-events-none block h-4 w-4 rounded-full bg-background shadow-lg ring-0 transition-transform data-[state=checked]:translate-x-4 data-[state=unchecked]:translate-x-0"
)}
/>
</SwitchPrimitives.Root>
));
Switch.displayName = SwitchPrimitives.Root.displayName;
export { Switch };

@@ -0,0 +1,33 @@
// Entry point for @ra-aid/common package
import './styles/global.css';
// Export types first to avoid circular references
export * from './utils/types';
// Export utility functions
export * from './utils';
// Export UI components
export * from './components/ui';
// Export timeline components
export * from './components/TimelineStep';
export * from './components/TimelineFeed';
// Export session navigation components
export * from './components/SessionDrawer';
export * from './components/SessionSidebar';
// Export main screens
export * from './components/DefaultAgentScreen';
// Export the hello function (temporary example)
export const hello = (): void => {
console.log("Hello from @ra-aid/common");
};
// Directly export sample data functions
export {
getSampleAgentSteps,
getSampleAgentSessions
} from './utils/sample-data';

@@ -0,0 +1,80 @@
@tailwind base;
@tailwind components;
@tailwind utilities;
@layer base {
:root {
--background: 0 0% 100%;
--foreground: 222.2 47.4% 11.2%;
--muted: 210 40% 96.1%;
--muted-foreground: 215.4 16.3% 46.9%;
--popover: 0 0% 100%;
--popover-foreground: 222.2 47.4% 11.2%;
--card: 0 0% 100%;
--card-foreground: 222.2 47.4% 11.2%;
--border: 214.3 31.8% 91.4%;
--input: 214.3 31.8% 91.4%;
--primary: 222.2 47.4% 11.2%;
--primary-foreground: 210 40% 98%;
--secondary: 210 40% 96.1%;
--secondary-foreground: 222.2 47.4% 11.2%;
--accent: 210 40% 96.1%;
--accent-foreground: 222.2 47.4% 11.2%;
--destructive: 0 100% 50%;
--destructive-foreground: 210 40% 98%;
--ring: 215 20.2% 65.1%;
--radius: 0.5rem;
}
.dark {
--background: 240 10% 3.9%; /* zinc-950 */
--foreground: 240 5% 96%; /* zinc-50 */
--card: 240 10% 3.9%; /* zinc-950 */
--card-foreground: 240 5% 96%; /* zinc-50 */
--popover: 240 10% 3.9%; /* zinc-950 */
--popover-foreground: 240 5% 96%; /* zinc-50 */
--primary: 240 5% 96%; /* zinc-50 */
--primary-foreground: 240 6% 10%; /* zinc-900 */
--secondary: 240 4% 16%; /* zinc-800 */
--secondary-foreground: 240 5% 96%; /* zinc-50 */
--muted: 240 4% 16%; /* zinc-800 */
--muted-foreground: 240 5% 65%; /* zinc-400 */
--accent: 240 4% 16%; /* zinc-800 */
--accent-foreground: 240 5% 96%; /* zinc-50 */
--destructive: 0 63% 31%; /* red-900 */
--destructive-foreground: 240 5% 96%; /* zinc-50 */
--border: 240 4% 16%; /* zinc-800 */
--input: 240 4% 16%; /* zinc-800 */
--ring: 240 5% 84%; /* zinc-300 */
--radius: 0.5rem;
}
}
@layer base {
* {
@apply border-border;
}
body {
@apply bg-background text-foreground;
font-feature-settings: "rlig" 1, "calt" 1;
}
}

frontend/common/src/types/image.d.ts (vendored, new file)

@@ -0,0 +1,24 @@
declare module '*.png' {
const content: string;
export default content;
}
declare module '*.gif' {
const content: string;
export default content;
}
declare module '*.jpg' {
const content: string;
export default content;
}
declare module '*.jpeg' {
const content: string;
export default content;
}
declare module '*.svg' {
const content: string;
export default content;
}

@@ -0,0 +1,13 @@
import { clsx, type ClassValue } from "clsx";
import { twMerge } from "tailwind-merge";
/**
* Merges class names with Tailwind CSS classes
* Combines clsx for conditional logic and tailwind-merge for handling conflicting tailwind classes
*/
export function cn(...inputs: ClassValue[]) {
return twMerge(clsx(inputs));
}
// Re-export everything from utils directory
export * from './utils';

@@ -0,0 +1,13 @@
import { clsx, type ClassValue } from "clsx";
import { twMerge } from "tailwind-merge";
/**
* Merges class names with Tailwind CSS classes
* Combines clsx for conditional logic and tailwind-merge for handling conflicting tailwind classes
*/
export function cn(...inputs: ClassValue[]) {
return twMerge(clsx(inputs));
}
// Note: Sample data functions and types are now exported directly from the root index.ts
// to avoid circular references

@@ -0,0 +1,164 @@
/**
* Sample data utility for agent UI components demonstration
*/
import { AgentStep, AgentSession } from './types';
/**
* Returns an array of sample agent steps
*/
export function getSampleAgentSteps(): AgentStep[] {
return [
{
id: "step-1",
timestamp: new Date(Date.now() - 30 * 60000), // 30 minutes ago
status: 'completed',
type: 'planning',
title: 'Initial Planning',
content: 'I need to analyze the codebase structure to understand the existing components and their relationships.',
duration: 5200
},
{
id: "step-2",
timestamp: new Date(Date.now() - 25 * 60000), // 25 minutes ago
status: 'completed',
type: 'tool-execution',
title: 'List Directory Structure',
content: 'Executing: list_directory_tree(path="src/", max_depth=2)\n\n📁 /project/src/\n├── 📁 components/\n│ ├── 📁 ui/\n│ └── App.tsx\n├── 📁 utils/\n└── index.tsx',
duration: 1800
},
{
id: "step-3",
timestamp: new Date(Date.now() - 20 * 60000), // 20 minutes ago
status: 'completed',
type: 'thinking',
title: 'Component Analysis',
content: 'Based on the directory structure, I see that the UI components are organized in a dedicated folder. I should examine the existing component patterns before implementing new ones.',
duration: 3500
},
{
id: "step-4",
timestamp: new Date(Date.now() - 15 * 60000), // 15 minutes ago
status: 'completed',
type: 'tool-execution',
title: 'Read Component Code',
content: 'Executing: read_file_tool(filepath="src/components/ui/Button.tsx")\n\n```tsx\nimport { cn } from "../../utils";\n\nexport interface ButtonProps {\n // Component props...\n}\n\nexport function Button({ children, ...props }: ButtonProps) {\n // Component implementation...\n}\n```',
duration: 2100
},
{
id: "step-5",
timestamp: new Date(Date.now() - 10 * 60000), // 10 minutes ago
status: 'completed',
type: 'implementation',
title: 'Creating NavBar Component',
content: 'I\'m creating a NavBar component following the design system patterns:\n\n```tsx\nimport { cn } from "../../utils";\n\nexport interface NavBarProps {\n // New component props...\n}\n\nexport function NavBar({ ...props }: NavBarProps) {\n // New component implementation...\n}\n```',
duration: 6800
},
{
id: "step-6",
timestamp: new Date(Date.now() - 5 * 60000), // 5 minutes ago
status: 'in-progress',
type: 'implementation',
title: 'Styling Timeline Component',
content: 'Currently working on styling the Timeline component to match the design system:\n\n```tsx\n// Work in progress...\nexport function Timeline({ steps, ...props }: TimelineProps) {\n // Current implementation...\n}\n```',
},
{
id: "step-7",
timestamp: new Date(Date.now() - 2 * 60000), // 2 minutes ago
status: 'error',
type: 'tool-execution',
title: 'Running Tests',
content: 'Error executing: run_shell_command(command="npm test")\n\nTest failed: TypeError: Cannot read property \'steps\' of undefined',
duration: 3200
},
{
id: "step-8",
timestamp: new Date(), // Now
status: 'pending',
type: 'planning',
title: 'Next Steps',
content: 'Need to plan the implementation of the SessionDrawer component...',
}
];
}
/**
* Returns an array of sample agent sessions
*/
export function getSampleAgentSessions(): AgentSession[] {
const steps = getSampleAgentSteps();
return [
{
id: "session-1",
name: "UI Component Implementation",
created: new Date(Date.now() - 35 * 60000), // 35 minutes ago
updated: new Date(), // Now
status: 'active',
steps: steps
},
{
id: "session-2",
name: "API Integration",
created: new Date(Date.now() - 2 * 3600000), // 2 hours ago
updated: new Date(Date.now() - 30 * 60000), // 30 minutes ago
status: 'completed',
steps: [
{
id: "other-step-1",
timestamp: new Date(Date.now() - 2 * 3600000), // 2 hours ago
status: 'completed',
type: 'planning',
title: 'API Integration Planning',
content: 'Planning the integration with the backend API...',
duration: 4500
},
{
id: "other-step-2",
timestamp: new Date(Date.now() - 1.5 * 3600000), // 1.5 hours ago
status: 'completed',
type: 'implementation',
title: 'Implementing API Client',
content: 'Creating API client with fetch utilities...',
duration: 7200
},
{
id: "other-step-3",
timestamp: new Date(Date.now() - 1 * 3600000), // 1 hour ago
status: 'completed',
type: 'tool-execution',
title: 'Testing API Endpoints',
content: 'Running tests against API endpoints...',
duration: 5000
}
]
},
{
id: "session-3",
name: "Bug Fixes",
created: new Date(Date.now() - 5 * 3600000), // 5 hours ago
updated: new Date(Date.now() - 4 * 3600000), // 4 hours ago
status: 'error',
steps: [
{
id: "bug-step-1",
timestamp: new Date(Date.now() - 5 * 3600000), // 5 hours ago
status: 'completed',
type: 'planning',
title: 'Bug Analysis',
content: 'Analyzing reported bugs from issue tracker...',
duration: 3600
},
{
id: "bug-step-2",
timestamp: new Date(Date.now() - 4.5 * 3600000), // 4.5 hours ago
status: 'error',
type: 'implementation',
title: 'Fixing Authentication Bug',
content: 'Error: Unable to resolve dependency conflict with auth package',
duration: 2500
}
]
}
];
}

@@ -0,0 +1,28 @@
/**
* Common types for agent UI components
*/
/**
* Represents a single step in the agent process
*/
export interface AgentStep {
id: string;
timestamp: Date;
status: 'completed' | 'in-progress' | 'error' | 'pending';
type: 'tool-execution' | 'thinking' | 'planning' | 'implementation' | 'user-input';
title: string;
content: string;
duration?: number; // in milliseconds
}
/**
* Represents a session with multiple steps
*/
export interface AgentSession {
id: string;
name: string;
created: Date;
updated: Date;
status: 'active' | 'completed' | 'error';
steps: AgentStep[];
}

@@ -0,0 +1,18 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
presets: [require('./tailwind.preset')],
content: [
'./src/**/*.{js,jsx,ts,tsx}',
],
safelist: [
'dark',
{
pattern: /^dark:/,
variants: ['hover', 'focus', 'active']
}
],
theme: {
extend: {},
},
plugins: [],
}

@@ -0,0 +1,70 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
darkMode: ["class"],
theme: {
container: {
center: true,
padding: "2rem",
screens: {
"2xl": "1400px",
},
},
extend: {
colors: {
border: "hsl(var(--border))",
input: "hsl(var(--input))",
ring: "hsl(var(--ring))",
background: "hsl(var(--background))",
foreground: "hsl(var(--foreground))",
primary: {
DEFAULT: "hsl(var(--primary))",
foreground: "hsl(var(--primary-foreground))",
},
secondary: {
DEFAULT: "hsl(var(--secondary))",
foreground: "hsl(var(--secondary-foreground))",
},
destructive: {
DEFAULT: "hsl(var(--destructive))",
foreground: "hsl(var(--destructive-foreground))",
},
muted: {
DEFAULT: "hsl(var(--muted))",
foreground: "hsl(var(--muted-foreground))",
},
accent: {
DEFAULT: "hsl(var(--accent))",
foreground: "hsl(var(--accent-foreground))",
},
popover: {
DEFAULT: "hsl(var(--popover))",
foreground: "hsl(var(--popover-foreground))",
},
card: {
DEFAULT: "hsl(var(--card))",
foreground: "hsl(var(--card-foreground))",
},
},
borderRadius: {
lg: "var(--radius)",
md: "calc(var(--radius) - 2px)",
sm: "calc(var(--radius) - 4px)",
},
keyframes: {
"accordion-down": {
from: { height: "0" },
to: { height: "var(--radix-accordion-content-height)" },
},
"accordion-up": {
from: { height: "var(--radix-accordion-content-height)" },
to: { height: "0" },
},
},
animation: {
"accordion-down": "accordion-down 0.2s ease-out",
"accordion-up": "accordion-up 0.2s ease-out",
},
},
},
plugins: [require("tailwindcss-animate")],
}

@@ -0,0 +1,17 @@
{
"compilerOptions": {
"target": "ES6",
"module": "ESNext",
"moduleResolution": "node",
"declaration": true,
"jsx": "react",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"outDir": "dist",
"rootDir": "src",
"lib": ["DOM", "DOM.Iterable", "ESNext", "ES2016"]
},
"include": ["src"]
}

frontend/package-lock.json (generated, new file)

Diff suppressed because it is too large.

frontend/package.json (new file)

@@ -0,0 +1,13 @@
{
"name": "frontend-monorepo",
"private": true,
"workspaces": [
"common",
"web",
"vsc"
],
"scripts": {
"install-all": "npm install",
"dev:web": "npm --workspace @ra-aid/web run dev"
}
}

@@ -0,0 +1,5 @@
import { defineConfig } from '@vscode/test-cli';
export default defineConfig({
files: 'out/test/**/*.test.js',
});

frontend/vsc/.vscode/extensions.json (vendored, new file)

@@ -0,0 +1,5 @@
{
// See http://go.microsoft.com/fwlink/?LinkId=827846
// for the documentation about the extensions.json format
"recommendations": ["dbaeumer.vscode-eslint", "connor4312.esbuild-problem-matchers", "ms-vscode.extension-test-runner"]
}

frontend/vsc/.vscode/launch.json (vendored, new file)

@@ -0,0 +1,21 @@
// A launch configuration that compiles the extension and then opens it inside a new window
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
{
"version": "0.2.0",
"configurations": [
{
"name": "Run Extension",
"type": "extensionHost",
"request": "launch",
"args": [
"--extensionDevelopmentPath=${workspaceFolder}"
],
"outFiles": [
"${workspaceFolder}/dist/**/*.js"
],
"preLaunchTask": "${defaultBuildTask}"
}
]
}

frontend/vsc/.vscode/settings.json (vendored, new file)

@@ -0,0 +1,13 @@
// Place your settings in this file to overwrite default and user settings.
{
"files.exclude": {
"out": false, // set this to true to hide the "out" folder with the compiled JS files
"dist": false // set this to true to hide the "dist" folder with the compiled JS files
},
"search.exclude": {
"out": true, // set this to false to include "out" folder in search results
"dist": true // set this to false to include "dist" folder in search results
},
// Turn off tsc task auto detection since we have the necessary tasks as npm scripts
"typescript.tsc.autoDetect": "off"
}

frontend/vsc/.vscode/tasks.json (vendored, new file)

@@ -0,0 +1,64 @@
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
{
"version": "2.0.0",
"tasks": [
{
"label": "watch",
"dependsOn": [
"npm: watch:tsc",
"npm: watch:esbuild"
],
"presentation": {
"reveal": "never"
},
"group": {
"kind": "build",
"isDefault": true
}
},
{
"type": "npm",
"script": "watch:esbuild",
"group": "build",
"problemMatcher": "$esbuild-watch",
"isBackground": true,
"label": "npm: watch:esbuild",
"presentation": {
"group": "watch",
"reveal": "never"
}
},
{
"type": "npm",
"script": "watch:tsc",
"group": "build",
"problemMatcher": "$tsc-watch",
"isBackground": true,
"label": "npm: watch:tsc",
"presentation": {
"group": "watch",
"reveal": "never"
}
},
{
"type": "npm",
"script": "watch-tests",
"problemMatcher": "$tsc-watch",
"isBackground": true,
"presentation": {
"reveal": "never",
"group": "watchers"
},
"group": "build"
},
{
"label": "tasks: watch-tests",
"dependsOn": [
"npm: watch",
"npm: watch-tests"
],
"problemMatcher": []
}
]
}

@@ -0,0 +1,14 @@
.vscode/**
.vscode-test/**
out/**
node_modules/**
src/**
.gitignore
.yarnrc
esbuild.js
vsc-extension-quickstart.md
**/tsconfig.json
**/eslint.config.mjs
**/*.map
**/*.ts
**/.vscode-test.*

@@ -0,0 +1,9 @@
# Change Log
All notable changes to the "ra-aid" extension will be documented in this file.
Check [Keep a Changelog](http://keepachangelog.com/) for recommendations on how to structure this file.
## [Unreleased]
- Initial release

frontend/vsc/README.md (new file)

@@ -0,0 +1,71 @@
# ra-aid README
This is the README for your extension "ra-aid". After writing up a brief description, we recommend including the following sections.
## Features
Describe specific features of your extension including screenshots of your extension in action. Image paths are relative to this README file.
For example if there is an image subfolder under your extension project workspace:
\!\[feature X\]\(images/feature-x.png\)
> Tip: Many popular extensions utilize animations. This is an excellent way to show off your extension! We recommend short, focused animations that are easy to follow.
## Requirements
If you have any requirements or dependencies, add a section describing those and how to install and configure them.
## Extension Settings
Include if your extension adds any VS Code settings through the `contributes.configuration` extension point.
For example:
This extension contributes the following settings:
* `myExtension.enable`: Enable/disable this extension.
* `myExtension.thing`: Set to `blah` to do something.
## Known Issues
Calling out known issues can help limit users opening duplicate issues against your extension.
## Release Notes
Users appreciate release notes as you update your extension.
### 1.0.0
Initial release of ...
### 1.0.1
Fixed issue #.
### 1.1.0
Added features X, Y, and Z.
---
## Following extension guidelines
Ensure that you've read through the extensions guidelines and follow the best practices for creating your extension.
* [Extension Guidelines](https://code.visualstudio.com/api/references/extension-guidelines)
## Working with Markdown
You can author your README using Visual Studio Code. Here are some useful editor keyboard shortcuts:
* Split the editor (`Cmd+\` on macOS or `Ctrl+\` on Windows and Linux).
* Toggle preview (`Shift+Cmd+V` on macOS or `Shift+Ctrl+V` on Windows and Linux).
* Press `Ctrl+Space` (Windows, Linux, macOS) to see a list of Markdown snippets.
## For more information
* [Visual Studio Code's Markdown Support](http://code.visualstudio.com/docs/languages/markdown)
* [Markdown Syntax Reference](https://help.github.com/articles/markdown-basics/)
**Enjoy!**

(binary file not shown; 6.5 KiB)

frontend/vsc/assets/RA.png (binary, new file; 6.6 KiB, not shown)

frontend/vsc/dist/extension.js (vendored, new file)

@@ -0,0 +1,140 @@
"use strict";
var __create = Object.create;
var __defProp = Object.defineProperty;
var __getOwnPropDesc = Object.getOwnPropertyDescriptor;
var __getOwnPropNames = Object.getOwnPropertyNames;
var __getProtoOf = Object.getPrototypeOf;
var __hasOwnProp = Object.prototype.hasOwnProperty;
var __export = (target, all) => {
for (var name in all)
__defProp(target, name, { get: all[name], enumerable: true });
};
var __copyProps = (to, from, except, desc) => {
if (from && typeof from === "object" || typeof from === "function") {
for (let key of __getOwnPropNames(from))
if (!__hasOwnProp.call(to, key) && key !== except)
__defProp(to, key, { get: () => from[key], enumerable: !(desc = __getOwnPropDesc(from, key)) || desc.enumerable });
}
return to;
};
var __toESM = (mod, isNodeMode, target) => (target = mod != null ? __create(__getProtoOf(mod)) : {}, __copyProps(
// If the importer is in node compatibility mode or this is not an ESM
// file that has been converted to a CommonJS file using a Babel-
// compatible transform (i.e. "__esModule" has not been set), then set
// "default" to the CommonJS "module.exports" for node compatibility.
isNodeMode || !mod || !mod.__esModule ? __defProp(target, "default", { value: mod, enumerable: true }) : target,
mod
));
var __toCommonJS = (mod) => __copyProps(__defProp({}, "__esModule", { value: true }), mod);
// src/extension.ts
var extension_exports = {};
__export(extension_exports, {
activate: () => activate,
deactivate: () => deactivate
});
module.exports = __toCommonJS(extension_exports);
var vscode = __toESM(require("vscode"));
var RAWebviewViewProvider = class {
constructor(_extensionUri) {
this._extensionUri = _extensionUri;
}
/**
* Called when a view is first created to initialize the webview
*/
resolveWebviewView(webviewView, context, _token) {
webviewView.webview.options = {
// Enable JavaScript in the webview
enableScripts: true,
// Restrict the webview to only load resources from the extension's directory
localResourceRoots: [this._extensionUri]
};
webviewView.webview.html = this._getHtmlForWebview(webviewView.webview);
}
/**
* Creates HTML content for the webview with proper security policies
*/
_getHtmlForWebview(webview) {
const logoUri = webview.asWebviewUri(vscode.Uri.joinPath(this._extensionUri, "assets", "RA.png"));
const nonce = getNonce();
return `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="Content-Security-Policy" content="default-src 'none'; img-src ${webview.cspSource} https:; style-src ${webview.cspSource} 'unsafe-inline'; script-src 'nonce-${nonce}';">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>RA.Aid</title>
<style>
body {
padding: 0;
color: var(--vscode-foreground);
font-size: var(--vscode-font-size);
font-weight: var(--vscode-font-weight);
font-family: var(--vscode-font-family);
background-color: var(--vscode-editor-background);
}
.container {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
padding: 20px;
text-align: center;
}
.logo {
width: 100px;
height: 100px;
margin-bottom: 20px;
}
h1 {
color: var(--vscode-editor-foreground);
font-size: 1.3em;
margin-bottom: 15px;
}
p {
color: var(--vscode-foreground);
margin-bottom: 10px;
}
</style>
</head>
<body>
<div class="container">
<img src="${logoUri}" alt="RA.Aid Logo" class="logo">
<h1>RA.Aid</h1>
<p>Your research and development assistant.</p>
<p>More features coming soon!</p>
</div>
</body>
</html>`;
}
};
function getNonce() {
let text = "";
const possible = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
for (let i = 0; i < 32; i++) {
text += possible.charAt(Math.floor(Math.random() * possible.length));
}
return text;
}
function activate(context) {
console.log('Congratulations, your extension "ra-aid" is now active!');
const provider = new RAWebviewViewProvider(context.extensionUri);
const viewRegistration = vscode.window.registerWebviewViewProvider(
"ra-aid.view",
// Must match the view id in package.json
provider
);
context.subscriptions.push(viewRegistration);
const disposable = vscode.commands.registerCommand("ra-aid.helloWorld", () => {
vscode.window.showInformationMessage("Hello World from RA.Aid!");
});
context.subscriptions.push(disposable);
}
function deactivate() {
}
// Annotate the CommonJS export names for ESM import in node:
0 && (module.exports = {
activate,
deactivate
});
//# sourceMappingURL=extension.js.map

frontend/vsc/dist/extension.js.map (vendored, new file)

@@ -0,0 +1,6 @@
{
"version": 3,
"sources": ["../src/extension.ts"],
"mappings": ";;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AACA,aAAwB;AAKxB,IAAM,wBAAN,MAAkE;AAAA,EAChE,YAA6B,eAA2B;AAA3B;AAAA,EAA4B;AAAA;AAAA;AAAA;AAAA,EAKlD,mBACL,aACA,SACA,QACA;AAEA,gBAAY,QAAQ,UAAU;AAAA;AAAA,MAE5B,eAAe;AAAA;AAAA,MAEf,oBAAoB,CAAC,KAAK,aAAa;AAAA,IACzC;AAGA,gBAAY,QAAQ,OAAO,KAAK,mBAAmB,YAAY,OAAO;AAAA,EACxE;AAAA;AAAA;AAAA;AAAA,EAKQ,mBAAmB,SAAiC;AAE1D,UAAM,UAAU,QAAQ,aAAoB,WAAI,SAAS,KAAK,eAAe,UAAU,QAAQ,CAAC;AAMhG,UAAM,QAAQ,SAAS;AAEvB,WAAO;AAAA;AAAA;AAAA;AAAA,0FAI+E,QAAQ,SAAS,sBAAsB,QAAQ,SAAS,uCAAuC,KAAK;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,sBAsCxK,OAAO;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAO3B;AACF;AAKA,SAAS,WAAW;AAClB,MAAI,OAAO;AACX,QAAM,WAAW;AACjB,WAAS,IAAI,GAAG,IAAI,IAAI,KAAK;AAC3B,YAAQ,SAAS,OAAO,KAAK,MAAM,KAAK,OAAO,IAAI,SAAS,MAAM,CAAC;AAAA,EACrE;AACA,SAAO;AACT;AAGO,SAAS,SAAS,SAAkC;AAEzD,UAAQ,IAAI,yDAAyD;AAGrE,QAAM,WAAW,IAAI,sBAAsB,QAAQ,YAAY;AAC/D,QAAM,mBAA0B,cAAO;AAAA,IACrC;AAAA;AAAA,IACA;AAAA,EACF;AACA,UAAQ,cAAc,KAAK,gBAAgB;AAK3C,QAAM,aAAoB,gBAAS,gBAAgB,qBAAqB,MAAM;AAG5E,IAAO,cAAO,uBAAuB,0BAA0B;AAAA,EACjE,CAAC;AAED,UAAQ,cAAc,KAAK,UAAU;AACvC;AAGO,SAAS,aAAa;AAAC;",
"names": []
}

frontend/vsc/esbuild.js (new file)

@@ -0,0 +1,56 @@
const esbuild = require("esbuild");
const production = process.argv.includes('--production');
const watch = process.argv.includes('--watch');
/**
* @type {import('esbuild').Plugin}
*/
const esbuildProblemMatcherPlugin = {
name: 'esbuild-problem-matcher',
setup(build) {
build.onStart(() => {
console.log('[watch] build started');
});
build.onEnd((result) => {
result.errors.forEach(({ text, location }) => {
console.error(`✘ [ERROR] ${text}`);
console.error(` ${location.file}:${location.line}:${location.column}:`);
});
console.log('[watch] build finished');
});
},
};
async function main() {
const ctx = await esbuild.context({
entryPoints: [
'src/extension.ts'
],
bundle: true,
format: 'cjs',
minify: production,
sourcemap: !production,
sourcesContent: false,
platform: 'node',
outfile: 'dist/extension.js',
external: ['vscode'],
logLevel: 'silent',
plugins: [
/* add to the end of plugins array */
esbuildProblemMatcherPlugin,
],
});
if (watch) {
await ctx.watch();
} else {
await ctx.rebuild();
await ctx.dispose();
}
}
main().catch(e => {
console.error(e);
process.exit(1);
});

@@ -0,0 +1,28 @@
import typescriptEslint from "@typescript-eslint/eslint-plugin";
import tsParser from "@typescript-eslint/parser";
export default [{
files: ["**/*.ts"],
}, {
plugins: {
"@typescript-eslint": typescriptEslint,
},
languageOptions: {
parser: tsParser,
ecmaVersion: 2022,
sourceType: "module",
},
rules: {
"@typescript-eslint/naming-convention": ["warn", {
selector: "import",
format: ["camelCase", "PascalCase"],
}],
curly: "warn",
eqeqeq: "warn",
"no-throw-literal": "warn",
semi: "warn",
},
}];

frontend/vsc/package-lock.json (generated, new file)

Diff suppressed because it is too large.

frontend/vsc/package.json (new file)

@@ -0,0 +1,67 @@
{
"name": "ra-aid",
"displayName": "RA.Aid",
"description": "Develop software autonomously.",
"version": "0.0.1",
"engines": {
"vscode": "^1.98.0"
},
"categories": [
"Other"
],
"activationEvents": [],
"main": "./dist/extension.js",
"contributes": {
"viewsContainers": {
"activitybar": [
{
"id": "ra-aid-view",
"title": "RA.Aid",
"icon": "assets/RA-white-transp.png"
}
]
},
"views": {
"ra-aid-view": [
{
"type": "webview",
"id": "ra-aid.view",
"name": "RA.Aid"
}
]
},
"commands": [
{
"command": "ra-aid.helloWorld",
"title": "Hello World"
}
]
},
"scripts": {
"vscode:prepublish": "npm run package",
"compile": "npm run check-types && npm run lint && node esbuild.js",
"watch": "npm-run-all -p watch:*",
"watch:esbuild": "node esbuild.js --watch",
"watch:tsc": "tsc --noEmit --watch --project tsconfig.json",
"package": "npm run check-types && npm run lint && node esbuild.js --production",
"compile-tests": "tsc -p . --outDir out",
"watch-tests": "tsc -p . -w --outDir out",
"pretest": "npm run compile-tests && npm run compile && npm run lint",
"check-types": "tsc --noEmit",
"lint": "eslint src",
"test": "vscode-test"
},
"devDependencies": {
"@types/vscode": "^1.98.0",
"@types/mocha": "^10.0.10",
"@types/node": "20.x",
"@typescript-eslint/eslint-plugin": "^8.25.0",
"@typescript-eslint/parser": "^8.25.0",
"eslint": "^9.21.0",
"esbuild": "^0.25.0",
"npm-run-all": "^4.1.5",
"typescript": "^5.7.3",
"@vscode/test-cli": "^0.0.10",
"@vscode/test-electron": "^2.4.1"
}
}

@@ -0,0 +1,133 @@
// The module 'vscode' contains the VS Code extensibility API
import * as vscode from 'vscode';
/**
* WebviewViewProvider implementation for the RA.Aid panel
*/
class RAWebviewViewProvider implements vscode.WebviewViewProvider {
constructor(private readonly _extensionUri: vscode.Uri) {}
/**
* Called when a view is first created to initialize the webview
*/
public resolveWebviewView(
webviewView: vscode.WebviewView,
context: vscode.WebviewViewResolveContext,
_token: vscode.CancellationToken
) {
// Set options for the webview
webviewView.webview.options = {
// Enable JavaScript in the webview
enableScripts: true,
// Restrict the webview to only load resources from the extension's directory
localResourceRoots: [this._extensionUri]
};
// Set the HTML content of the webview
webviewView.webview.html = this._getHtmlForWebview(webviewView.webview);
}
/**
* Creates HTML content for the webview with proper security policies
*/
private _getHtmlForWebview(webview: vscode.Webview): string {
// Create a URI to the extension's assets directory
const logoUri = webview.asWebviewUri(vscode.Uri.joinPath(this._extensionUri, 'assets', 'RA.png'));
// Create a URI to the script file
// const scriptUri = webview.asWebviewUri(vscode.Uri.joinPath(this._extensionUri, 'dist', 'webview.js'));
// Use a nonce to whitelist scripts
const nonce = getNonce();
return `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="Content-Security-Policy" content="default-src 'none'; img-src ${webview.cspSource} https:; style-src ${webview.cspSource} 'unsafe-inline'; script-src 'nonce-${nonce}';">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>RA.Aid</title>
<style>
body {
padding: 0;
color: var(--vscode-foreground);
font-size: var(--vscode-font-size);
font-weight: var(--vscode-font-weight);
font-family: var(--vscode-font-family);
background-color: var(--vscode-editor-background);
}
.container {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
padding: 20px;
text-align: center;
}
.logo {
width: 100px;
height: 100px;
margin-bottom: 20px;
}
h1 {
color: var(--vscode-editor-foreground);
font-size: 1.3em;
margin-bottom: 15px;
}
p {
color: var(--vscode-foreground);
margin-bottom: 10px;
}
</style>
</head>
<body>
<div class="container">
<img src="${logoUri}" alt="RA.Aid Logo" class="logo">
<h1>RA.Aid</h1>
<p>Your research and development assistant.</p>
<p>More features coming soon!</p>
</div>
</body>
</html>`;
}
}
/**
* Generates a random nonce for CSP
*/
function getNonce() {
let text = '';
const possible = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
for (let i = 0; i < 32; i++) {
text += possible.charAt(Math.floor(Math.random() * possible.length));
}
return text;
}
// This method is called when your extension is activated
export function activate(context: vscode.ExtensionContext) {
// Use the console to output diagnostic information (console.log) and errors (console.error)
console.log('Congratulations, your extension "ra-aid" is now active!');
// Register the WebviewViewProvider
const provider = new RAWebviewViewProvider(context.extensionUri);
const viewRegistration = vscode.window.registerWebviewViewProvider(
'ra-aid.view', // Must match the view id in package.json
provider
);
context.subscriptions.push(viewRegistration);
// The command has been defined in the package.json file
// Now provide the implementation of the command with registerCommand
// The commandId parameter must match the command field in package.json
const disposable = vscode.commands.registerCommand('ra-aid.helloWorld', () => {
// The code you place here will be executed every time your command is executed
// Display a message box to the user
vscode.window.showInformationMessage('Hello World from RA.Aid!');
});
context.subscriptions.push(disposable);
}
// This method is called when your extension is deactivated
export function deactivate() {}
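The `getNonce` helper above whitelists the webview's inline `<script>` through the CSP `script-src 'nonce-…'` directive. A rough Python equivalent of that generator (for illustration only; not part of the extension) might look like this, using `secrets` since nonce generation is security-relevant:

```python
import secrets
import string

# Alphanumeric alphabet matching the getNonce() helper above.
ALPHABET = string.ascii_letters + string.digits

def get_nonce(length: int = 32) -> str:
    # secrets.choice is cryptographically strong; the TypeScript helper uses
    # Math.random, which is acceptable for a per-render webview CSP nonce but
    # would not be for tokens crossing a real trust boundary.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

nonce = get_nonce()
print(len(nonce))  # 32
```

Each call produces a fresh value, so a script injected later cannot reuse a previously observed nonce.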

View File

@ -0,0 +1,15 @@
import * as assert from 'assert';
// You can import and use all API from the 'vscode' module
// as well as import your extension to test it
import * as vscode from 'vscode';
// import * as myExtension from '../../extension';
suite('Extension Test Suite', () => {
vscode.window.showInformationMessage('Start all tests.');
test('Sample test', () => {
assert.strictEqual(-1, [1, 2, 3].indexOf(5));
assert.strictEqual(-1, [1, 2, 3].indexOf(0));
});
});

View File

@ -0,0 +1,16 @@
{
"compilerOptions": {
"module": "Node16",
"target": "ES2022",
"lib": [
"ES2022"
],
"sourceMap": true,
"rootDir": "src",
"strict": true, /* enable all strict type-checking options */
/* Additional Checks */
// "noImplicitReturns": true, /* Report error when not all code paths in function return a value. */
// "noFallthroughCasesInSwitch": true, /* Report errors for fallthrough cases in switch statement. */
// "noUnusedParameters": true, /* Report errors on unused parameters. */
}
}

View File

@ -0,0 +1,48 @@
# Welcome to your VS Code Extension
## What's in the folder
* This folder contains all of the files necessary for your extension.
* `package.json` - this is the manifest file in which you declare your extension and command.
* The sample plugin registers a command and defines its title and command name. With this information VS Code can show the command in the command palette. It doesn't yet need to load the plugin.
* `src/extension.ts` - this is the main file where you will provide the implementation of your command.
* The file exports one function, `activate`, which is called the very first time your extension is activated (in this case by executing the command). Inside the `activate` function we call `registerCommand`.
* We pass the function containing the implementation of the command as the second parameter to `registerCommand`.
## Setup
* Install the recommended extensions (`amodio.tsl-problem-matcher`, `ms-vscode.extension-test-runner`, and `dbaeumer.vscode-eslint`).
## Get up and running straight away
* Press `F5` to open a new window with your extension loaded.
* Run your command from the command palette by pressing `Ctrl+Shift+P` (`Cmd+Shift+P` on Mac) and typing `Hello World`.
* Set breakpoints in your code inside `src/extension.ts` to debug your extension.
* Find output from your extension in the debug console.
## Make changes
* You can relaunch the extension from the debug toolbar after changing code in `src/extension.ts`.
* You can also reload (`Ctrl+R` or `Cmd+R` on Mac) the VS Code window with your extension to load your changes.
## Explore the API
* You can explore the full VS Code API by opening the file `node_modules/@types/vscode/index.d.ts`.
## Run tests
* Install the [Extension Test Runner](https://marketplace.visualstudio.com/items?itemName=ms-vscode.extension-test-runner)
* Run the "watch" task via the **Tasks: Run Task** command. Make sure this is running, or tests might not be discovered.
* Open the Testing view from the activity bar and click the "Run Test" button, or use the hotkey `Ctrl/Cmd + ; A`.
* See the output of the test result in the Test Results view.
* Make changes to `src/test/extension.test.ts` or create new test files inside the `test` folder.
* The provided test runner will only consider files matching the name pattern `**.test.ts`.
* You can create folders inside the `test` folder to structure your tests any way you want.
## Go further
* Reduce the extension size and improve the startup time by [bundling your extension](https://code.visualstudio.com/api/working-with-extensions/bundling-extension).
* [Publish your extension](https://code.visualstudio.com/api/working-with-extensions/publishing-extension) on the VS Code extension marketplace.
* Automate builds by setting up [Continuous Integration](https://code.visualstudio.com/api/working-with-extensions/continuous-integration).

16
frontend/web/index.html Normal file
View File

@ -0,0 +1,16 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta name="description" content="Demo page showcasing shadcn/ui components from the common package" />
<title>RA-Aid UI Components Demo</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap" rel="stylesheet">
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/index.tsx"></script>
</body>
</html>

26
frontend/web/package.json Normal file
View File

@ -0,0 +1,26 @@
{
"name": "@ra-aid/web",
"version": "1.0.0",
"private": true,
"main": "dist/index.js",
"scripts": {
"dev": "vite",
"build": "vite build"
},
"dependencies": {
"react": "^18.0.0",
"react-dom": "^18.0.0",
"@ra-aid/common": "1.0.0"
},
"devDependencies": {
"vite": "^4.0.0",
"@vitejs/plugin-react": "^3.0.0",
"typescript": "^5.0.0",
"tailwindcss": "^3.4.1",
"postcss": "^8.4.35",
"autoprefixer": "^10.4.17"
},
"optionalDependencies": {
"@tailwindcss/forms": "^0.5.7"
}
}

View File

@ -0,0 +1,6 @@
module.exports = {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
}

View File

@ -0,0 +1,19 @@
import React from 'react';
import ReactDOM from 'react-dom/client';
import { DefaultAgentScreen } from '@ra-aid/common';
/**
* Main application entry point
* Simply renders the DefaultAgentScreen component from the common package
*/
const App = () => {
return <DefaultAgentScreen />;
};
// Mount the app to the root element
const root = ReactDOM.createRoot(document.getElementById('root')!);
root.render(
<React.StrictMode>
<App />
</React.StrictMode>
);

View File

@ -0,0 +1,14 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
presets: [require('../common/tailwind.preset')],
content: [
'./src/**/*.{js,jsx,ts,tsx}',
'../common/src/**/*.{js,jsx,ts,tsx}'
],
theme: {
extend: {},
},
plugins: [
require('@tailwindcss/forms')
],
}

View File

@ -0,0 +1,15 @@
{
"compilerOptions": {
"target": "ES6",
"module": "ESNext",
"moduleResolution": "node",
"jsx": "react-jsx",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"outDir": "dist",
"rootDir": "src"
},
"include": ["src"]
}

View File

@ -0,0 +1,40 @@
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import path from 'path';
import fs from 'fs';
// Get all component files from common package
const commonSrcDir = path.resolve(__dirname, '../common/src');
export default defineConfig({
plugins: [react()],
resolve: {
alias: {
// Direct alias to the source directory
'@ra-aid/common': path.resolve(__dirname, '../common/src')
},
preserveSymlinks: true
},
optimizeDeps: {
// Exclude the common package from optimization so it can trigger hot reload
exclude: ['@ra-aid/common']
},
server: {
hmr: true,
watch: {
usePolling: true,
interval: 100,
// Make sure to explicitly NOT ignore the common package
ignored: [
'**/node_modules/**',
'**/dist/**',
'!**/common/src/**'
]
}
},
build: {
commonjsOptions: {
transformMixedEsModules: true
}
}
});

View File

@ -50,6 +50,7 @@ dependencies = [
"platformdirs>=3.17.9",
"requests",
"packaging",
"prompt-toolkit"
]
[project.optional-dependencies]

View File

@ -5,24 +5,8 @@ import sys
import uuid
from datetime import datetime
# Add litellm import
import litellm
# Configure litellm to suppress debug logs
os.environ["LITELLM_LOG"] = "ERROR"
litellm.suppress_debug_info = True
litellm.set_verbose = False
# Explicitly configure LiteLLM's loggers
for logger_name in ["litellm", "LiteLLM"]:
litellm_logger = logging.getLogger(logger_name)
litellm_logger.setLevel(logging.WARNING)
litellm_logger.propagate = True
# Use litellm's internal method to disable debugging
if hasattr(litellm, "_logging") and hasattr(litellm._logging, "_disable_debugging"):
litellm._logging._disable_debugging()
from langgraph.checkpoint.memory import MemorySaver
from rich.console import Console
from rich.panel import Panel
@ -39,32 +23,41 @@ from ra_aid.agents.research_agent import run_research_agent
from ra_aid.agents import run_planning_agent
from ra_aid.config import (
DEFAULT_MAX_TEST_CMD_RETRIES,
DEFAULT_MODEL,
DEFAULT_RECURSION_LIMIT,
DEFAULT_TEST_CMD_TIMEOUT,
VALID_PROVIDERS,
)
from ra_aid.database.repositories.key_fact_repository import KeyFactRepositoryManager, get_key_fact_repository
from ra_aid.database.repositories.key_fact_repository import (
KeyFactRepositoryManager,
get_key_fact_repository,
)
from ra_aid.database.repositories.key_snippet_repository import (
KeySnippetRepositoryManager, get_key_snippet_repository
KeySnippetRepositoryManager,
get_key_snippet_repository,
)
from ra_aid.database.repositories.human_input_repository import (
HumanInputRepositoryManager, get_human_input_repository
HumanInputRepositoryManager,
get_human_input_repository,
)
from ra_aid.database.repositories.research_note_repository import (
ResearchNoteRepositoryManager, get_research_note_repository
ResearchNoteRepositoryManager,
get_research_note_repository,
)
from ra_aid.database.repositories.trajectory_repository import (
TrajectoryRepositoryManager, get_trajectory_repository
TrajectoryRepositoryManager,
get_trajectory_repository,
)
from ra_aid.database.repositories.session_repository import (
SessionRepositoryManager, get_session_repository
)
from ra_aid.database.repositories.related_files_repository import (
RelatedFilesRepositoryManager
)
from ra_aid.database.repositories.work_log_repository import (
WorkLogRepositoryManager
RelatedFilesRepositoryManager,
)
from ra_aid.database.repositories.work_log_repository import WorkLogRepositoryManager
from ra_aid.database.repositories.config_repository import (
ConfigRepositoryManager,
get_config_repository
get_config_repository,
)
from ra_aid.env_inv import EnvDiscovery
from ra_aid.env_inv_context import EnvInvManager, get_env_inv
@ -90,19 +83,154 @@ from ra_aid.tools.human import ask_human
logger = get_logger(__name__)
# Configure litellm to suppress debug logs
os.environ["LITELLM_LOG"] = "ERROR"
litellm.suppress_debug_info = True
litellm.set_verbose = False
def launch_webui(host: str, port: int):
# Explicitly configure LiteLLM's loggers
for logger_name in ["litellm", "LiteLLM"]:
litellm_logger = logging.getLogger(logger_name)
litellm_logger.setLevel(logging.WARNING)
litellm_logger.propagate = True
# Use litellm's internal method to disable debugging
if hasattr(litellm, "_logging") and hasattr(litellm._logging, "_disable_debugging"):
litellm._logging._disable_debugging()
def launch_server(host: str, port: int, args):
"""Launch the RA.Aid web interface."""
from ra_aid.webui import run_server
from ra_aid.server import run_server
from ra_aid.database.connection import DatabaseManager
from ra_aid.database.repositories.session_repository import SessionRepositoryManager
from ra_aid.database.repositories.key_fact_repository import KeyFactRepositoryManager
from ra_aid.database.repositories.key_snippet_repository import KeySnippetRepositoryManager
from ra_aid.database.repositories.human_input_repository import HumanInputRepositoryManager
from ra_aid.database.repositories.research_note_repository import ResearchNoteRepositoryManager
from ra_aid.database.repositories.related_files_repository import RelatedFilesRepositoryManager
from ra_aid.database.repositories.trajectory_repository import TrajectoryRepositoryManager
from ra_aid.database.repositories.work_log_repository import WorkLogRepositoryManager
from ra_aid.database.repositories.config_repository import ConfigRepositoryManager
from ra_aid.env_inv_context import EnvInvManager
from ra_aid.env_inv import EnvDiscovery
# Set the console handler level to INFO for server mode
# Get the root logger and modify the console handler
root_logger = logging.getLogger()
for handler in root_logger.handlers:
# Check if this is a console handler (outputs to stdout/stderr)
if isinstance(handler, logging.StreamHandler) and handler.stream in [sys.stdout, sys.stderr]:
# Set console handler to INFO level for better visibility in server mode
handler.setLevel(logging.INFO)
logger.debug("Modified console logging level to INFO for server mode")
# Apply any pending database migrations
from ra_aid.database import ensure_migrations_applied
try:
migration_result = ensure_migrations_applied()
if not migration_result:
logger.warning("Database migrations failed but execution will continue")
except Exception as e:
logger.error(f"Database migration error: {str(e)}")
# Check dependencies before proceeding
check_dependencies()
# Validate environment (expert_enabled, web_research_enabled)
(
expert_enabled,
expert_missing,
web_research_enabled,
web_research_missing,
) = validate_environment(
args
) # Will exit if main env vars missing
logger.debug("Environment validation successful")
# Validate model configuration early
model_config = models_params.get(args.provider, {}).get(
args.model or "", {}
)
supports_temperature = model_config.get(
"supports_temperature",
args.provider
in [
"anthropic",
"openai",
"openrouter",
"openai-compatible",
"deepseek",
],
)
if supports_temperature and args.temperature is None:
args.temperature = model_config.get("default_temperature")
if args.temperature is None:
cpm(
f"This model supports temperature argument but none was given. Setting default temperature to {DEFAULT_TEMPERATURE}."
)
args.temperature = DEFAULT_TEMPERATURE
logger.debug(
f"Using default temperature {args.temperature} for model {args.model}"
)
# Initialize config dictionary with values from args and environment validation
config = {
"provider": args.provider,
"model": args.model,
"expert_provider": args.expert_provider,
"expert_model": args.expert_model,
"temperature": args.temperature,
"experimental_fallback_handler": args.experimental_fallback_handler,
"expert_enabled": expert_enabled,
"web_research_enabled": web_research_enabled,
"show_thoughts": args.show_thoughts,
"show_cost": args.show_cost,
"force_reasoning_assistance": args.reasoning_assistance,
"disable_reasoning_assistance": args.no_reasoning_assistance
}
# Initialize environment discovery
env_discovery = EnvDiscovery()
env_discovery.discover()
env_data = env_discovery.format_markdown()
print(f"Starting RA.Aid web interface on http://{host}:{port}")
run_server(host=host, port=port)
# Initialize database connection and repositories
with DatabaseManager() as db, \
SessionRepositoryManager(db) as session_repo, \
KeyFactRepositoryManager(db) as key_fact_repo, \
KeySnippetRepositoryManager(db) as key_snippet_repo, \
HumanInputRepositoryManager(db) as human_input_repo, \
ResearchNoteRepositoryManager(db) as research_note_repo, \
RelatedFilesRepositoryManager() as related_files_repo, \
TrajectoryRepositoryManager(db) as trajectory_repo, \
WorkLogRepositoryManager() as work_log_repo, \
ConfigRepositoryManager(config) as config_repo, \
EnvInvManager(env_data) as env_inv:
# This initializes all repositories and makes them available via their respective get methods
logger.debug("Initialized SessionRepository")
logger.debug("Initialized KeyFactRepository")
logger.debug("Initialized KeySnippetRepository")
logger.debug("Initialized HumanInputRepository")
logger.debug("Initialized ResearchNoteRepository")
logger.debug("Initialized RelatedFilesRepository")
logger.debug("Initialized TrajectoryRepository")
logger.debug("Initialized WorkLogRepository")
logger.debug("Initialized ConfigRepository")
logger.debug("Initialized Environment Inventory")
# Run the server within the context managers
run_server(host=host, port=port)
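The backslash-continued `with` chain above enters ten context managers in sequence. An equivalent pattern, sketched here with a hypothetical `repo` stand-in for the repository manager classes, is `contextlib.ExitStack`, which scales more gracefully as repositories are added:

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def repo(name):
    # Hypothetical stand-in for a repository manager (e.g. SessionRepositoryManager).
    yield f"{name}-repo"

# Enter every manager on one stack; all are closed in reverse order on exit,
# just as the nested `with ... \` chain does.
with ExitStack() as stack:
    repos = [stack.enter_context(repo(n)) for n in ("session", "key_fact", "trajectory")]
    print(repos)  # ['session-repo', 'key_fact-repo', 'trajectory-repo']
```

Either form guarantees teardown even if `run_server` raises; the explicit chain has the advantage of binding each repository to a named variable.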
def parse_arguments(args=None):
ANTHROPIC_DEFAULT_MODEL = "claude-3-7-sonnet-20250219"
ANTHROPIC_DEFAULT_MODEL = DEFAULT_MODEL
OPENAI_DEFAULT_MODEL = "gpt-4o"
# Case-insensitive log level argument type
def log_level_type(value):
value = value.lower()
@ -199,8 +327,10 @@ Examples:
help="Enable chat mode with direct human interaction (implies --hil)",
)
parser.add_argument(
"--log-mode", choices=["console", "file"], default="file",
help="Logging mode: 'console' shows all logs in console, 'file' logs to file with only warnings+ in console"
"--log-mode",
choices=["console", "file"],
default="file",
help="Logging mode: 'console' shows all logs in console, 'file' logs to file with only warnings+ in console",
)
parser.add_argument(
"--pretty-logger", action="store_true", help="Enable pretty logging output"
@ -264,21 +394,21 @@ Examples:
help=f"Timeout in seconds for test command execution (default: {DEFAULT_TEST_CMD_TIMEOUT})",
)
parser.add_argument(
"--webui",
"--server",
action="store_true",
help="Launch the web interface",
)
parser.add_argument(
"--webui-host",
"--server-host",
type=str,
default="0.0.0.0",
help="Host to listen on for web interface (default: 0.0.0.0)",
)
parser.add_argument(
"--webui-port",
"--server-port",
type=int,
default=8080,
help="Port to listen on for web interface (default: 8080)",
default=1818,
help="Port to listen on for web interface (default: 1818)",
)
parser.add_argument(
"--wipe-project-memory",
@ -290,6 +420,11 @@ Examples:
action="store_true",
help="Display model thinking content extracted from think tags when supported by the model",
)
parser.add_argument(
"--show-cost",
action="store_true",
help="Display cost information as the agent works",
)
parser.add_argument(
"--reasoning-assistance",
action="store_true",
@ -378,20 +513,20 @@ def is_stage_requested(stage: str) -> bool:
def wipe_project_memory():
"""Delete the project database file to wipe all stored memory.
Returns:
str: A message indicating the result of the operation
"""
import os
from pathlib import Path
cwd = os.getcwd()
ra_aid_dir = Path(os.path.join(cwd, ".ra-aid"))
db_path = os.path.join(ra_aid_dir, "pk.db")
if not os.path.exists(db_path):
return "No project memory found to wipe."
try:
os.remove(db_path)
return "Project memory wiped successfully."
@ -403,11 +538,11 @@ def wipe_project_memory():
def build_status():
"""Build status panel with model and feature information.
Includes memory statistics at the bottom with counts of key facts, snippets, and research notes.
"""
status = Text()
# Get the config repository to get model/provider information
config_repo = get_config_repository()
provider = config_repo.get("provider", "")
@ -415,12 +550,14 @@ def build_status():
temperature = config_repo.get("temperature")
expert_provider = config_repo.get("expert_provider", "")
expert_model = config_repo.get("expert_model", "")
experimental_fallback_handler = config_repo.get("experimental_fallback_handler", False)
experimental_fallback_handler = config_repo.get(
"experimental_fallback_handler", False
)
web_research_enabled = config_repo.get("web_research_enabled", False)
# Get the expert enabled status
expert_enabled = bool(expert_provider and expert_model)
# Basic model information
status.append("🤖 ")
status.append(f"{provider}/{model}")
@ -452,39 +589,41 @@ def build_status():
[fb_handler._format_model(m) for m in fb_handler.fallback_tool_models]
)
status.append(msg)
# Add memory statistics
# Get counts of key facts, snippets, and research notes with error handling
fact_count = 0
snippet_count = 0
note_count = 0
try:
fact_count = len(get_key_fact_repository().get_all())
except RuntimeError as e:
logger.debug(f"Failed to get key facts count: {e}")
try:
snippet_count = len(get_key_snippet_repository().get_all())
except RuntimeError as e:
logger.debug(f"Failed to get key snippets count: {e}")
try:
note_count = len(get_research_note_repository().get_all())
except RuntimeError as e:
logger.debug(f"Failed to get research notes count: {e}")
# Add memory statistics line with reset option note
status.append(f"\n💾 Memory: {fact_count} facts, {snippet_count} snippets, {note_count} notes")
status.append(
f"\n💾 Memory: {fact_count} facts, {snippet_count} snippets, {note_count} notes"
)
if fact_count > 0 or snippet_count > 0 or note_count > 0:
status.append(" (use --wipe-project-memory to reset)")
# Check for newer version
version_message = check_for_newer_version()
if version_message:
status.append("\n\n")
status.append(version_message, style="yellow")
return status
@ -493,7 +632,7 @@ def main():
args = parse_arguments()
setup_logging(args.log_mode, args.pretty_logger, args.log_level)
logger.debug("Starting RA.Aid with arguments: %s", args)
# Check if we need to wipe project memory before starting
if args.wipe_project_memory:
result = wipe_project_memory()
@ -501,8 +640,8 @@ def main():
print(f"📋 {result}")
# Launch web interface if requested
if args.webui:
launch_webui(args.webui_host, args.webui_port)
if args.server:
launch_server(args.server_host, args.server_port, args)
return
try:
@ -519,14 +658,15 @@ def main():
# Initialize empty config dictionary to be populated later
config = {}
# Initialize repositories with database connection
# Create environment inventory data
env_discovery = EnvDiscovery()
env_discovery.discover()
env_data = env_discovery.format_markdown()
with KeyFactRepositoryManager(db) as key_fact_repo, \
with SessionRepositoryManager(db) as session_repo, \
KeyFactRepositoryManager(db) as key_fact_repo, \
KeySnippetRepositoryManager(db) as key_snippet_repo, \
HumanInputRepositoryManager(db) as human_input_repo, \
ResearchNoteRepositoryManager(db) as research_note_repo, \
@ -536,6 +676,7 @@ def main():
ConfigRepositoryManager(config) as config_repo, \
EnvInvManager(env_data) as env_inv:
# This initializes all repositories and makes them available via their respective get methods
logger.debug("Initialized SessionRepository")
logger.debug("Initialized KeyFactRepository")
logger.debug("Initialized KeySnippetRepository")
logger.debug("Initialized HumanInputRepository")
@ -545,6 +686,10 @@ def main():
logger.debug("Initialized WorkLogRepository")
logger.debug("Initialized ConfigRepository")
logger.debug("Initialized Environment Inventory")
# Create a new session for this program run
logger.debug("Initializing new session")
session_repo.create_session()
# Check dependencies before proceeding
check_dependencies()
@ -554,7 +699,9 @@ def main():
expert_missing,
web_research_enabled,
web_research_missing,
) = validate_environment(args) # Will exit if main env vars missing
) = validate_environment(
args
) # Will exit if main env vars missing
logger.debug("Environment validation successful")
# Validate model configuration early
@ -590,11 +737,16 @@ def main():
config_repo.set("expert_provider", args.expert_provider)
config_repo.set("expert_model", args.expert_model)
config_repo.set("temperature", args.temperature)
config_repo.set("experimental_fallback_handler", args.experimental_fallback_handler)
config_repo.set(
"experimental_fallback_handler", args.experimental_fallback_handler
)
config_repo.set("web_research_enabled", web_research_enabled)
config_repo.set("show_thoughts", args.show_thoughts)
config_repo.set("show_cost", args.show_cost)
config_repo.set("force_reasoning_assistance", args.reasoning_assistance)
config_repo.set("disable_reasoning_assistance", args.no_reasoning_assistance)
config_repo.set(
"disable_reasoning_assistance", args.no_reasoning_assistance
)
# Build status panel with memory statistics
status = build_status()
@ -663,13 +815,15 @@ def main():
initial_request = ask_human.invoke(
{"question": "What would you like help with?"}
)
# Record chat input in database (redundant as ask_human already records it,
# but needed in case the ask_human implementation changes)
try:
# Using get_human_input_repository() to access the repository from context
human_input_repository = get_human_input_repository()
human_input_repository.create(content=initial_request, source='chat')
human_input_repository.create(
content=initial_request, source="chat"
)
human_input_repository.garbage_collect()
except Exception as e:
logger.error(f"Failed to record initial chat input: {str(e)}")
@ -698,6 +852,7 @@ def main():
config_repo.set("expert_model", args.expert_model)
config_repo.set("temperature", args.temperature)
config_repo.set("show_thoughts", args.show_thoughts)
config_repo.set("show_cost", args.show_cost)
config_repo.set("force_reasoning_assistance", args.reasoning_assistance)
config_repo.set("disable_reasoning_assistance", args.no_reasoning_assistance)
@ -726,8 +881,12 @@ def main():
),
working_directory=working_directory,
current_date=current_date,
key_facts=format_key_facts_dict(get_key_fact_repository().get_facts_dict()),
key_snippets=format_key_snippets_dict(get_key_snippet_repository().get_snippets_dict()),
key_facts=format_key_facts_dict(
get_key_fact_repository().get_facts_dict()
),
key_snippets=format_key_snippets_dict(
get_key_snippet_repository().get_snippets_dict()
),
project_info=formatted_project_info,
env_inv=get_env_inv(),
),
@ -759,12 +918,12 @@ def main():
sys.exit(1)
base_task = args.message
# Record CLI input in database
try:
# Using get_human_input_repository() to access the repository from context
human_input_repository = get_human_input_repository()
human_input_repository.create(content=base_task, source='cli')
human_input_repository.create(content=base_task, source="cli")
# Run garbage collection to ensure we don't exceed 100 inputs
human_input_repository.garbage_collect()
logger.debug(f"Recorded CLI input: {base_task}")
@ -798,19 +957,25 @@ def main():
config_repo.set("expert_model", args.expert_model)
# Store planner config with fallback to base values
config_repo.set("planner_provider", args.planner_provider or args.provider)
config_repo.set(
"planner_provider", args.planner_provider or args.provider
)
config_repo.set("planner_model", args.planner_model or args.model)
# Store research config with fallback to base values
config_repo.set("research_provider", args.research_provider or args.provider)
config_repo.set(
"research_provider", args.research_provider or args.provider
)
config_repo.set("research_model", args.research_model or args.model)
# Store temperature in config
config_repo.set("temperature", args.temperature)
# Store reasoning assistance flags
config_repo.set("force_reasoning_assistance", args.reasoning_assistance)
config_repo.set("disable_reasoning_assistance", args.no_reasoning_assistance)
config_repo.set(
"disable_reasoning_assistance", args.no_reasoning_assistance
)
# Set modification tools based on use_aider flag
set_modification_tools(args.use_aider)
@ -854,5 +1019,6 @@ def main():
print()
sys.exit(0)
if __name__ == "__main__":
main()

View File

@ -1,3 +1,3 @@
"""Version information."""
__version__ = "0.16.1"
__version__ = "0.17.1"

View File

@ -825,7 +825,8 @@ class CiaynAgent:
try:
last_result = self._execute_tool(response)
self.chat_history.append(response)
self.fallback_handler.reset_fallback_handler()
if hasattr(self.fallback_handler, 'reset_fallback_handler'):
self.fallback_handler.reset_fallback_handler()
yield {}
except ToolExecutionError as e:

View File

@ -1,19 +1,14 @@
"""Utility functions for working with agents."""
import inspect
import os
import signal
import sys
import threading
import time
import uuid
from datetime import datetime
from typing import Any, Dict, List, Literal, Optional, Sequence
from typing import Any, Dict, List, Literal, Optional
from langchain_anthropic import ChatAnthropic
from ra_aid.callbacks.anthropic_callback_handler import AnthropicCallbackHandler
import litellm
from anthropic import APIError, APITimeoutError, InternalServerError, RateLimitError
from openai import RateLimitError as OpenAIRateLimitError
from litellm.exceptions import RateLimitError as LiteLLMRateLimitError
@ -23,28 +18,24 @@ from langchain_core.messages import (
BaseMessage,
HumanMessage,
SystemMessage,
trim_messages,
)
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState
from litellm import get_model_info
from rich.console import Console
from rich.markdown import Markdown
from rich.panel import Panel
from ra_aid.agent_context import (
agent_context,
get_depth,
is_completed,
reset_completion_flags,
should_exit,
)
from ra_aid.agent_backends.ciayn_agent import CiaynAgent
from ra_aid.agents_alias import RAgents
from ra_aid.config import DEFAULT_MAX_TEST_CMD_RETRIES, DEFAULT_RECURSION_LIMIT
from ra_aid.console.formatting import print_error, print_stage_header
from ra_aid.config import DEFAULT_MAX_TEST_CMD_RETRIES
from ra_aid.console.formatting import print_error
from ra_aid.console.output import print_agent_output
from ra_aid.exceptions import (
AgentInterrupt,
@ -53,77 +44,26 @@ from ra_aid.exceptions import (
)
from ra_aid.fallback_handler import FallbackHandler
from ra_aid.logging_config import get_logger
from ra_aid.llm import initialize_expert_llm
from ra_aid.models_params import DEFAULT_TOKEN_LIMIT, models_params
from ra_aid.text.processing import process_thinking_content
from ra_aid.project_info import (
display_project_status,
format_project_info,
get_project_info,
)
from ra_aid.prompts.expert_prompts import (
EXPERT_PROMPT_SECTION_IMPLEMENTATION,
EXPERT_PROMPT_SECTION_PLANNING,
EXPERT_PROMPT_SECTION_RESEARCH,
)
from ra_aid.prompts.human_prompts import (
HUMAN_PROMPT_SECTION_IMPLEMENTATION,
HUMAN_PROMPT_SECTION_PLANNING,
HUMAN_PROMPT_SECTION_RESEARCH,
)
from ra_aid.prompts.implementation_prompts import IMPLEMENTATION_PROMPT
from ra_aid.prompts.common_prompts import NEW_PROJECT_HINTS
from ra_aid.prompts.planning_prompts import PLANNING_PROMPT
from ra_aid.prompts.reasoning_assist_prompt import (
REASONING_ASSIST_PROMPT_PLANNING,
REASONING_ASSIST_PROMPT_IMPLEMENTATION,
REASONING_ASSIST_PROMPT_RESEARCH,
)
from ra_aid.prompts.research_prompts import (
RESEARCH_ONLY_PROMPT,
RESEARCH_PROMPT,
)
from ra_aid.prompts.web_research_prompts import (
WEB_RESEARCH_PROMPT,
WEB_RESEARCH_PROMPT_SECTION_CHAT,
WEB_RESEARCH_PROMPT_SECTION_PLANNING,
WEB_RESEARCH_PROMPT_SECTION_RESEARCH,
)
from ra_aid.tool_configs import (
get_implementation_tools,
get_planning_tools,
get_research_tools,
get_web_research_tools,
)
from ra_aid.models_params import DEFAULT_TOKEN_LIMIT
from ra_aid.tools.handle_user_defined_test_cmd_execution import execute_test_command
from ra_aid.database.repositories.key_fact_repository import get_key_fact_repository
from ra_aid.database.repositories.key_snippet_repository import (
get_key_snippet_repository,
)
from ra_aid.database.repositories.human_input_repository import (
get_human_input_repository,
)
from ra_aid.database.repositories.research_note_repository import (
get_research_note_repository,
)
from ra_aid.database.repositories.trajectory_repository import get_trajectory_repository
from ra_aid.database.repositories.work_log_repository import get_work_log_repository
from ra_aid.model_formatters import format_key_facts_dict
from ra_aid.model_formatters.key_snippets_formatter import format_key_snippets_dict
from ra_aid.model_formatters.research_notes_formatter import format_research_notes_dict
from ra_aid.tools.memory import (
get_related_files,
log_work_event,
)
from ra_aid.database.repositories.config_repository import get_config_repository
from ra_aid.env_inv_context import get_env_inv
from ra_aid.anthropic_token_limiter import (
get_model_name_from_chat_model,
sonnet_35_state_modifier,
state_modifier,
get_model_token_limit,
)
from ra_aid.model_detection import is_anthropic_claude
console = Console()
logger = get_logger(__name__)
# Import repositories using get_* functions
from ra_aid.database.repositories.key_fact_repository import get_key_fact_repository
@tool
@@ -133,131 +73,17 @@ def output_markdown_message(message: str) -> str:
return "Message output."
def estimate_messages_tokens(messages: Sequence[BaseMessage]) -> int:
"""Helper function to estimate total tokens in a sequence of messages.
Args:
messages: Sequence of messages to count tokens for
Returns:
Total estimated token count
"""
if not messages:
return 0
estimate_tokens = CiaynAgent._estimate_tokens
return sum(estimate_tokens(msg) for msg in messages)
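The helper above delegates per-message estimation to `CiaynAgent._estimate_tokens` and sums the results. A self-contained sketch of the same shape (the 4-characters-per-token heuristic is an illustrative assumption, not CiaynAgent's actual estimator):

```python
def estimate_tokens(content: str) -> int:
    # Rough heuristic: ~4 characters per token, with a minimum of 1.
    return max(1, len(content) // 4)

def estimate_messages_tokens(contents: list[str]) -> int:
    # Sum the per-message estimates, mirroring the helper above.
    if not contents:
        return 0
    return sum(estimate_tokens(c) for c in contents)

total = estimate_messages_tokens(["hello world", "a" * 40])
```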
def state_modifier(
state: AgentState, max_input_tokens: int = DEFAULT_TOKEN_LIMIT
) -> list[BaseMessage]:
"""Given the agent state and max_tokens, return a trimmed list of messages.
Args:
state: The current agent state containing messages
max_input_tokens: Maximum number of input tokens to allow (default: DEFAULT_TOKEN_LIMIT)
Returns:
list[BaseMessage]: Trimmed list of messages that fits within token limit
"""
messages = state["messages"]
if not messages:
return []
first_message = messages[0]
remaining_messages = messages[1:]
first_tokens = estimate_messages_tokens([first_message])
new_max_tokens = max_input_tokens - first_tokens
trimmed_remaining = trim_messages(
remaining_messages,
token_counter=estimate_messages_tokens,
max_tokens=new_max_tokens,
strategy="last",
allow_partial=False,
)
return [first_message] + trimmed_remaining
def get_model_token_limit(
config: Dict[str, Any], agent_type: Literal["default", "research", "planner"]
) -> Optional[int]:
"""Get the token limit for the current model configuration based on agent type.
Returns:
Optional[int]: The token limit if found, None otherwise
"""
try:
# Try to get config from repository for production use
try:
config_from_repo = get_config_repository().get_all()
# If we succeeded, use the repository config instead of passed config
config = config_from_repo
except RuntimeError:
# In tests, this may fail because the repository isn't set up
# So we'll use the passed config directly
pass
if agent_type == "research":
provider = config.get("research_provider", "") or config.get("provider", "")
model_name = config.get("research_model", "") or config.get("model", "")
elif agent_type == "planner":
provider = config.get("planner_provider", "") or config.get("provider", "")
model_name = config.get("planner_model", "") or config.get("model", "")
else:
provider = config.get("provider", "")
model_name = config.get("model", "")
try:
provider_model = model_name if not provider else f"{provider}/{model_name}"
model_info = get_model_info(provider_model)
max_input_tokens = model_info.get("max_input_tokens")
if max_input_tokens:
logger.debug(
f"Using litellm token limit for {model_name}: {max_input_tokens}"
)
return max_input_tokens
except litellm.exceptions.NotFoundError:
logger.debug(
f"Model {model_name} not found in litellm, falling back to models_params"
)
except Exception as e:
logger.debug(
f"Error getting model info from litellm: {e}, falling back to models_params"
)
# Fallback to models_params dict
# Normalize model name for fallback lookup (e.g. claude-2 -> claude2)
normalized_name = model_name.replace("-", "")
provider_tokens = models_params.get(provider, {})
if normalized_name in provider_tokens:
max_input_tokens = provider_tokens[normalized_name]["token_limit"]
logger.debug(
f"Found token limit for {provider}/{model_name}: {max_input_tokens}"
)
else:
max_input_tokens = None
logger.debug(f"Could not find token limit for {provider}/{model_name}")
return max_input_tokens
except Exception as e:
logger.warning(f"Failed to get model token limit: {e}")
return None
def build_agent_kwargs(
checkpointer: Optional[Any] = None,
model: ChatAnthropic = None,
max_input_tokens: Optional[int] = None,
) -> Dict[str, Any]:
"""Build kwargs dictionary for agent creation.
Args:
checkpointer: Optional memory checkpointer
model: The language model to use for token counting
max_input_tokens: Optional token limit for the model
Returns:
Dictionary of kwargs for agent creation
@@ -270,37 +96,31 @@ def build_agent_kwargs(
agent_kwargs["checkpointer"] = checkpointer
config = get_config_repository().get_all()
if config.get("limit_tokens", True) and is_anthropic_claude(config):
if (
config.get("limit_tokens", True)
and is_anthropic_claude(config)
and model is not None
):
def wrapped_state_modifier(state: AgentState) -> list[BaseMessage]:
return state_modifier(state, max_input_tokens=max_input_tokens)
model_name = get_model_name_from_chat_model(model)
if any(
pattern in model_name
for pattern in ["claude-3.5", "claude3.5", "claude-3-5"]
):
return sonnet_35_state_modifier(
state, max_input_tokens=max_input_tokens
)
return state_modifier(state, model, max_input_tokens=max_input_tokens)
agent_kwargs["state_modifier"] = wrapped_state_modifier
agent_kwargs["name"] = "React"
return agent_kwargs
def is_anthropic_claude(config: Dict[str, Any]) -> bool:
"""Check if the provider and model name indicate an Anthropic Claude model.
Args:
config: Configuration dictionary containing provider and model information
Returns:
bool: True if this is an Anthropic Claude model
"""
# For backwards compatibility, allow passing of config directly
provider = config.get("provider", "")
model_name = config.get("model", "")
result = (
provider.lower() == "anthropic"
and model_name
and "claude" in model_name.lower()
) or (
provider.lower() == "openrouter"
and model_name.lower().startswith("anthropic/claude-")
)
return result
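The detection logic reduces to two provider-specific checks. A minimal standalone version (same logic, plain arguments instead of a config dict):

```python
def is_anthropic_claude(provider: str, model_name: str) -> bool:
    # Direct Anthropic models: provider "anthropic" with "claude" in the name.
    if provider.lower() == "anthropic" and model_name and "claude" in model_name.lower():
        return True
    # OpenRouter-hosted Claude models use the "anthropic/claude-" prefix.
    return provider.lower() == "openrouter" and model_name.lower().startswith("anthropic/claude-")
```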
def create_agent(
@@ -339,13 +159,14 @@ def create_agent(
# So we'll use the passed config directly
pass
max_input_tokens = (
get_model_token_limit(config, agent_type) or DEFAULT_TOKEN_LIMIT
get_model_token_limit(config, agent_type, model) or DEFAULT_TOKEN_LIMIT
)
# Use REACT agent for Anthropic Claude models, otherwise use CIAYN
if is_anthropic_claude(config):
logger.debug("Using create_react_agent to instantiate agent.")
agent_kwargs = build_agent_kwargs(checkpointer, max_input_tokens)
agent_kwargs = build_agent_kwargs(checkpointer, model, max_input_tokens)
return create_react_agent(
model, tools, interrupt_after=["tools"], **agent_kwargs
)
@@ -357,17 +178,13 @@ def create_agent(
# Default to REACT agent if provider/model detection fails
logger.warning(f"Failed to detect model type: {e}. Defaulting to REACT agent.")
config = get_config_repository().get_all()
max_input_tokens = get_model_token_limit(config, agent_type)
agent_kwargs = build_agent_kwargs(checkpointer, max_input_tokens)
max_input_tokens = get_model_token_limit(config, agent_type, model)
agent_kwargs = build_agent_kwargs(checkpointer, model, max_input_tokens)
return create_react_agent(
model, tools, interrupt_after=["tools"], **agent_kwargs
)
from ra_aid.agents.research_agent import run_research_agent, run_web_research_agent
from ra_aid.agents.implementation_agent import run_task_implementation_agent
_CONTEXT_STACK = []
_INTERRUPT_CONTEXT = None
_FEEDBACK_MODE = False
@@ -462,7 +279,7 @@ def _handle_api_error(e, attempt, max_retries, base_delay):
logger.warning("API error (attempt %d/%d): %s", attempt + 1, max_retries, str(e))
delay = base_delay * (2**attempt)
error_message = f"Encountered {e.__class__.__name__}: {e}. Retrying in {delay}s... (Attempt {attempt+1}/{max_retries})"
# Record error in trajectory
trajectory_repo = get_trajectory_repository()
human_input_id = get_human_input_repository().get_most_recent_id()
@@ -474,9 +291,9 @@ def _handle_api_error(e, attempt, max_retries, base_delay):
record_type="error",
human_input_id=human_input_id,
is_error=True,
error_message=error_message
error_message=error_message,
)
print_error(error_message)
start = time.monotonic()
while time.monotonic() - start < delay:
@@ -637,7 +454,9 @@ def run_agent_with_retry(
try:
_run_agent_stream(agent, msg_list)
if fallback_handler:
if fallback_handler and hasattr(
fallback_handler, "reset_fallback_handler"
):
fallback_handler.reset_fallback_handler()
should_break, prompt, auto_test, test_attempts = (
_execute_test_command_wrapper(


@@ -33,6 +33,7 @@ from ra_aid.logging_config import get_logger
from ra_aid.model_formatters import format_key_facts_dict
from ra_aid.model_formatters.key_snippets_formatter import format_key_snippets_dict
from ra_aid.model_formatters.research_notes_formatter import format_research_notes_dict
from ra_aid.text.processing import process_thinking_content
from ra_aid.models_params import models_params
from ra_aid.project_info import format_project_info, get_project_info
from ra_aid.prompts.expert_prompts import EXPERT_PROMPT_SECTION_PLANNING
@@ -286,7 +287,7 @@ def run_planning_agent(
content = "\n".join(str(item) for item in content)
elif supports_think_tag or supports_thinking:
# Process thinking content using the centralized function
content, _ = agent_utils.process_thinking_content(
content, _ = process_thinking_content(
content=content,
supports_think_tag=supports_think_tag,
supports_thinking=supports_thinking,


@@ -34,6 +34,7 @@ from ra_aid.logging_config import get_logger
from ra_aid.model_formatters import format_key_facts_dict
from ra_aid.model_formatters.key_snippets_formatter import format_key_snippets_dict
from ra_aid.model_formatters.research_notes_formatter import format_research_notes_dict
from ra_aid.text.processing import process_thinking_content
from ra_aid.models_params import models_params
from ra_aid.project_info import display_project_status, format_project_info, get_project_info
from ra_aid.prompts.expert_prompts import EXPERT_PROMPT_SECTION_RESEARCH
@@ -293,7 +294,7 @@ def run_research_agent(
content = "\n".join(str(item) for item in content)
elif supports_think_tag or supports_thinking:
# Process thinking content using the centralized function
content, _ = agent_utils.process_thinking_content(
content, _ = process_thinking_content(
content=content,
supports_think_tag=supports_think_tag,
supports_thinking=supports_thinking,


@@ -0,0 +1,312 @@
"""Utilities for handling Anthropic-specific message formats and trimming."""
from typing import Callable, List, Literal, Optional, Sequence, Union, cast
from langchain_core.messages import (
AIMessage,
BaseMessage,
ChatMessage,
FunctionMessage,
HumanMessage,
SystemMessage,
ToolMessage,
)
def _is_message_type(
message: BaseMessage, message_types: Union[str, type, List[Union[str, type]]]
) -> bool:
"""Check if a message is of a specific type or types.
Args:
message: The message to check
message_types: Type(s) to check against (string name or class)
Returns:
bool: True if message matches any of the specified types
"""
if not isinstance(message_types, list):
message_types = [message_types]
types_str = [t for t in message_types if isinstance(t, str)]
types_classes = tuple(t for t in message_types if isinstance(t, type))
return message.type in types_str or isinstance(message, types_classes)
def has_tool_use(message: BaseMessage) -> bool:
"""Check if a message contains tool use.
Args:
message: The message to check
Returns:
bool: True if the message contains tool use
"""
if not isinstance(message, AIMessage):
return False
# Check content for tool_use
if isinstance(message.content, str) and "tool_use" in message.content:
return True
# Check content list for tool_use blocks
if isinstance(message.content, list):
for item in message.content:
if isinstance(item, dict) and item.get("type") == "tool_use":
return True
# Check additional_kwargs for tool_calls
if hasattr(message, "additional_kwargs") and message.additional_kwargs.get(
"tool_calls"
):
return True
return False
def is_tool_pair(message1: BaseMessage, message2: BaseMessage) -> bool:
"""Check if two messages form a tool use/result pair.
Args:
message1: First message
message2: Second message
Returns:
bool: True if the messages form a tool use/result pair
"""
return (
isinstance(message1, AIMessage)
and isinstance(message2, ToolMessage)
and has_tool_use(message1)
)
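anthropic_trim_messages (below) scans the message list for adjacent tool_use/tool_result pairs using exactly this predicate. The scan itself, sketched over plain tags instead of message objects:

```python
def find_tool_pairs(tags: list[str]) -> list[tuple[int, int]]:
    # tags: "ai+tool_use" for an AIMessage containing tool use,
    # "tool" for a ToolMessage, "other" for everything else.
    pairs = []
    i = 0
    while i < len(tags) - 1:
        if tags[i] == "ai+tool_use" and tags[i + 1] == "tool":
            pairs.append((i, i + 1))
            i += 2
        else:
            i += 1
    return pairs
```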
def anthropic_trim_messages(
messages: Sequence[BaseMessage],
*,
max_tokens: int,
token_counter: Callable[[List[BaseMessage]], int],
strategy: Literal["first", "last"] = "last",
num_messages_to_keep: int = 2,
allow_partial: bool = False,
include_system: bool = True,
start_on: Optional[Union[str, type, List[Union[str, type]]]] = None,
) -> List[BaseMessage]:
"""Trim messages to fit within a token limit, with Anthropic-specific handling.
Warning: not fully implemented. Only the "last" strategy is supported and tested; allow_partial and the "first" strategy are not.
This function is similar to langchain_core's trim_messages but with special
handling for Anthropic message formats to avoid API errors.
It always keeps the first num_messages_to_keep messages.
Args:
messages: Sequence of messages to trim
max_tokens: Maximum number of tokens allowed
token_counter: Function to count tokens in messages
strategy: Whether to keep the "first" or "last" messages
num_messages_to_keep: Number of leading messages that are always kept
allow_partial: Whether to allow partial messages
include_system: Whether to always include the system message
start_on: Message type to start on (only for "last" strategy)
Returns:
List[BaseMessage]: Trimmed messages that fit within token limit
"""
if not messages:
return []
messages = list(messages)
# Always keep the first num_messages_to_keep messages
kept_messages = messages[:num_messages_to_keep]
remaining_msgs = messages[num_messages_to_keep:]
# For Anthropic, we need to maintain the conversation structure where:
# 1. Every AIMessage with tool_use must be followed by a ToolMessage
# 2. Every AIMessage that follows a ToolMessage must start with a tool_result
# First, check if we have any tool_use in the messages
has_tool_use_anywhere = any(has_tool_use(msg) for msg in messages)
# If we have tool_use anywhere, we need to be very careful about trimming
if has_tool_use_anywhere:
# For safety, just keep all messages if we're under the token limit
if token_counter(messages) <= max_tokens:
return messages
# We need to identify all tool_use/tool_result relationships
# First, find all AIMessage+ToolMessage pairs
pairs = []
i = 0
while i < len(messages) - 1:
if is_tool_pair(messages[i], messages[i + 1]):
pairs.append((i, i + 1))
i += 2
else:
i += 1
# For Anthropic, we need to ensure that:
# 1. If we include an AIMessage with tool_use, we must include the following ToolMessage
# 2. If we include a ToolMessage, we must include the preceding AIMessage with tool_use
# The safest approach is to always keep complete AIMessage+ToolMessage pairs together
# First, identify all complete pairs
complete_pairs = []
for start, end in pairs:
complete_pairs.append((start, end))
# Now we'll build our result, starting with the kept_messages
# But we need to be careful about the first message if it has tool_use
result = []
# Check if the last message in kept_messages has tool_use
if (
kept_messages
and isinstance(kept_messages[-1], AIMessage)
and has_tool_use(kept_messages[-1])
):
# We need to find the corresponding ToolMessage
for i, (ai_idx, tool_idx) in enumerate(pairs):
if messages[ai_idx] is kept_messages[-1]:
# Found the pair, add all kept_messages except the last one
result.extend(kept_messages[:-1])
# Add the AIMessage and ToolMessage as a pair
result.extend([messages[ai_idx], messages[tool_idx]])
# Remove this pair from the list of pairs to process later
pairs = pairs[:i] + pairs[i + 1 :]
break
else:
# If we didn't find a matching pair, just add all kept_messages
result.extend(kept_messages)
else:
# No tool_use in the last kept message, just add all kept_messages
result.extend(kept_messages)
# If we're using the "last" strategy, we'll try to include pairs from the end
if strategy == "last":
# First collect all pairs we can include within the token limit
pairs_to_include = []
# Process pairs from the end (newest first)
for pair_idx, (ai_idx, tool_idx) in enumerate(reversed(complete_pairs)):
# Try adding this pair
test_msgs = result.copy()
# Add all previously selected pairs
for prev_ai_idx, prev_tool_idx in pairs_to_include:
test_msgs.extend([messages[prev_ai_idx], messages[prev_tool_idx]])
# Add this pair
test_msgs.extend([messages[ai_idx], messages[tool_idx]])
if token_counter(test_msgs) <= max_tokens:
# This pair fits, add it to our list
pairs_to_include.append((ai_idx, tool_idx))
else:
# This pair would exceed the token limit
break
# Now add the pairs in the correct order
# Sort by index to maintain the original conversation flow
pairs_to_include.sort(key=lambda x: x[0])
for ai_idx, tool_idx in pairs_to_include:
result.extend([messages[ai_idx], messages[tool_idx]])
# No need to sort - we've already added messages in the correct order
return result
# If no tool_use, proceed with normal segmentation
segments = []
i = 0
# Group messages into segments
while i < len(remaining_msgs):
segments.append([remaining_msgs[i]])
i += 1
# Now we have segments that maintain the required structure
# We'll add segments from the end (for "last" strategy) or beginning (for "first")
# until we hit the token limit
if strategy == "last":
# If we have no segments, just return kept_messages
if not segments:
return kept_messages
result = []
# Process segments from the end
for i, segment in enumerate(reversed(segments)):
# Try adding this segment
test_msgs = segment + result
if token_counter(kept_messages + test_msgs) <= max_tokens:
result = segment + result
else:
# This segment would exceed the token limit
break
final_result = kept_messages + result
# For Anthropic, we need to ensure the conversation follows a valid structure
# We'll do a final check of the entire conversation
# Validate the conversation structure
valid_result = []
i = 0
# Process messages in order
while i < len(final_result):
current_msg = final_result[i]
# If this is an AIMessage with tool_use, it must be followed by a ToolMessage
if (
i < len(final_result) - 1
and isinstance(current_msg, AIMessage)
and has_tool_use(current_msg)
):
if isinstance(final_result[i + 1], ToolMessage):
# This is a valid tool_use + tool_result pair
valid_result.append(current_msg)
valid_result.append(final_result[i + 1])
i += 2
else:
# Invalid: AIMessage with tool_use not followed by ToolMessage
# Skip this message to maintain valid structure
i += 1
else:
# Regular message, just add it
valid_result.append(current_msg)
i += 1
# Final check: don't end with an AIMessage that has tool_use
if (
valid_result
and isinstance(valid_result[-1], AIMessage)
and has_tool_use(valid_result[-1])
):
valid_result.pop() # Remove the last message
return valid_result
elif strategy == "first":
result = []
# Process segments from the beginning
for i, segment in enumerate(segments):
# Try adding this segment
test_msgs = result + segment
if token_counter(kept_messages + test_msgs) <= max_tokens:
result = result + segment
else:
# This segment would exceed the token limit
break
final_result = kept_messages + result
return final_result
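Stripped of the Anthropic pairing rules, the core "last" strategy above is: keep the first N messages unconditionally, then admit messages from the newest end while the budget holds. A runnable sketch using string lengths as a stand-in token counter:

```python
def trim_last(messages: list[str], max_tokens: int, num_messages_to_keep: int = 2) -> list[str]:
    kept = messages[:num_messages_to_keep]
    budget = max_tokens - sum(len(m) for m in kept)
    tail: list[str] = []
    # Walk from the newest message backwards, stopping at the first one
    # that no longer fits the remaining budget.
    for msg in reversed(messages[num_messages_to_keep:]):
        if len(msg) > budget:
            break
        tail.insert(0, msg)
        budget -= len(msg)
    return kept + tail
```

Note that, as in the real function, the kept prefix is never trimmed even if it alone exceeds the budget.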


@@ -0,0 +1,303 @@
"""Utilities for handling token limits with Anthropic models."""
from functools import partial
from typing import Any, Dict, List, Optional, Sequence, Tuple
from langchain_core.language_models import BaseChatModel
from ra_aid.config import DEFAULT_MODEL
from ra_aid.model_detection import is_claude_37
from langchain_core.messages import (
BaseMessage,
trim_messages,
)
from langchain_core.messages.base import message_to_dict
from ra_aid.anthropic_message_utils import (
anthropic_trim_messages,
)
from langgraph.prebuilt.chat_agent_executor import AgentState
from litellm import token_counter, get_model_info
from ra_aid.agent_backends.ciayn_agent import CiaynAgent
from ra_aid.database.repositories.config_repository import get_config_repository
from ra_aid.logging_config import get_logger
from ra_aid.models_params import DEFAULT_TOKEN_LIMIT, models_params
logger = get_logger(__name__)
def estimate_messages_tokens(messages: Sequence[BaseMessage]) -> int:
"""Helper function to estimate total tokens in a sequence of messages.
Args:
messages: Sequence of messages to count tokens for
Returns:
Total estimated token count
"""
if not messages:
return 0
estimate_tokens = CiaynAgent._estimate_tokens
return sum(estimate_tokens(msg) for msg in messages)
def convert_message_to_litellm_format(message: BaseMessage) -> Dict:
"""Convert a BaseMessage to the format expected by litellm.
Args:
message: The BaseMessage to convert
Returns:
Dict in litellm format
"""
message_dict = message_to_dict(message)
return {
"role": message_dict["type"],
"content": message_dict["data"]["content"],
}
def create_token_counter_wrapper(model: str):
"""Create a wrapper for token counter that handles BaseMessage conversion.
Args:
model: The model name to use for token counting
Returns:
A function that accepts BaseMessage objects and returns token count
"""
# Create a partial function that already has the model parameter set
base_token_counter = partial(token_counter, model=model)
def wrapped_token_counter(messages: List[BaseMessage]) -> int:
"""Count tokens in a list of messages, converting BaseMessage to dict for litellm token counter usage.
Args:
messages: List of BaseMessage objects
Returns:
Token count for the messages
"""
if not messages:
return 0
litellm_messages = [convert_message_to_litellm_format(msg) for msg in messages]
result = base_token_counter(messages=litellm_messages)
return result
return wrapped_token_counter
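The wrapper pattern above, with a hypothetical stand-in for litellm's counter (litellm's real `token_counter` accepts `model=` and `messages=`, where messages are `{"role": ..., "content": ...}` dicts; the counting heuristic below is illustrative only):

```python
from functools import partial

def token_counter(model: str, messages: list[dict]) -> int:
    # Illustrative stand-in: ~4 characters per token.
    return sum(len(m["content"]) // 4 for m in messages)

def create_token_counter_wrapper(model: str):
    base = partial(token_counter, model)
    def wrapped(messages: list[tuple[str, str]]) -> int:
        # Convert (role, content) pairs to the dict shape the counter expects.
        if not messages:
            return 0
        dicts = [{"role": role, "content": content} for role, content in messages]
        return base(messages=dicts)
    return wrapped

counter = create_token_counter_wrapper("claude-3-7-sonnet")
```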
def state_modifier(
state: AgentState, model: BaseChatModel, max_input_tokens: int = DEFAULT_TOKEN_LIMIT
) -> list[BaseMessage]:
"""Given the agent state and max_tokens, return a trimmed list of messages.
This uses anthropic_trim_messages which always keeps the first 2 messages.
Args:
state: The current agent state containing messages
model: The language model to use for token counting
max_input_tokens: Maximum number of tokens to allow (default: DEFAULT_TOKEN_LIMIT)
Returns:
list[BaseMessage]: Trimmed list of messages that fits within token limit
"""
messages = state["messages"]
if not messages:
return []
model_name = get_model_name_from_chat_model(model)
wrapped_token_counter = create_token_counter_wrapper(model_name)
result = anthropic_trim_messages(
messages,
token_counter=wrapped_token_counter,
max_tokens=max_input_tokens,
strategy="last",
allow_partial=False,
include_system=True,
num_messages_to_keep=2,
)
if len(result) < len(messages):
logger.info(
f"Anthropic Token Limiter Trimmed: {len(messages)} messages → {len(result)} messages"
)
return result
def sonnet_35_state_modifier(
state: AgentState, max_input_tokens: int = DEFAULT_TOKEN_LIMIT
) -> list[BaseMessage]:
"""Given the agent state and max_tokens, return a trimmed list of messages.
Args:
state: The current agent state containing messages
max_input_tokens: Maximum number of tokens to allow (default: DEFAULT_TOKEN_LIMIT)
Returns:
list[BaseMessage]: Trimmed list of messages that fits within token limit
"""
messages = state["messages"]
if not messages:
return []
first_message = messages[0]
remaining_messages = messages[1:]
first_tokens = estimate_messages_tokens([first_message])
new_max_tokens = max_input_tokens - first_tokens
trimmed_remaining = trim_messages(
remaining_messages,
token_counter=estimate_messages_tokens,
max_tokens=new_max_tokens,
strategy="last",
allow_partial=False,
include_system=True,
)
result = [first_message] + trimmed_remaining
return result
def get_provider_and_model_for_agent_type(
config: Dict[str, Any], agent_type: str
) -> Tuple[str, str]:
"""Get the provider and model name for the specified agent type.
Args:
config: Configuration dictionary containing provider and model information
agent_type: Type of agent ("default", "research", or "planner")
Returns:
Tuple[str, str]: A tuple containing (provider, model_name)
"""
if agent_type == "research":
provider = config.get("research_provider", "") or config.get("provider", "")
model_name = config.get("research_model", "") or config.get("model", "")
elif agent_type == "planner":
provider = config.get("planner_provider", "") or config.get("provider", "")
model_name = config.get("planner_model", "") or config.get("model", "")
else:
provider = config.get("provider", "")
model_name = config.get("model", "")
return provider, model_name
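The lookup above follows one rule: agent-specific keys take precedence, and empty or missing values fall back to the global settings. A compact equivalent sketch:

```python
def provider_and_model(config: dict, agent_type: str) -> tuple[str, str]:
    # "research" and "planner" agents get their own key prefix; anything
    # else reads the global "provider"/"model" keys directly.
    prefix = {"research": "research_", "planner": "planner_"}.get(agent_type, "")
    provider = config.get(prefix + "provider", "") or config.get("provider", "")
    model = config.get(prefix + "model", "") or config.get("model", "")
    return provider, model
```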
def get_model_name_from_chat_model(model: Optional[BaseChatModel]) -> str:
"""Extract the model name from a BaseChatModel instance.
Args:
model: The BaseChatModel instance
Returns:
str: The model name extracted from the instance, or DEFAULT_MODEL if not found
"""
if model is None:
return DEFAULT_MODEL
if hasattr(model, "model"):
return model.model
elif hasattr(model, "model_name"):
return model.model_name
else:
logger.debug(f"Could not extract model name from {model}, using DEFAULT_MODEL")
return DEFAULT_MODEL
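The attribute probing above (try `.model`, then `.model_name`, then fall back to the default) can be exercised with simple stand-in classes; the classes here are illustrative, not real langchain model types:

```python
DEFAULT_MODEL = "claude-3-7-sonnet-20250219"

def get_model_name(model) -> str:
    if model is None:
        return DEFAULT_MODEL
    if hasattr(model, "model"):
        return model.model
    if hasattr(model, "model_name"):
        return model.model_name
    return DEFAULT_MODEL

class AnthropicLike:
    model = "anthropic/claude-3-7"

class OpenAILike:
    model_name = "gpt-4o"
```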
def adjust_claude_37_token_limit(
max_input_tokens: int, model: Optional[BaseChatModel]
) -> Optional[int]:
"""Adjust token limit for Claude 3.7 models by subtracting max_tokens.
Args:
max_input_tokens: The original token limit
model: The model instance to check
Returns:
Optional[int]: Adjusted token limit if model is Claude 3.7, otherwise original limit
"""
if not max_input_tokens:
return max_input_tokens
if model and hasattr(model, "model") and is_claude_37(model.model):
if hasattr(model, "max_tokens") and model.max_tokens:
effective_max_input_tokens = max_input_tokens - model.max_tokens
logger.debug(
f"Adjusting token limit for Claude 3.7 model: {max_input_tokens} - {model.max_tokens} = {effective_max_input_tokens}"
)
return effective_max_input_tokens
return max_input_tokens
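The arithmetic above reserves the model's configured output budget out of the input limit, i.e. effective input limit = max_input_tokens - max_tokens. A sketch with illustrative numbers (the limits below are examples, not guaranteed values for any real model):

```python
def adjust_claude_37_limit(max_input_tokens, max_output_tokens, is_claude_37: bool):
    # Reserve the output budget out of the input limit for Claude 3.7;
    # pass other models' limits through unchanged.
    if max_input_tokens and is_claude_37 and max_output_tokens:
        return max_input_tokens - max_output_tokens
    return max_input_tokens
```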
def get_model_token_limit(
config: Dict[str, Any],
agent_type: str = "default",
model: Optional[BaseChatModel] = None,
) -> Optional[int]:
"""Get the token limit for the current model configuration based on agent type.
Args:
config: Configuration dictionary containing provider and model information
agent_type: Type of agent ("default", "research", or "planner")
model: Optional BaseChatModel instance to check for model-specific attributes
Returns:
Optional[int]: The token limit if found, None otherwise
"""
try:
# Try to get config from repository for production use
try:
config_from_repo = get_config_repository().get_all()
# If we succeeded, use the repository config instead of passed config
config = config_from_repo
except RuntimeError:
# In tests, this may fail because the repository isn't set up
# So we'll use the passed config directly
pass
provider, model_name = get_provider_and_model_for_agent_type(config, agent_type)
# Always attempt to get model info from litellm first
provider_model = model_name if not provider else f"{provider}/{model_name}"
try:
model_info = get_model_info(provider_model)
max_input_tokens = model_info.get("max_input_tokens")
if max_input_tokens:
logger.debug(
f"Using litellm token limit for {model_name}: {max_input_tokens}"
)
return adjust_claude_37_token_limit(max_input_tokens, model)
except Exception as e:
logger.debug(
f"Error getting model info from litellm: {e}, falling back to models_params"
)
# Fallback to models_params dict
# Normalize model name for fallback lookup (e.g. claude-2 -> claude2)
normalized_name = model_name.replace("-", "")
provider_tokens = models_params.get(provider, {})
if normalized_name in provider_tokens:
max_input_tokens = provider_tokens[normalized_name]["token_limit"]
logger.debug(
f"Found token limit for {provider}/{model_name}: {max_input_tokens}"
)
else:
max_input_tokens = None
logger.debug(f"Could not find token limit for {provider}/{model_name}")
return adjust_claude_37_token_limit(max_input_tokens, model)
except Exception as e:
logger.warning(f"Failed to get model token limit: {e}")
return None


@@ -6,6 +6,8 @@ DEFAULT_MAX_TOOL_FAILURES = 3
FALLBACK_TOOL_MODEL_LIMIT = 5
RETRY_FALLBACK_COUNT = 3
DEFAULT_TEST_CMD_TIMEOUT = 60 * 5 # 5 minutes in seconds
DEFAULT_MODEL = "claude-3-7-sonnet-20250219"
DEFAULT_SHOW_COST = False
VALID_PROVIDERS = [


@@ -1,19 +1,23 @@
from typing import Any, Dict, Literal, Optional
from typing import Any, Dict, List, Literal, Optional, Sequence
from langchain_core.messages import AIMessage
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from rich.markdown import Markdown
from rich.panel import Panel
from ra_aid.exceptions import ToolExecutionError
from ra_aid.callbacks.anthropic_callback_handler import AnthropicCallbackHandler
from ra_aid.database.repositories.config_repository import get_config_repository
from ra_aid.config import DEFAULT_SHOW_COST
# Import shared console instance
from .formatting import console
def get_cost_subtitle(cost_cb: Optional[AnthropicCallbackHandler]) -> Optional[str]:
"""Generate a subtitle with cost information if a callback is provided."""
if cost_cb:
"""Generate a subtitle with cost information if a callback is provided and show_cost is enabled."""
# Only show cost information if both cost_cb is provided AND show_cost is True
show_cost = get_config_repository().get("show_cost", DEFAULT_SHOW_COST)
if cost_cb and show_cost:
return f"Cost: ${cost_cb.total_cost:.6f} | Tokens: {cost_cb.total_tokens}"
return None
@@ -94,3 +98,57 @@ def cpm(message: str, title: Optional[str] = None, border_style: str = "blue") -
"""
console.print(Panel(Markdown(message), title=title, border_style=border_style))
def print_messages_compact(messages: Sequence[BaseMessage]) -> None:
"""Print a compact representation of a list of messages.
Warning: used mainly for debugging, so do not delete even if it is not referenced anywhere!
For all message types, only the first 30 characters of content are shown.
Args:
messages: A sequence of BaseMessage objects to print
"""
if not messages:
console.print("[italic]No messages[/italic]")
return
for i, msg in enumerate(messages):
msg_type = msg.__class__.__name__
content = msg.content
# Process content based on its type
if isinstance(content, str):
display_content = f"{content[:30]}..." if len(content) > 30 else content
elif isinstance(content, list):
# Handle structured content (list of content blocks)
content_preview = []
for item in content[:2]: # Show first 2 items at most
if isinstance(item, dict):
if item.get("type") == "text":
text = item.get("text", "")
content_preview.append(f"text: {text[:20]}..." if len(text) > 20 else f"text: {text}")
elif item.get("type") == "tool_call":
tool_name = item.get("tool_call", {}).get("name", "unknown")
content_preview.append(f"tool_call: {tool_name}")
else:
content_preview.append(f"{item.get('type', 'unknown')}")
if len(content) > 2:
content_preview.append(f"...({len(content)-2} more)")
display_content = ", ".join(content_preview)
else:
display_content = str(content)[:30] + "..." if len(str(content)) > 30 else str(content)
# Add additional tool message info if available
additional_info = []
if hasattr(msg, "tool_call_id") and msg.tool_call_id:
additional_info.append(f"tool_call_id: {msg.tool_call_id}")
if hasattr(msg, "name") and msg.name:
additional_info.append(f"name: {msg.name}")
if hasattr(msg, "status") and msg.status:
additional_info.append(f"status: {msg.status}")
info_str = f" ({', '.join(additional_info)})" if additional_info else ""
console.print(f"[{i}] [bold]{msg_type}{info_str}[/bold]: {display_content}")
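The truncation rules above (30 characters for plain strings, 20 per text block, first two structured items) can be exercised on their own. This is a plain-Python sketch of the same preview logic, independent of LangChain and Rich; it mirrors the function's rules rather than calling it:

```python
def preview_content(content) -> str:
    """Truncate message content the way print_messages_compact previews it."""
    if isinstance(content, str):
        return f"{content[:30]}..." if len(content) > 30 else content
    if isinstance(content, list):
        parts = []
        for item in content[:2]:  # show the first two blocks at most
            if not isinstance(item, dict):
                parts.append("unknown")
            elif item.get("type") == "text":
                text = item.get("text", "")
                parts.append(f"text: {text[:20]}..." if len(text) > 20 else f"text: {text}")
            elif item.get("type") == "tool_call":
                parts.append(f"tool_call: {item.get('tool_call', {}).get('name', 'unknown')}")
            else:
                parts.append(str(item.get("type", "unknown")))
        if len(content) > 2:
            parts.append(f"...({len(content) - 2} more)")
        return ", ".join(parts)
    text = str(content)
    return f"{text[:30]}..." if len(text) > 30 else text

print(preview_content("hello"))   # hello
print(preview_content("x" * 40))  # first 30 x's followed by ...
print(preview_content([
    {"type": "text", "text": "short"},
    {"type": "tool_call", "tool_call": {"name": "grep"}},
    {"type": "text", "text": "z"},
]))  # text: short, tool_call: grep, ...(1 more)
```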

View File

@@ -42,8 +42,8 @@ def initialize_database():
# to avoid circular imports
# Note: This import needs to be here, not at the top level
try:
from ra_aid.database.models import KeyFact, KeySnippet, HumanInput, ResearchNote, Trajectory
db.create_tables([KeyFact, KeySnippet, HumanInput, ResearchNote, Trajectory], safe=True)
from ra_aid.database.models import KeyFact, KeySnippet, HumanInput, ResearchNote, Trajectory, Session
db.create_tables([KeyFact, KeySnippet, HumanInput, ResearchNote, Trajectory, Session], safe=True)
logger.debug("Ensured database tables exist")
except Exception as e:
logger.error(f"Error creating tables: {str(e)}")
@@ -99,6 +99,25 @@ class BaseModel(peewee.Model):
raise
class Session(BaseModel):
"""
Model representing a session stored in the database.
Sessions track information about each program run, providing a way to group
related records like human inputs, trajectories, and key facts.
Each session record captures details about when the program was started,
what command line arguments were used, and environment information.
"""
start_time = peewee.DateTimeField(default=datetime.datetime.now)
command_line = peewee.TextField(null=True)
program_version = peewee.TextField(null=True)
machine_info = peewee.TextField(null=True) # JSON-encoded machine information
class Meta:
table_name = "session"
class HumanInput(BaseModel):
"""
Model representing human input stored in the database.
@@ -109,6 +128,7 @@ class HumanInput(BaseModel):
"""
content = peewee.TextField()
source = peewee.TextField() # 'cli', 'chat', or 'hil'
session = peewee.ForeignKeyField(Session, backref='human_inputs', null=True)
# created_at and updated_at are inherited from BaseModel
class Meta:
@@ -124,6 +144,7 @@ class KeyFact(BaseModel):
"""
content = peewee.TextField()
human_input = peewee.ForeignKeyField(HumanInput, backref='key_facts', null=True)
session = peewee.ForeignKeyField(Session, backref='key_facts', null=True)
# created_at and updated_at are inherited from BaseModel
class Meta:
@@ -143,6 +164,7 @@ class KeySnippet(BaseModel):
snippet = peewee.TextField()
description = peewee.TextField(null=True)
human_input = peewee.ForeignKeyField(HumanInput, backref='key_snippets', null=True)
session = peewee.ForeignKeyField(Session, backref='key_snippets', null=True)
# created_at and updated_at are inherited from BaseModel
class Meta:
@@ -159,6 +181,7 @@ class ResearchNote(BaseModel):
"""
content = peewee.TextField()
human_input = peewee.ForeignKeyField(HumanInput, backref='research_notes', null=True)
session = peewee.ForeignKeyField(Session, backref='research_notes', null=True)
# created_at and updated_at are inherited from BaseModel
class Meta:
@@ -193,6 +216,7 @@ class Trajectory(BaseModel):
error_message = peewee.TextField(null=True) # The error message
error_type = peewee.TextField(null=True) # The type/class of the error
error_details = peewee.TextField(null=True) # Additional error details like stack traces or context
session = peewee.ForeignKeyField(Session, backref='trajectories', null=True)
# created_at and updated_at are inherited from BaseModel
class Meta:

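The `session` foreign keys added above give every record a path back to the run that produced it. The same shape can be sketched with stdlib `sqlite3` (table and column names copied from the models; peewee itself is deliberately not used here, and the inserted values are illustrative):

```python
import datetime
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE session (
    id INTEGER PRIMARY KEY,
    start_time TEXT,
    command_line TEXT,
    program_version TEXT,
    machine_info TEXT  -- JSON-encoded, as in the peewee model
);
CREATE TABLE human_input (
    id INTEGER PRIMARY KEY,
    content TEXT NOT NULL,
    source TEXT NOT NULL,
    session_id INTEGER REFERENCES session(id)
);
""")
cur = conn.execute(
    "INSERT INTO session (start_time, command_line, machine_info) VALUES (?, ?, ?)",
    (datetime.datetime.now().isoformat(), "ra-aid -m 'fix tests'", json.dumps({"os": "linux"})),
)
session_id = cur.lastrowid
conn.execute(
    "INSERT INTO human_input (content, source, session_id) VALUES (?, ?, ?)",
    ("fix tests", "cli", session_id),
)
# All inputs belonging to one run, via the session back-reference:
rows = conn.execute(
    "SELECT content, source FROM human_input WHERE session_id = ?", (session_id,)
).fetchall()
print(rows)  # [('fix tests', 'cli')]
```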
View File

@@ -0,0 +1,376 @@
"""
Pydantic models for ra_aid database entities.
This module defines Pydantic models that correspond to Peewee ORM models,
providing validation, serialization, and deserialization capabilities.
"""
import datetime
import json
from typing import Dict, List, Any, Optional
from pydantic import BaseModel, ConfigDict, field_serializer, field_validator
class SessionModel(BaseModel):
"""
Pydantic model representing a Session.
This model corresponds to the Session Peewee ORM model and provides
validation and serialization capabilities. It handles the conversion
between JSON-encoded strings and Python dictionaries for the machine_info field.
Attributes:
id: Unique identifier for the session
created_at: When the session record was created
updated_at: When the session record was last updated
start_time: When the program session started
command_line: Command line arguments used to start the program
program_version: Version of the program
machine_info: Dictionary containing machine-specific metadata
"""
id: Optional[int] = None
created_at: datetime.datetime
updated_at: datetime.datetime
start_time: datetime.datetime
command_line: Optional[str] = None
program_version: Optional[str] = None
machine_info: Optional[Dict[str, Any]] = None
# Configure the model to work with ORM objects
model_config = ConfigDict(from_attributes=True)
@field_validator("machine_info", mode="before")
@classmethod
def parse_machine_info(cls, value: Any) -> Optional[Dict[str, Any]]:
"""
Parse the machine_info field from a JSON string to a dictionary.
Args:
value: The value to parse, can be a string, dict, or None
Returns:
Optional[Dict[str, Any]]: The parsed dictionary or None
Raises:
ValueError: If the JSON string is invalid
"""
if value is None:
return None
if isinstance(value, dict):
return value
if isinstance(value, str):
try:
return json.loads(value)
except json.JSONDecodeError as e:
raise ValueError(f"Invalid JSON in machine_info: {e}")
raise ValueError(f"Unexpected type for machine_info: {type(value)}")
@field_serializer("machine_info")
def serialize_machine_info(self, machine_info: Optional[Dict[str, Any]]) -> Optional[str]:
"""
Serialize the machine_info dictionary to a JSON string for storage.
Args:
machine_info: Dictionary to serialize
Returns:
Optional[str]: JSON-encoded string or None
"""
if machine_info is None:
return None
return json.dumps(machine_info)
class HumanInputModel(BaseModel):
"""
Pydantic model representing a HumanInput.
This model corresponds to the HumanInput Peewee ORM model and provides
validation and serialization capabilities.
Attributes:
id: Unique identifier for the human input
created_at: When the record was created
updated_at: When the record was last updated
content: The text content of the input
source: The source of the input ('cli', 'chat', or 'hil')
session_id: Optional reference to the associated session
"""
id: Optional[int] = None
created_at: datetime.datetime
updated_at: datetime.datetime
content: str
source: str
session_id: Optional[int] = None
# Configure the model to work with ORM objects
model_config = ConfigDict(from_attributes=True)
class KeyFactModel(BaseModel):
"""
Pydantic model representing a KeyFact.
This model corresponds to the KeyFact Peewee ORM model and provides
validation and serialization capabilities.
Attributes:
id: Unique identifier for the key fact
created_at: When the record was created
updated_at: When the record was last updated
content: The text content of the key fact
human_input_id: Optional reference to the associated human input
session_id: Optional reference to the associated session
"""
id: Optional[int] = None
created_at: datetime.datetime
updated_at: datetime.datetime
content: str
human_input_id: Optional[int] = None
session_id: Optional[int] = None
# Configure the model to work with ORM objects
model_config = ConfigDict(from_attributes=True)
class KeySnippetModel(BaseModel):
"""
Pydantic model representing a KeySnippet.
This model corresponds to the KeySnippet Peewee ORM model and provides
validation and serialization capabilities.
Attributes:
id: Unique identifier for the key snippet
created_at: When the record was created
updated_at: When the record was last updated
filepath: Path to the source file
line_number: Line number where the snippet starts
snippet: The source code snippet text
description: Optional description of the significance
human_input_id: Optional reference to the associated human input
session_id: Optional reference to the associated session
"""
id: Optional[int] = None
created_at: datetime.datetime
updated_at: datetime.datetime
filepath: str
line_number: int
snippet: str
description: Optional[str] = None
human_input_id: Optional[int] = None
session_id: Optional[int] = None
# Configure the model to work with ORM objects
model_config = ConfigDict(from_attributes=True)
class ResearchNoteModel(BaseModel):
"""
Pydantic model representing a ResearchNote.
This model corresponds to the ResearchNote Peewee ORM model and provides
validation and serialization capabilities.
Attributes:
id: Unique identifier for the research note
created_at: When the record was created
updated_at: When the record was last updated
content: The text content of the research note
human_input_id: Optional reference to the associated human input
session_id: Optional reference to the associated session
"""
id: Optional[int] = None
created_at: datetime.datetime
updated_at: datetime.datetime
content: str
human_input_id: Optional[int] = None
session_id: Optional[int] = None
# Configure the model to work with ORM objects
model_config = ConfigDict(from_attributes=True)
class TrajectoryModel(BaseModel):
"""
Pydantic model representing a Trajectory.
This model corresponds to the Trajectory Peewee ORM model and provides
validation and serialization capabilities. It handles the conversion
between JSON-encoded strings and Python dictionaries for the tool_parameters,
tool_result, and step_data fields.
Attributes:
id: Unique identifier for the trajectory
created_at: When the record was created
updated_at: When the record was last updated
human_input_id: Optional reference to the associated human input
tool_name: Name of the tool that was executed
tool_parameters: Dictionary containing the parameters passed to the tool
tool_result: Dictionary containing the result returned by the tool
step_data: Dictionary containing UI rendering data
record_type: Type of trajectory record
cost: Optional cost of the tool execution
tokens: Optional token usage of the tool execution
is_error: Flag indicating if this record represents an error
error_message: The error message if is_error is True
error_type: The type/class of the error if is_error is True
error_details: Additional error details if is_error is True
session_id: Optional reference to the associated session
"""
id: Optional[int] = None
created_at: datetime.datetime
updated_at: datetime.datetime
human_input_id: Optional[int] = None
tool_name: Optional[str] = None
tool_parameters: Optional[Dict[str, Any]] = None
tool_result: Optional[Any] = None
step_data: Optional[Dict[str, Any]] = None
record_type: Optional[str] = None
cost: Optional[float] = None
tokens: Optional[int] = None
is_error: bool = False
error_message: Optional[str] = None
error_type: Optional[str] = None
error_details: Optional[str] = None
session_id: Optional[int] = None
# Configure the model to work with ORM objects
model_config = ConfigDict(from_attributes=True)
@field_validator("tool_parameters", mode="before")
@classmethod
def parse_tool_parameters(cls, value: Any) -> Optional[Dict[str, Any]]:
"""
Parse the tool_parameters field from a JSON string to a dictionary.
Args:
value: The value to parse, can be a string, dict, or None
Returns:
Optional[Dict[str, Any]]: The parsed dictionary or None
Raises:
ValueError: If the JSON string is invalid
"""
if value is None:
return None
if isinstance(value, dict):
return value
if isinstance(value, str):
try:
return json.loads(value)
except json.JSONDecodeError as e:
raise ValueError(f"Invalid JSON in tool_parameters: {e}")
raise ValueError(f"Unexpected type for tool_parameters: {type(value)}")
@field_validator("tool_result", mode="before")
@classmethod
def parse_tool_result(cls, value: Any) -> Optional[Any]:
"""
Parse the tool_result field from a JSON string to a Python object.
Args:
value: The value to parse, can be a string, dict, list, or None
Returns:
Optional[Any]: The parsed object or None
Raises:
ValueError: If the JSON string is invalid
"""
if value is None:
return None
if not isinstance(value, str):
return value
try:
return json.loads(value)
except json.JSONDecodeError as e:
raise ValueError(f"Invalid JSON in tool_result: {e}")
@field_validator("step_data", mode="before")
@classmethod
def parse_step_data(cls, value: Any) -> Optional[Dict[str, Any]]:
"""
Parse the step_data field from a JSON string to a dictionary.
Args:
value: The value to parse, can be a string, dict, or None
Returns:
Optional[Dict[str, Any]]: The parsed dictionary or None
Raises:
ValueError: If the JSON string is invalid
"""
if value is None:
return None
if isinstance(value, dict):
return value
if isinstance(value, str):
try:
return json.loads(value)
except json.JSONDecodeError as e:
raise ValueError(f"Invalid JSON in step_data: {e}")
raise ValueError(f"Unexpected type for step_data: {type(value)}")
@field_serializer("tool_parameters")
def serialize_tool_parameters(self, tool_parameters: Optional[Dict[str, Any]]) -> Optional[str]:
"""
Serialize the tool_parameters dictionary to a JSON string for storage.
Args:
tool_parameters: Dictionary to serialize
Returns:
Optional[str]: JSON-encoded string or None
"""
if tool_parameters is None:
return None
return json.dumps(tool_parameters)
@field_serializer("tool_result")
def serialize_tool_result(self, tool_result: Optional[Any]) -> Optional[str]:
"""
Serialize the tool_result object to a JSON string for storage.
Args:
tool_result: Object to serialize
Returns:
Optional[str]: JSON-encoded string or None
"""
if tool_result is None:
return None
return json.dumps(tool_result)
@field_serializer("step_data")
def serialize_step_data(self, step_data: Optional[Dict[str, Any]]) -> Optional[str]:
"""
Serialize the step_data dictionary to a JSON string for storage.
Args:
step_data: Dictionary to serialize
Returns:
Optional[str]: JSON-encoded string or None
"""
if step_data is None:
return None
return json.dumps(step_data)
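All the validator/serializer pairs in this file follow one pattern: accept dict-or-JSON-string on the way in, emit a JSON string on the way out. A dependency-free sketch of that round trip, with plain functions standing in for the pydantic hooks:

```python
import json
from typing import Any, Dict, Optional

def parse_json_field(value: Any, field: str) -> Optional[Dict[str, Any]]:
    """Mirror of the mode='before' validators: None and dict pass through,
    strings are decoded, anything else is rejected."""
    if value is None:
        return None
    if isinstance(value, dict):
        return value
    if isinstance(value, str):
        try:
            return json.loads(value)
        except json.JSONDecodeError as e:
            raise ValueError(f"Invalid JSON in {field}: {e}")
    raise ValueError(f"Unexpected type for {field}: {type(value)}")

def serialize_json_field(value: Optional[Dict[str, Any]]) -> Optional[str]:
    """Mirror of the field serializers: dict -> JSON string, None -> None."""
    return None if value is None else json.dumps(value)

stored = '{"os": "linux", "cores": 8}'            # as read from the TextField
info = parse_json_field(stored, "machine_info")   # -> {'os': 'linux', 'cores': 8}
assert serialize_json_field(info) == json.dumps(info)  # round-trips for storage
```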

View File

@@ -32,6 +32,7 @@ class ConfigRepository:
FALLBACK_TOOL_MODEL_LIMIT,
RETRY_FALLBACK_COUNT,
DEFAULT_TEST_CMD_TIMEOUT,
DEFAULT_SHOW_COST,
VALID_PROVIDERS,
)
@@ -42,6 +43,7 @@ class ConfigRepository:
"fallback_tool_model_limit": FALLBACK_TOOL_MODEL_LIMIT,
"retry_fallback_count": RETRY_FALLBACK_COUNT,
"test_cmd_timeout": DEFAULT_TEST_CMD_TIMEOUT,
"show_cost": DEFAULT_SHOW_COST,
"valid_providers": VALID_PROVIDERS,
}

View File

@@ -11,6 +11,7 @@ import contextvars
import peewee
from ra_aid.database.models import HumanInput
from ra_aid.database.pydantic_models import HumanInputModel
from ra_aid.logging_config import get_logger
logger = get_logger(__name__)
@@ -118,8 +119,23 @@ class HumanInputRepository:
if db is None:
raise ValueError("Database connection is required for HumanInputRepository")
self.db = db
def _to_model(self, human_input: Optional[HumanInput]) -> Optional[HumanInputModel]:
"""
Convert a Peewee HumanInput object to a Pydantic HumanInputModel.
Args:
human_input: Peewee HumanInput instance or None
Returns:
Optional[HumanInputModel]: Pydantic model representation or None if human_input is None
"""
if human_input is None:
return None
return HumanInputModel.model_validate(human_input, from_attributes=True)
def create(self, content: str, source: str) -> HumanInput:
def create(self, content: str, source: str) -> HumanInputModel:
"""
Create a new human input record in the database.
@@ -128,7 +144,7 @@ class HumanInputRepository:
source: The source of the input (e.g., "cli", "chat", "hil")
Returns:
HumanInput: The newly created human input instance
HumanInputModel: The newly created human input instance
Raises:
peewee.DatabaseError: If there's an error creating the record
@@ -136,12 +152,12 @@
try:
input_record = HumanInput.create(content=content, source=source)
logger.debug(f"Created human input ID {input_record.id} from {source}")
return input_record
return self._to_model(input_record)
except peewee.DatabaseError as e:
logger.error(f"Failed to create human input record: {str(e)}")
raise
def get(self, input_id: int) -> Optional[HumanInput]:
def get(self, input_id: int) -> Optional[HumanInputModel]:
"""
Retrieve a human input record by its ID.
@@ -149,18 +165,19 @@
input_id: The ID of the human input to retrieve
Returns:
Optional[HumanInput]: The human input instance if found, None otherwise
Optional[HumanInputModel]: The human input instance if found, None otherwise
Raises:
peewee.DatabaseError: If there's an error accessing the database
"""
try:
return HumanInput.get_or_none(HumanInput.id == input_id)
human_input = HumanInput.get_or_none(HumanInput.id == input_id)
return self._to_model(human_input)
except peewee.DatabaseError as e:
logger.error(f"Failed to fetch human input {input_id}: {str(e)}")
raise
def update(self, input_id: int, content: str = None, source: str = None) -> Optional[HumanInput]:
def update(self, input_id: int, content: str = None, source: str = None) -> Optional[HumanInputModel]:
"""
Update an existing human input record.
@@ -170,14 +187,14 @@
source: The new source for the human input
Returns:
Optional[HumanInput]: The updated human input if found, None otherwise
Optional[HumanInputModel]: The updated human input if found, None otherwise
Raises:
peewee.DatabaseError: If there's an error updating the record
"""
try:
# First check if the record exists
input_record = self.get(input_id)
# We need to get the raw Peewee object for updating
input_record = HumanInput.get_or_none(HumanInput.id == input_id)
if not input_record:
logger.warning(f"Attempted to update non-existent human input {input_id}")
return None
@@ -190,7 +207,7 @@
input_record.save()
logger.debug(f"Updated human input ID {input_id}")
return input_record
return self._to_model(input_record)
except peewee.DatabaseError as e:
logger.error(f"Failed to update human input {input_id}: {str(e)}")
raise
@@ -223,23 +240,24 @@
logger.error(f"Failed to delete human input {input_id}: {str(e)}")
raise
def get_all(self) -> List[HumanInput]:
def get_all(self) -> List[HumanInputModel]:
"""
Retrieve all human input records from the database.
Returns:
List[HumanInput]: List of all human input instances
List[HumanInputModel]: List of all human input instances
Raises:
peewee.DatabaseError: If there's an error accessing the database
"""
try:
return list(HumanInput.select().order_by(HumanInput.created_at.desc()))
human_inputs = list(HumanInput.select().order_by(HumanInput.created_at.desc()))
return [self._to_model(input) for input in human_inputs]
except peewee.DatabaseError as e:
logger.error(f"Failed to fetch all human inputs: {str(e)}")
raise
def get_recent(self, limit: int = 10) -> List[HumanInput]:
def get_recent(self, limit: int = 10) -> List[HumanInputModel]:
"""
Retrieve the most recent human input records.
@@ -247,13 +265,14 @@
limit: Maximum number of records to retrieve (default: 10)
Returns:
List[HumanInput]: List of the most recent human input records
List[HumanInputModel]: List of the most recent human input records
Raises:
peewee.DatabaseError: If there's an error accessing the database
"""
try:
return list(HumanInput.select().order_by(HumanInput.created_at.desc()).limit(limit))
human_inputs = list(HumanInput.select().order_by(HumanInput.created_at.desc()).limit(limit))
return [self._to_model(input) for input in human_inputs]
except peewee.DatabaseError as e:
logger.error(f"Failed to fetch recent human inputs: {str(e)}")
raise
@@ -277,7 +296,7 @@
logger.error(f"Failed to fetch most recent human input ID: {str(e)}")
raise
def get_by_source(self, source: str) -> List[HumanInput]:
def get_by_source(self, source: str) -> List[HumanInputModel]:
"""
Retrieve human input records by source.
@@ -285,13 +304,14 @@
source: The source to filter by (e.g., "cli", "chat", "hil")
Returns:
List[HumanInput]: List of human input records from the specified source
List[HumanInputModel]: List of human input records from the specified source
Raises:
peewee.DatabaseError: If there's an error accessing the database
"""
try:
return list(HumanInput.select().where(HumanInput.source == source).order_by(HumanInput.created_at.desc()))
human_inputs = list(HumanInput.select().where(HumanInput.source == source).order_by(HumanInput.created_at.desc()))
return [self._to_model(input) for input in human_inputs]
except peewee.DatabaseError as e:
logger.error(f"Failed to fetch human inputs by source {source}: {str(e)}")
raise

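The `_to_model` conversion threaded through the repositories above reduces to one idea: fetch ORM rows, map each through a None-safe converter, and hand callers detached model objects. A dependency-free sketch with dataclasses standing in for both layers (all names here are illustrative, not from the codebase):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OrmRow:            # stands in for a peewee row such as HumanInput
    id: int
    content: str
    source: str

@dataclass
class DomainModel:       # stands in for the pydantic model such as HumanInputModel
    id: int
    content: str
    source: str

def to_model(row: Optional[OrmRow]) -> Optional[DomainModel]:
    # None-safe, like the repositories' _to_model helpers
    if row is None:
        return None
    return DomainModel(id=row.id, content=row.content, source=row.source)

class Repository:
    def __init__(self, rows: List[OrmRow]):
        self._rows = rows  # stands in for the database table

    def get(self, row_id: int) -> Optional[DomainModel]:
        row = next((r for r in self._rows if r.id == row_id), None)
        return to_model(row)

    def get_all(self) -> List[DomainModel]:
        return [to_model(r) for r in self._rows]

repo = Repository([OrmRow(1, "fix tests", "cli")])
print(repo.get(1))   # DomainModel(id=1, content='fix tests', source='cli')
print(repo.get(99))  # None
```

Note that update/delete paths must still fetch the raw ORM row (as the diffs above do with `get_or_none`), since the detached model cannot be saved back.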
View File

@@ -12,6 +12,7 @@ from contextlib import contextmanager
import peewee
from ra_aid.database.models import KeyFact
from ra_aid.database.pydantic_models import KeyFactModel
from ra_aid.logging_config import get_logger
logger = get_logger(__name__)
@@ -120,7 +121,22 @@ class KeyFactRepository:
raise ValueError("Database connection is required for KeyFactRepository")
self.db = db
def create(self, content: str, human_input_id: Optional[int] = None) -> KeyFact:
def _to_model(self, fact: Optional[KeyFact]) -> Optional[KeyFactModel]:
"""
Convert a Peewee KeyFact object to a Pydantic KeyFactModel.
Args:
fact: Peewee KeyFact instance or None
Returns:
Optional[KeyFactModel]: Pydantic model representation or None if fact is None
"""
if fact is None:
return None
return KeyFactModel.model_validate(fact, from_attributes=True)
def create(self, content: str, human_input_id: Optional[int] = None) -> KeyFactModel:
"""
Create a new key fact in the database.
@@ -129,7 +145,7 @@
human_input_id: Optional ID of the associated human input
Returns:
KeyFact: The newly created key fact instance
KeyFactModel: The newly created key fact instance
Raises:
peewee.DatabaseError: If there's an error creating the fact
@@ -137,12 +153,12 @@
try:
fact = KeyFact.create(content=content, human_input_id=human_input_id)
logger.debug(f"Created key fact ID {fact.id}: {content}")
return fact
return self._to_model(fact)
except peewee.DatabaseError as e:
logger.error(f"Failed to create key fact: {str(e)}")
raise
def get(self, fact_id: int) -> Optional[KeyFact]:
def get(self, fact_id: int) -> Optional[KeyFactModel]:
"""
Retrieve a key fact by its ID.
@@ -150,18 +166,19 @@
fact_id: The ID of the key fact to retrieve
Returns:
Optional[KeyFact]: The key fact instance if found, None otherwise
Optional[KeyFactModel]: The key fact instance if found, None otherwise
Raises:
peewee.DatabaseError: If there's an error accessing the database
"""
try:
return KeyFact.get_or_none(KeyFact.id == fact_id)
fact = KeyFact.get_or_none(KeyFact.id == fact_id)
return self._to_model(fact)
except peewee.DatabaseError as e:
logger.error(f"Failed to fetch key fact {fact_id}: {str(e)}")
raise
def update(self, fact_id: int, content: str) -> Optional[KeyFact]:
def update(self, fact_id: int, content: str) -> Optional[KeyFactModel]:
"""
Update an existing key fact.
@@ -170,14 +187,14 @@
content: The new content for the key fact
Returns:
Optional[KeyFact]: The updated key fact if found, None otherwise
Optional[KeyFactModel]: The updated key fact if found, None otherwise
Raises:
peewee.DatabaseError: If there's an error updating the fact
"""
try:
# First check if the fact exists
fact = self.get(fact_id)
fact = KeyFact.get_or_none(KeyFact.id == fact_id)
if not fact:
logger.warning(f"Attempted to update non-existent key fact {fact_id}")
return None
@@ -186,7 +203,7 @@
fact.content = content
fact.save()
logger.debug(f"Updated key fact ID {fact_id}: {content}")
return fact
return self._to_model(fact)
except peewee.DatabaseError as e:
logger.error(f"Failed to update key fact {fact_id}: {str(e)}")
raise
@@ -206,7 +223,7 @@
"""
try:
# First check if the fact exists
fact = self.get(fact_id)
fact = KeyFact.get_or_none(KeyFact.id == fact_id)
if not fact:
logger.warning(f"Attempted to delete non-existent key fact {fact_id}")
return False
@@ -219,18 +236,19 @@
logger.error(f"Failed to delete key fact {fact_id}: {str(e)}")
raise
def get_all(self) -> List[KeyFact]:
def get_all(self) -> List[KeyFactModel]:
"""
Retrieve all key facts from the database.
Returns:
List[KeyFact]: List of all key fact instances
List[KeyFactModel]: List of all key fact instances
Raises:
peewee.DatabaseError: If there's an error accessing the database
"""
try:
return list(KeyFact.select().order_by(KeyFact.id))
facts = list(KeyFact.select().order_by(KeyFact.id))
return [self._to_model(fact) for fact in facts]
except peewee.DatabaseError as e:
logger.error(f"Failed to fetch all key facts: {str(e)}")
raise

View File

@@ -11,6 +11,7 @@ import contextvars
import peewee
from ra_aid.database.models import KeySnippet
from ra_aid.database.pydantic_models import KeySnippetModel
from ra_aid.logging_config import get_logger
logger = get_logger(__name__)
@@ -129,10 +130,25 @@
raise ValueError("Database connection is required for KeySnippetRepository")
self.db = db
def _to_model(self, snippet: Optional[KeySnippet]) -> Optional[KeySnippetModel]:
"""
Convert a Peewee KeySnippet object to a Pydantic KeySnippetModel.
Args:
snippet: Peewee KeySnippet instance or None
Returns:
Optional[KeySnippetModel]: Pydantic model representation or None if snippet is None
"""
if snippet is None:
return None
return KeySnippetModel.model_validate(snippet, from_attributes=True)
def create(
self, filepath: str, line_number: int, snippet: str, description: Optional[str] = None,
human_input_id: Optional[int] = None
) -> KeySnippet:
) -> KeySnippetModel:
"""
Create a new key snippet in the database.
@@ -144,7 +160,7 @@
human_input_id: Optional ID of the associated human input
Returns:
KeySnippet: The newly created key snippet instance
KeySnippetModel: The newly created key snippet instance
Raises:
peewee.DatabaseError: If there's an error creating the snippet
@@ -158,12 +174,12 @@
human_input_id=human_input_id
)
logger.debug(f"Created key snippet ID {key_snippet.id}: {filepath}:{line_number}")
return key_snippet
return self._to_model(key_snippet)
except peewee.DatabaseError as e:
logger.error(f"Failed to create key snippet: {str(e)}")
raise
def get(self, snippet_id: int) -> Optional[KeySnippet]:
def get(self, snippet_id: int) -> Optional[KeySnippetModel]:
"""
Retrieve a key snippet by its ID.
@@ -171,13 +187,14 @@
snippet_id: The ID of the key snippet to retrieve
Returns:
Optional[KeySnippet]: The key snippet instance if found, None otherwise
Optional[KeySnippetModel]: The key snippet instance if found, None otherwise
Raises:
peewee.DatabaseError: If there's an error accessing the database
"""
try:
return KeySnippet.get_or_none(KeySnippet.id == snippet_id)
snippet = KeySnippet.get_or_none(KeySnippet.id == snippet_id)
return self._to_model(snippet)
except peewee.DatabaseError as e:
logger.error(f"Failed to fetch key snippet {snippet_id}: {str(e)}")
raise
@@ -189,7 +206,7 @@
line_number: int,
snippet: str,
description: Optional[str] = None
) -> Optional[KeySnippet]:
) -> Optional[KeySnippetModel]:
"""
Update an existing key snippet.
@@ -201,14 +218,14 @@
description: Optional description of the significance
Returns:
Optional[KeySnippet]: The updated key snippet if found, None otherwise
Optional[KeySnippetModel]: The updated key snippet if found, None otherwise
Raises:
peewee.DatabaseError: If there's an error updating the snippet
"""
try:
# First check if the snippet exists
key_snippet = self.get(snippet_id)
key_snippet = KeySnippet.get_or_none(KeySnippet.id == snippet_id)
if not key_snippet:
logger.warning(f"Attempted to update non-existent key snippet {snippet_id}")
return None
@@ -220,7 +237,7 @@
key_snippet.description = description
key_snippet.save()
logger.debug(f"Updated key snippet ID {snippet_id}: {filepath}:{line_number}")
return key_snippet
return self._to_model(key_snippet)
except peewee.DatabaseError as e:
logger.error(f"Failed to update key snippet {snippet_id}: {str(e)}")
raise
@@ -240,7 +257,7 @@
"""
try:
# First check if the snippet exists
key_snippet = self.get(snippet_id)
key_snippet = KeySnippet.get_or_none(KeySnippet.id == snippet_id)
if not key_snippet:
logger.warning(f"Attempted to delete non-existent key snippet {snippet_id}")
return False
@@ -253,18 +270,19 @@
logger.error(f"Failed to delete key snippet {snippet_id}: {str(e)}")
raise
def get_all(self) -> List[KeySnippet]:
def get_all(self) -> List[KeySnippetModel]:
"""
Retrieve all key snippets from the database.
Returns:
List[KeySnippet]: List of all key snippet instances
List[KeySnippetModel]: List of all key snippet instances
Raises:
peewee.DatabaseError: If there's an error accessing the database
"""
try:
return list(KeySnippet.select().order_by(KeySnippet.id))
snippets = list(KeySnippet.select().order_by(KeySnippet.id))
return [self._to_model(snippet) for snippet in snippets]
except peewee.DatabaseError as e:
logger.error(f"Failed to fetch all key snippets: {str(e)}")
raise

View File

@@ -12,6 +12,7 @@ from contextlib import contextmanager
import peewee
from ra_aid.database.models import ResearchNote
from ra_aid.database.pydantic_models import ResearchNoteModel
from ra_aid.logging_config import get_logger
logger = get_logger(__name__)
@@ -120,7 +121,22 @@ class ResearchNoteRepository:
raise ValueError("Database connection is required for ResearchNoteRepository")
self.db = db
def create(self, content: str, human_input_id: Optional[int] = None) -> ResearchNote:
def _to_model(self, note: Optional[ResearchNote]) -> Optional[ResearchNoteModel]:
"""
Convert a Peewee ResearchNote object to a Pydantic ResearchNoteModel.
Args:
note: Peewee ResearchNote instance or None
Returns:
Optional[ResearchNoteModel]: Pydantic model representation or None if note is None
"""
if note is None:
return None
return ResearchNoteModel.model_validate(note, from_attributes=True)
def create(self, content: str, human_input_id: Optional[int] = None) -> ResearchNoteModel:
"""
Create a new research note in the database.
@@ -129,7 +145,7 @@
human_input_id: Optional ID of the associated human input
Returns:
ResearchNote: The newly created research note instance
ResearchNoteModel: The newly created research note instance
Raises:
peewee.DatabaseError: If there's an error creating the note
@@ -137,12 +153,12 @@
try:
note = ResearchNote.create(content=content, human_input_id=human_input_id)
logger.debug(f"Created research note ID {note.id}: {content[:50]}...")
return note
return self._to_model(note)
except peewee.DatabaseError as e:
logger.error(f"Failed to create research note: {str(e)}")
raise
def get(self, note_id: int) -> Optional[ResearchNote]:
def get(self, note_id: int) -> Optional[ResearchNoteModel]:
"""
Retrieve a research note by its ID.
@@ -150,18 +166,19 @@
note_id: The ID of the research note to retrieve
Returns:
Optional[ResearchNote]: The research note instance if found, None otherwise
Optional[ResearchNoteModel]: The research note instance if found, None otherwise
Raises:
peewee.DatabaseError: If there's an error accessing the database
"""
try:
return ResearchNote.get_or_none(ResearchNote.id == note_id)
note = ResearchNote.get_or_none(ResearchNote.id == note_id)
return self._to_model(note)
except peewee.DatabaseError as e:
logger.error(f"Failed to fetch research note {note_id}: {str(e)}")
raise
def update(self, note_id: int, content: str) -> Optional[ResearchNote]:
def update(self, note_id: int, content: str) -> Optional[ResearchNoteModel]:
"""
Update an existing research note.
@@ -170,14 +187,14 @@
content: The new content for the research note
Returns:
Optional[ResearchNote]: The updated research note if found, None otherwise
Optional[ResearchNoteModel]: The updated research note if found, None otherwise
Raises:
peewee.DatabaseError: If there's an error updating the note
"""
try:
# First check if the note exists
note = self.get(note_id)
note = ResearchNote.get_or_none(ResearchNote.id == note_id)
if not note:
logger.warning(f"Attempted to update non-existent research note {note_id}")
return None
@@ -186,7 +203,7 @@ class ResearchNoteRepository:
note.content = content
note.save()
logger.debug(f"Updated research note ID {note_id}: {content[:50]}...")
return note
return self._to_model(note)
except peewee.DatabaseError as e:
logger.error(f"Failed to update research note {note_id}: {str(e)}")
raise
@@ -206,7 +223,7 @@ class ResearchNoteRepository:
"""
try:
# First check if the note exists
note = self.get(note_id)
note = ResearchNote.get_or_none(ResearchNote.id == note_id)
if not note:
logger.warning(f"Attempted to delete non-existent research note {note_id}")
return False
@@ -219,18 +236,19 @@ class ResearchNoteRepository:
logger.error(f"Failed to delete research note {note_id}: {str(e)}")
raise
def get_all(self) -> List[ResearchNote]:
def get_all(self) -> List[ResearchNoteModel]:
"""
Retrieve all research notes from the database.
Returns:
List[ResearchNote]: List of all research note instances
List[ResearchNoteModel]: List of all research note instances
Raises:
peewee.DatabaseError: If there's an error accessing the database
"""
try:
return list(ResearchNote.select().order_by(ResearchNote.id))
notes = list(ResearchNote.select().order_by(ResearchNote.id))
return [self._to_model(note) for note in notes]
except peewee.DatabaseError as e:
logger.error(f"Failed to fetch all research notes: {str(e)}")
raise
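The refactor above routes every repository return through `_to_model`, so callers receive detached Pydantic models rather than live Peewee rows. A minimal sketch of that conversion boundary, assuming Pydantic v2 (`NoteModel` and `FakeNote` are illustrative stand-ins, not the project's classes):

```python
from typing import Optional
from pydantic import BaseModel

class NoteModel(BaseModel):
    # Illustrative fields; the real ResearchNoteModel lives in pydantic_models.py
    id: int
    content: str

class FakeNote:
    """Stand-in for a Peewee row: plain attributes, no Pydantic awareness."""
    def __init__(self, id: int, content: str):
        self.id = id
        self.content = content

def to_model(note: Optional[FakeNote]) -> Optional[NoteModel]:
    # from_attributes=True lets Pydantic read ORM-style attribute access
    if note is None:
        return None
    return NoteModel.model_validate(note, from_attributes=True)

print(to_model(FakeNote(1, "alpha")))  # id=1 content='alpha'
print(to_model(None))                  # None
```

Returning Pydantic models at the repository boundary keeps ORM objects (and their open-connection semantics) from leaking into callers.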


@@ -0,0 +1,276 @@
"""
Session repository implementation for database access.
This module provides a repository implementation for the Session model,
following the repository pattern for data access abstraction. It handles
operations for storing and retrieving application session information.
"""
from typing import Dict, List, Optional, Any
import contextvars
import datetime
import json
import logging
import sys
import peewee
from ra_aid.database.models import Session
from ra_aid.database.pydantic_models import SessionModel
from ra_aid.__version__ import __version__
from ra_aid.logging_config import get_logger
logger = get_logger(__name__)
# Create contextvar to hold the SessionRepository instance
session_repo_var = contextvars.ContextVar("session_repo", default=None)
class SessionRepositoryManager:
"""
Context manager for SessionRepository.
This class provides a context manager interface for SessionRepository,
using the contextvars approach for thread safety.
Example:
with DatabaseManager() as db:
with SessionRepositoryManager(db) as repo:
# Use the repository
session = repo.create_session()
current_session = repo.get_current_session()
"""
def __init__(self, db):
"""
Initialize the SessionRepositoryManager.
Args:
db: Database connection to use (required)
"""
self.db = db
def __enter__(self) -> 'SessionRepository':
"""
Initialize the SessionRepository and return it.
Returns:
SessionRepository: The initialized repository
"""
repo = SessionRepository(self.db)
session_repo_var.set(repo)
return repo
def __exit__(
self,
exc_type: Optional[type],
exc_val: Optional[Exception],
exc_tb: Optional[object],
) -> None:
"""
Reset the repository when exiting the context.
Args:
exc_type: The exception type if an exception was raised
exc_val: The exception value if an exception was raised
exc_tb: The traceback if an exception was raised
"""
# Reset the contextvar to None
session_repo_var.set(None)
# Don't suppress exceptions
return False
def get_session_repository() -> 'SessionRepository':
"""
Get the current SessionRepository instance.
Returns:
SessionRepository: The current repository instance
Raises:
RuntimeError: If no repository has been initialized with SessionRepositoryManager
"""
repo = session_repo_var.get()
if repo is None:
raise RuntimeError(
"No SessionRepository available. "
"Make sure to initialize one with SessionRepositoryManager first."
)
return repo
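The `SessionRepositoryManager` / `get_session_repository` pair above is a small contextvar-based service-locator pattern. A self-contained sketch of the same mechanics, with illustrative `Repo`/`RepoManager` names in place of the session classes:

```python
import contextvars

# Module-level contextvar, mirroring session_repo_var in the new file
repo_var = contextvars.ContextVar("repo", default=None)

class Repo:
    def __init__(self, db):
        self.db = db

class RepoManager:
    """Context manager that publishes a Repo via the contextvar."""
    def __init__(self, db):
        self.db = db
    def __enter__(self) -> Repo:
        repo = Repo(self.db)
        repo_var.set(repo)
        return repo
    def __exit__(self, exc_type, exc_val, exc_tb):
        repo_var.set(None)  # reset so a stale repo can't leak across contexts
        return False        # don't suppress exceptions

def get_repo() -> Repo:
    repo = repo_var.get()
    if repo is None:
        raise RuntimeError("No Repo available; use RepoManager first.")
    return repo

with RepoManager(db="fake-db") as r:
    assert get_repo() is r  # visible anywhere inside the context
try:
    get_repo()              # outside, the contextvar has been reset
except RuntimeError:
    print("reset after exit")
```

Using `contextvars` rather than a plain module global keeps each async task or thread context isolated, which is why the class docstring can claim thread safety.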
class SessionRepository:
"""
Repository for handling Session records in the database.
This class provides methods for creating, retrieving, and managing Session records.
It abstracts away the database operations and provides a clean interface for working
with Session entities.
"""
def __init__(self, db):
"""
Initialize the SessionRepository.
Args:
db: Database connection to use (required)
"""
if db is None:
raise ValueError("Database connection is required for SessionRepository")
self.db = db
self.current_session = None
def _to_model(self, session: Optional[Session]) -> Optional[SessionModel]:
"""
Convert a Peewee Session object to a Pydantic SessionModel.
Args:
session: Peewee Session instance or None
Returns:
Optional[SessionModel]: Pydantic model representation or None if session is None
"""
if session is None:
return None
return SessionModel.model_validate(session, from_attributes=True)
def create_session(self, metadata: Optional[Dict[str, Any]] = None) -> SessionModel:
"""
Create a new session record in the database.
Args:
metadata: Optional dictionary of additional metadata to store with the session
Returns:
SessionModel: The newly created session instance
Raises:
peewee.DatabaseError: If there's an error creating the record
"""
try:
# Get command line arguments
command_line = " ".join(sys.argv)
# Get program version
program_version = __version__
# JSON encode metadata if provided
machine_info = json.dumps(metadata) if metadata is not None else None
session = Session.create(
start_time=datetime.datetime.now(),
command_line=command_line,
program_version=program_version,
machine_info=machine_info
)
# Store the current session
self.current_session = session
logger.debug(f"Created new session with ID {session.id}")
return self._to_model(session)
except peewee.DatabaseError as e:
logger.error(f"Failed to create session record: {str(e)}")
raise
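`create_session` captures the process context (argv, version) and JSON-encodes any metadata before insert. A rough sketch of just that field assembly, with `build_session_fields` and the `version` default as hypothetical stand-ins for the `Session.create(...)` call and `ra_aid.__version__`:

```python
import datetime
import json
import sys

def build_session_fields(metadata=None, version="0.0.0"):
    # Mirrors the values create_session passes to Session.create()
    return {
        "start_time": datetime.datetime.now(),
        "command_line": " ".join(sys.argv),
        "program_version": version,
        "machine_info": json.dumps(metadata) if metadata is not None else None,
    }

fields = build_session_fields({"os": "linux"})
print(fields["machine_info"])  # {"os": "linux"}
```

Storing `machine_info` as a JSON string keeps the column schema simple while still allowing arbitrary metadata shapes.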
def get_current_session(self) -> Optional[SessionModel]:
"""
Get the current active session.
If no session has been created in this repository instance,
retrieves the most recent session from the database.
Returns:
Optional[SessionModel]: The current session or None if no sessions exist
"""
if self.current_session is not None:
return self._to_model(self.current_session)
try:
# Find the most recent session
session = Session.select().order_by(Session.created_at.desc()).first()
if session:
self.current_session = session
return self._to_model(session)
except peewee.DatabaseError as e:
logger.error(f"Failed to get current session: {str(e)}")
return None
def get_current_session_id(self) -> Optional[int]:
"""
Get the ID of the current active session.
Returns:
Optional[int]: The ID of the current session or None if no session exists
"""
session = self.get_current_session()
return session.id if session else None
def get(self, session_id: int) -> Optional[SessionModel]:
"""
Get a session by its ID.
Args:
session_id: The ID of the session to retrieve
Returns:
Optional[SessionModel]: The session with the given ID or None if not found
"""
try:
session = Session.get_or_none(Session.id == session_id)
return self._to_model(session)
except peewee.DatabaseError as e:
logger.error(f"Database error getting session {session_id}: {str(e)}")
return None
def get_all(self, offset: int = 0, limit: int = 10) -> tuple[List[SessionModel], int]:
"""
Get all sessions from the database with pagination support.
Args:
offset: Number of sessions to skip (default: 0)
limit: Maximum number of sessions to return (default: 10)
Returns:
tuple: (List[SessionModel], int) containing the list of sessions and the total count
"""
try:
# Get total count for pagination info
total_count = Session.select().count()
# Get paginated sessions ordered by created_at in descending order (newest first)
sessions = list(
Session.select()
.order_by(Session.created_at.desc())
.offset(offset)
.limit(limit)
)
return [self._to_model(session) for session in sessions], total_count
except peewee.DatabaseError as e:
logger.error(f"Failed to get all sessions with pagination: {str(e)}")
return [], 0
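The `get_all` query above combines newest-first ordering, offset/limit pagination, and a separate total count. A pure-Python equivalent of that shape (list-of-dicts rows and the `created_at` key are illustrative, not the Peewee query itself):

```python
def get_all(sessions, offset=0, limit=10):
    """Paginate newest-first and return (page, total), like the query above."""
    total = len(sessions)  # total count computed before slicing, for pagination info
    ordered = sorted(sessions, key=lambda s: s["created_at"], reverse=True)
    return ordered[offset:offset + limit], total

sessions = [{"id": i, "created_at": i} for i in range(1, 26)]  # 25 fake rows
page, total = get_all(sessions, offset=10, limit=10)
print(total, page[0]["id"], len(page))  # 25 15 10
```

Returning the total alongside the page lets an API caller compute the number of pages without a second round trip.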
def get_recent(self, limit: int = 10) -> List[SessionModel]:
"""
Get the most recent sessions from the database.
Args:
limit: Maximum number of sessions to return (default: 10)
Returns:
List[SessionModel]: List of the most recent sessions
"""
try:
sessions = list(
Session.select()
.order_by(Session.created_at.desc())
.limit(limit)
)
return [self._to_model(session) for session in sessions]
except peewee.DatabaseError as e:
logger.error(f"Failed to get recent sessions: {str(e)}")
return []
