
Personal finance AI (v1) #2022

Merged
merged 66 commits into main from zachgoll/ai-improvements
Mar 28, 2025
Conversation


@zachgoll zachgoll commented Mar 24, 2025

This PR is a continuation of #1985 and finalizes the "V1" of the personal finance AI chat feature.

Domain overview

  • Chat - has many messages, has one "assistant"
    • Chat::Debuggable - defines "debug mode", where the chat will persist verbose debug messages to help better understand the path it took to get to its final response
  • Message - a message can be a "user message", "assistant message", or "developer message" (uses STI)
  • ToolCall - belongs to a Message and is a "subroutine" a message uses to augment its response. Tool calls are only relevant for assistant messages and are optional.
  • Assistant - owned by Chat, this represents a generic "LLM assistant" that can perform chat completions
    • Assistant::Provided is responsible for finding the correct provider for a given response (i.e. the user can select a model to use for each message)
  • Assistant::Functions - the assistant comes with a library of pre-defined "functions" that the LLM provider can call to augment chat responses. An Assistant::Functions::ConcreteFunction must provide a name, description, parameters schema, and call(params = {}) method. These are passed to and executed by the Provider.
  • Provider::ConcreteLLMProvider (e.g. Provider::OpenAI)
    • Assistant::Provideable defines the interface that all LLM providers must implement to provide completions for the assistant
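Under the function contract described above, a concrete function could be sketched as follows. The class name, function name, and return values here are hypothetical, invented purely to illustrate the shape of the interface (name, description, parameters schema, and `call(params = {})`); the real functions live under `Assistant::Functions`:

```ruby
# Hypothetical sketch of an Assistant::Functions concrete function, following
# the contract described above. The function name and return values are
# invented for illustration only.
class GetNetWorthFunction
  def name
    "get_net_worth"
  end

  def description
    "Returns the user's current net worth"
  end

  def params_schema
    {
      type: "object",
      properties: {}, # this example takes no parameters
      required: [],
      additionalProperties: false
    }
  end

  def call(params = {})
    # The real implementation would query the family's balance sheet; a
    # canned value keeps this sketch self-contained.
    { net_worth: "$100,000", currency: "USD" }
  end
end
```

Because each function exposes a JSON-schema-style `params_schema`, the provider can pass these definitions straight through as tool definitions in its completion request.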

Swappable LLM Providers

This first version of the chat implements a single Provider::OpenAI, which implements the Assistant::Provideable interface.

Provider responsibilities

Each "LLM Provider" is responsible for:

  • Providing chat completions
  • Executing tool calls given the provided functions
  • Handling streaming of responses
  • Building a generic Assistant::Provideable::ChatResponse for the Assistant to use

Concrete LLM implementations

To introduce a new LLM, simply implement the interface:

class Provider::Anthropic
  include Assistant::Provideable

  def chat_response(chat_history:, model: nil, instructions: nil, functions: [])
    provider_response do
      Assistant::Provideable::ChatResponse.new(
        # Anthropic implementation
      )
    end
  end
end

This way, Assistant can easily choose different models for each chat message:

module Assistant::Provided
  extend ActiveSupport::Concern

  def get_model_provider(ai_model)
    available_providers.find { |provider| provider.supports_model?(ai_model) }
  end

  private
    def available_providers
      [ Providers.openai ].compact
    end
end

class Assistant
  def respond_to(message)

    # Assistant can easily "swap out" providers based on the user's selection for each message
    provider = get_model_provider(message.ai_model)

    # response = provider.chat_response( ... )
  end
end
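The `get_model_provider` lookup above depends on each provider answering `supports_model?`. A minimal, Rails-free sketch of that check (the class name and model list are assumptions for the sake of the example, not the app's actual values):

```ruby
# Illustrative stand-in for a provider's supports_model? check, which
# Assistant::Provided#get_model_provider relies on. The model names here
# are assumptions, not the app's real list.
class StubOpenAIProvider
  SUPPORTED_MODELS = %w[gpt-4o gpt-4o-mini].freeze

  def supports_model?(ai_model)
    SUPPORTED_MODELS.include?(ai_model)
  end
end

# Mirrors the find-first-supporting-provider behavior shown above.
def find_model_provider(ai_model, providers)
  providers.find { |provider| provider.supports_model?(ai_model) }
end
```

When no provider supports the requested model, the lookup returns `nil`, so callers need to handle the "no provider available" case.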

Turbo frames, streams, and broadcasts

The AI chat feature is entirely contained within the global sidebar, and uses Turbo frames to load the various resource views for the Chat resource (i.e. new, show, index).

In the application.html.erb layout, the chat_view_path(@chat) helper determines which resource view is currently shown in the chat sidebar:

  • If @chat is set and is a persisted record, the sidebar loads the show path
  • If @chat is set and is a new record, the sidebar loads new
  • If @chat is nil, show index
def chat_view_path(chat)
  return new_chat_path if params[:chat_view] == "new"
  return chats_path if chat.nil? || params[:chat_view] == "all"

  chat.persisted? ? chat_path(chat) : new_chat_path
end
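The three routing rules above can be exercised with a Rails-free stand-in of the helper. Path helpers are replaced with literal strings and `chat` with a simple hash, so this mirrors, rather than reproduces, the app's code:

```ruby
# Rails-free sketch of the chat_view_path decision logic described above.
# Paths are hardcoded strings and `chat` is a plain hash, purely so the
# branching can be demonstrated outside the app.
def sketch_chat_view_path(chat, params = {})
  return "/chats/new" if params[:chat_view] == "new"
  return "/chats" if chat.nil? || params[:chat_view] == "all"

  chat[:persisted] ? "/chats/#{chat[:id]}" : "/chats/new"
end
```

Note the param overrides come first: `?chat_view=new` or `?chat_view=all` wins regardless of whether a current chat exists, which is what lets the sidebar's "new chat" and "view all" buttons work from any state.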
<% if Current.user.ai_enabled? %>
  <%= turbo_frame_tag chat_frame, src: chat_view_path(@chat), loading: "lazy", class: "h-full" do %>
    <div class="flex justify-center items-center h-full">
      <%= lucide_icon("loader-circle", class: "w-5 h-5 text-secondary animate-spin") %>
    </div>
  <% end %>
<% else %>
  <%= render "chats/ai_consent" %>
<% end %>

Chat state

There are two important concepts for managing the sidebar chat state:

  • "Current chat" - identified by the @chat controller instance variable
  • "Last viewed chat" - used to preserve sidebar state across page refreshes

The @chat variable is set via inheritance in ApplicationController and ChatsController. In other words, the sidebar defaults to showing the "last viewed chat" unless told otherwise by a more specific action in the inheritance hierarchy:

class ApplicationController < ActionController::Base
  before_action :set_default_chat

  private
    # By default, we show the user the last chat they interacted with
    def set_default_chat
      @last_viewed_chat = Current.user&.last_viewed_chat
      @chat = @last_viewed_chat
    end
end

class ChatsController < ApplicationController
  before_action :set_chat, only: [ :show, :edit, :update, :destroy ]

  # override application_controller default behavior of setting @chat to last viewed chat
  def index
    @chat = nil
  end

  def show
    set_last_viewed_chat(@chat)
  end

  def new
    @chat = Current.user.chats.new(title: "New chat #{Time.current.strftime("%Y-%m-%d %H:%M")}")
  end
end

Broadcasts and "thinking"

Assistant responses run in background jobs and therefore require a "thinking" indicator. The base Message model implements create and update callbacks, which broadcast changes to the Chat when broadcast? returns true for the specific message type.

  • All creates and updates broadcast directly to the chat
  • If the message is a user message, we request an assistant response, which will enqueue a response job
  • The job creates/updates assistant messages, which will trigger broadcasts to the chat through these callbacks
  • The chat's show action can have a ?thinking=true param to trigger the AI "thinking" message. The AssistantResponseJob is then responsible for removing that message when the response is complete (otherwise, it is just removed on the next page refresh since the param is passed in a turbo frame). See ChatsController#create and MessagesController#create where we redirect_to chat_path(@chat, thinking: true) to immediately show the "thinking" message.
# User messages have a special `request_response` hook
class UserMessage < Message
  validates :ai_model, presence: true

  after_create_commit :request_response_later

  def role
    "user"
  end

  def request_response_later
    chat.ask_assistant_later(self)
  end

  def request_response
    chat.ask_assistant(self)
  end

  private
    def broadcast?
      true
    end
end
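The job side of the flow — respond, then clear the thinking indicator — can be sketched in plain Ruby. This is a stand-in for the real ActiveJob, and the methods called on the chat (`ask_assistant`, `remove_thinking_indicator`) are assumptions about its API, not confirmed names:

```ruby
# Plain-Ruby sketch of the AssistantResponseJob flow described above: the
# job asks the assistant to respond, then clears the "thinking" indicator
# even if the response raises. The chat methods used here are hypothetical
# stand-ins for the app's actual API.
class SketchAssistantResponseJob
  def initialize(chat)
    @chat = chat
  end

  def perform(message)
    @chat.ask_assistant(message) # creates/updates assistant messages, triggering broadcasts
  ensure
    @chat.remove_thinking_indicator # otherwise it is cleared on the next page refresh
  end
end
```

Putting the cleanup in an `ensure` block reflects the behavior described above: the indicator is removed when the response completes, and a failed response doesn't leave the chat stuck in "thinking".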

Shpigford and others added 30 commits February 25, 2025 20:33
- Add chat and messages controllers
- Create chat and message views
- Implement chat-related routes
- Add message broadcasting and user interactions
- Update application layout to support chat sidebar
- Enhance user model with initials method
- Update sidebar layout with dynamic width and improved responsiveness
- Add new chat menu Stimulus controller for toggling between chat and chat list views
- Improve chat list display with recent chats and empty state
- Extract AI avatar to a partial for reusability
- Enhance message display and interaction styling
- Add more contextual buttons and interaction hints
- Refactor chat scroll functionality with Stimulus controller
- Optimize message scrolling in chat views
- Update message styling for better visual hierarchy
- Enhance chat container layout with flex and auto-scroll
- Simplify message rendering across different chat views
- Refactor AI avatar rendering across chat views
- Replace hardcoded avatar markup with a reusable partial
- Simplify avatar display in chats and messages views
- Add conditional width class for right sidebar panel
- Ensure consistent sidebar toggle behavior for both left and right panels
- Use specific width class for right panel (w-[375px])
- Extract message form to a reusable partial with dynamic context support
- Create flexible AI greeting partial for consistent welcome messages
- Simplify chat and sidebar views by leveraging new partials
- Add support for different form scenarios (chat, new chat, sidebar)
- Improve code modularity and reduce duplication
- Implement clear chat action in ChatsController
- Add clear chat route to support clearing messages
- Update AI sidebar with dropdown menu for chat actions
- Preserve system message when clearing chat
- Enhance chat interaction with new menu options
- Create initial frontmatter for structure.mdc file
- Include description and configuration options
- Prepare for potential dynamic documentation rendering
- Add rule for using `Current.family` instead of `current_family`
- Include new guidelines for testing, API routes, and solution approach
- Expand project-specific rules for more consistent development practices
- Add `ruby-openai` gem for AI integration
- Implement `to_ai_readable_hash` methods in BalanceSheet and IncomeStatement
- Include Promptable module in both models
- Add savings rate calculation method in IncomeStatement
- Prepare financial models for AI-powered insights and interactions
…apabilities

- Implement comprehensive AI financial query system with function-based interactions
- Add detailed debug logging for AI responses and function calls
- Extend BalanceSheet and IncomeStatement models with AI-friendly methods
- Create robust error handling and fallback mechanisms for AI queries
- Update chat and message views to support debug mode and enhanced rendering
- Add AI query routes and initial test coverage for financial assistant
- Remove inline AI chat from application layout
- Enhance AI sidebar with more semantic HTML structure
- Add descriptive comments to clarify different sections of chat view
- Improve flex layout and scrolling behavior in chat messages container
- Optimize message rendering with more explicit class names and structure
- Implement `markdown` helper method in ApplicationHelper using Redcarpet
- Update message view to render AI messages with Markdown formatting
- Add comprehensive Markdown rendering options (tables, code blocks, links)
- Enhance AI Financial Assistant prompt to encourage Markdown usage
- Remove commented Markdown CSS in Tailwind application stylesheet
- Update @biomejs/biome to latest version with caret (^) notation
- Refactor AI query and chat controllers to use template literals
- Standardize npm scripts formatting in package.json
- Add family association to chat fixtures and tests
- Set consistent password digest for test users
- Enable AI for test users
- Add OpenAI access token for test environment
- Update chat and user model tests to include family context
@zachgoll zachgoll mentioned this pull request Mar 24, 2025
@zachgoll zachgoll marked this pull request as ready for review March 26, 2025 16:23
@zachgoll zachgoll changed the title Personal finance AI: Hotwire improvements, domain updates Personal finance AI (v1) Mar 26, 2025
@zachgoll zachgoll merged commit 2f6b11c into main Mar 28, 2025
5 checks passed
@zachgoll zachgoll deleted the zachgoll/ai-improvements branch March 28, 2025 17:08
@developerdsk

Alignment is very bad on small screens, like 13–14 inch laptops, with this commit. Even if I try to minimize the Personal Finance AI panel, it doesn't minimize completely.
[screenshot attached]
