AI-Summarizer

A simple web application built around RuterNorway's Llama 2 model fine-tuned for Norwegian. The user enters text and receives a summarized version.

Demo

YouTube Demo

Prerequisites

Ollama (installed and running locally)

Custom Ollama Model

  1. Download the Model:

    • Download the quantized GGUF version of the model from Hugging Face
    • Drag the model file into the models folder
    • Rename the model file to llama-2-13b-chat-norwegian.gguf
  2. Create a Model File:

    • Create a model file (referenced as .modelfile in the command below) with the following content:

      # Set the base model
      FROM llama-2-13b-chat-norwegian.gguf
      
      # Set custom parameter values
      PARAMETER temperature 0.7
      
      # Define stop tokens so each section of the template is parsed correctly
      PARAMETER stop </system>
      PARAMETER stop </user>
      PARAMETER stop </assistant>
      PARAMETER stop <|reserved_special_token|>
      
      # Set the model template to handle summarization
      TEMPLATE """
      {{ if .System }} </system>
      {{ .System }}{{ end }}
      {{ if .Prompt }} </user>
      {{ .Prompt }}{{ end }}
      </assistant>
      {{ .Response }}
      """
      
      # System prompt (in Norwegian): "You are an experienced journalist.
      # Respond with a short and accurate summary of the information provided."
      SYSTEM Du er en erfaren journalist. Svar med en kort og nøyaktig oppsummering av informasjonen som er gitt.
  3. Create the Custom Ollama Model:

    • Run the following command to create the custom model:

      ollama create llama-nor -f .modelfile
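
    • To check that the model was created, list your local models or open an interactive session with it:

      ollama list
      ollama run llama-nor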

Python Dependencies

  • Install Python dependencies using:

    pip install -r requirements.txt
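
  • The repository's requirements.txt is the authoritative dependency list. As a rough sketch, a Flask app that talks to a local Ollama server needs at least something like the following (these package names are assumptions, not the repository's exact pins):

    # Hypothetical minimal requirements.txt; consult the file shipped in the repository
    flask
    requests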

Running the Application

  1. Start the Flask Application:

    python main.py
  2. Open the Web Application:

    • Open your web browser and navigate to http://127.0.0.1:5000.

Usage

  • Enter the text you want to summarize in the "Tekst å summere" (text to summarize) field.
  • Click the "Summer" (summarize) button.
  • The summarized text will appear in the "Sammendrag" (summary) field.

Architecture

Overview

The application consists of the following components:

  1. Frontend: A simple HTML form where users can input text and receive the summarized output.
  2. Backend: A Flask server that handles requests from the frontend and interacts with the Ollama model.
  3. Model: A custom Llama model fine-tuned for Norwegian text summarization.

Data Flow

  1. User Input: The user enters text into the input field on the web page.
  2. Request Handling: The frontend sends the input text to the Flask backend via an HTTP POST request.
  3. Model Interaction: The Flask server processes the input and sends it to the Ollama model for summarization (see the sketch after this list).
  4. Response Handling: The Ollama model returns the summarized text to the Flask server.
  5. Display Output: The Flask server sends the summarized text back to the frontend, where it is displayed to the user.
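
The sketch below ties these steps together. It is not the repository's main.py: the /summarize route, the index.html template, and the form field name are illustrative assumptions, and it calls Ollama's HTTP API on the default port 11434 rather than going through a client library.

    # Illustrative sketch only; see main.py in the repository for the actual implementation.
    from flask import Flask, render_template, request, jsonify
    import requests

    app = Flask(__name__)

    OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
    MODEL_NAME = "llama-nor"                            # the custom model created above


    @app.route("/")
    def index():
        # Serve the HTML form (template name assumed for this sketch)
        return render_template("index.html")


    @app.route("/summarize", methods=["POST"])  # route name assumed for this sketch
    def summarize():
        # Steps 1-2: receive the text the user submitted from the frontend
        text = request.form.get("text", "")
        # Step 3: forward the text to the local Ollama server and wait for the full response
        response = requests.post(
            OLLAMA_URL,
            json={"model": MODEL_NAME, "prompt": text, "stream": False},
            timeout=120,
        )
        # Step 4: Ollama returns the generated summary in the "response" field
        summary = response.json().get("response", "")
        # Step 5: send the summary back to the frontend, which shows it in the "Sammendrag" field
        return jsonify({"summary": summary})


    if __name__ == "__main__":
        app.run(debug=True)  # serves on http://127.0.0.1:5000 by default

The real application may render an HTML page instead of returning JSON; the important part is the single POST round trip between the browser, Flask, and Ollama.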
