
🦜🔗 LangServe Replit Template


This template shows how to deploy a LangChain Expression Language Runnable as a set of HTTP endpoints with stream and batch support using LangServe onto Replit, a collaborative online code editor and platform for creating and deploying software.

Getting started

The default chat endpoint is a chain that translates questions into pirate dialect.

  1. Deploy your app to Replit by clicking here.
    • You will also need to set an OPENAI_API_KEY environment variable under Tools > Secrets in the bottom-left corner.
    • To enable tracing, you'll also need to set LANGCHAIN_TRACING_V2=true, LANGCHAIN_API_KEY, and optionally LANGCHAIN_SESSION.
  2. Run pip install -U langchain-cli to install the LangChain CLI.
  3. Run poetry install to install the required dependencies.
  4. Press Run on main.py.
  5. Navigate to https://your_url.repl.co/docs/ to see documentation for your live runnable, and https://your_url.repl.co/pirate-speak/playground/ to access a playground where you can try sending requests!

As you experiment, you can install the LangSmith Replit extension to see traces of your runs in action by navigating to either your default project or the one set in your LANGCHAIN_SESSION environment variable.
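For reference, the environment variables from the steps above can be sketched in shell form (set these via Tools > Secrets in Replit rather than a shell profile; the values shown are placeholders):

```shell
export OPENAI_API_KEY="sk-..."          # required; placeholder value
export LANGCHAIN_TRACING_V2="true"      # enables LangSmith tracing
export LANGCHAIN_API_KEY="ls-..."       # required if tracing is enabled; placeholder
export LANGCHAIN_SESSION="my-project"   # optional LangSmith project name
```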

Calling from the client

You can use the RemoteRunnable class in LangServe to call these hosted runnables:

from langserve import RemoteRunnable

pirate_chain = RemoteRunnable("https://your_url.repl.co/pirate-speak/")

pirate_chain.invoke({"question": "how are you?"})

# or async
await pirate_chain.ainvoke({"question": "how are you?"})

# Supports astream
async for msg in pirate_chain.astream({"question": "how are you?"}):
    print(msg, end="", flush=True)

In TypeScript (requires LangChain.js version 0.0.166 or later):

import { RemoteRunnable } from "langchain/runnables/remote";

const pirateChain = new RemoteRunnable({ url: `https://your_url.repl.co/pirate-speak/` });
const result = await pirateChain.invoke({
  "question": "how are you?",
});

You can also use curl:

curl --location --request POST 'https://your_url.repl.co/pirate-speak/invoke' \
--header 'Content-Type: application/json' \
--data-raw '{
    "input": {
        "question": "how are you?"
    }
}'

Adding more chains

You can add more chains from a variety of templates by using the LangChain CLI:

$ langchain app add <template name>

For full docs and a list of possible templates, see the official page here.

API reference

LangServe makes the following endpoints available:

  • POST /my_runnable/invoke - invoke the runnable on a single input
  • POST /my_runnable/batch - invoke the runnable on a batch of inputs
  • POST /my_runnable/stream - invoke on a single input and stream the output
  • POST /my_runnable/stream_log - invoke on a single input and stream the output, including partial outputs of intermediate steps
  • GET /my_runnable/input_schema - json schema for input to the runnable
  • GET /my_runnable/output_schema - json schema for output of the runnable
  • GET /my_runnable/config_schema - json schema for config of the runnable

You can navigate to https://your_url.repl.co/docs/ to see generated documentation.

Thank you!


Follow LangChain on X (formerly Twitter) @LangChainAI for more!
