This template shows how to deploy a LangChain Expression Language Runnable as a set of HTTP endpoints, with stream and batch support, using LangServe on Replit, a collaborative online code editor and platform for creating and deploying software.
The default chat endpoint is a chain that translates questions into pirate dialect.
- Deploy your app to Replit by clicking here.
- Set an `OPENAI_API_KEY` environment variable by going under **Tools > Secrets** in the bottom left corner.
- To enable tracing, you'll also need to set `LANGCHAIN_TRACING_V2=true`, `LANGCHAIN_API_KEY`, and optionally `LANGCHAIN_SESSION`.
- Run `pip install -U langchain-cli` to install the required command.
- Run `poetry install` to install the required dependencies.
- Press **Run** on `main.py`.
- Navigate to `https://your_url.repl.co/docs/` to see documentation for your live runnable, and `https://your_url.repl.co/pirate-speak/playground/` to access a playground where you can try sending requests!
As you experiment, you can install the LangSmith Replit extension to see traces of your runs in action by navigating to either your default project or the one set in your `LANGCHAIN_SESSION` environment variable.
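When experimenting locally rather than through Replit Secrets, the same variables can be set in-process. A minimal sketch with placeholder values (the keys shown are not real credentials):

```python
import os

# Placeholder values; on Replit these would come from Tools > Secrets.
os.environ["OPENAI_API_KEY"] = "sk-..."          # your OpenAI key
os.environ["LANGCHAIN_TRACING_V2"] = "true"      # turn tracing on
os.environ["LANGCHAIN_API_KEY"] = "ls-..."       # your LangSmith key
os.environ["LANGCHAIN_SESSION"] = "pirate-demo"  # optional project name
```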
You can use the `RemoteRunnable` class in LangServe to call these hosted runnables:

```python
from langserve import RemoteRunnable

pirate_chain = RemoteRunnable("https://your_url.repl.co/pirate-speak/")
pirate_chain.invoke({"question": "how are you?"})

# or async
await pirate_chain.ainvoke({"question": "how are you?"})

# Supports astream
async for msg in pirate_chain.astream({"question": "how are you?"}):
    print(msg, end="", flush=True)
```
In TypeScript (requires LangChain.js version 0.0.166 or later):

```typescript
import { RemoteRunnable } from "langchain/runnables/remote";

const pirateChain = new RemoteRunnable({
  url: `https://your_url.repl.co/pirate-speak/`,
});
const result = await pirateChain.invoke({
  question: "how are you?",
});
```
You can also use `curl`:

```shell
curl --location --request POST 'https://your_url.repl.co/pirate-speak/invoke' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "input": {
      "question": "how are you?"
    }
  }'
```
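Note the body shape: the chain's input is nested under a top-level `"input"` key. A quick sketch of building the same payload in Python, following the curl example above:

```python
import json

# The /invoke endpoint wraps the chain's input under an "input" key.
payload = {"input": {"question": "how are you?"}}
body = json.dumps(payload)
print(body)  # → {"input": {"question": "how are you?"}}
```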
You can add more chains from a variety of templates by using the LangChain CLI:
```shell
langchain app add <template name>
```
For full docs and a list of possible templates, see the official page here.
LangServe makes the following endpoints available:

- `POST /my_runnable/invoke` - invoke the runnable on a single input
- `POST /my_runnable/batch` - invoke the runnable on a batch of inputs
- `POST /my_runnable/stream` - invoke on a single input and stream the output
- `POST /my_runnable/stream_log` - invoke on a single input and stream the output, including partial outputs of intermediate steps
- `GET /my_runnable/input_schema` - JSON schema for input to the runnable
- `GET /my_runnable/output_schema` - JSON schema for output of the runnable
- `GET /my_runnable/config_schema` - JSON schema for config of the runnable
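For example, the batch endpoint accepts several inputs in a single request. A hedged sketch of its request body, assuming LangServe groups the inputs under an `"inputs"` key (confirm the exact schema against your generated `/docs/` page):

```python
import json

# One HTTP body carrying two chain inputs for POST /my_runnable/batch.
# The "inputs" key name is an assumption here; verify it via /docs/.
batch_body = json.dumps(
    {"inputs": [{"question": "how are you?"}, {"question": "where be the map?"}]}
)
```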
You can navigate to `https://your_url.repl.co/docs/` to see generated documentation.
Follow LangChain on X (formerly Twitter) @LangChainAI for more!