Update code snippets in README #19

Merged (2 commits) on Feb 27, 2025
README.md (14 changes: 7 additions & 7 deletions)
@@ -24,30 +24,30 @@ Using this convenience API, requesting text completion from an already
 loaded LLM is as straightforward as:
 
 ```python
-import lmstudio as lm
+import lmstudio as lms
 
-llm = lm.llm()
-llm.complete("Once upon a time,")
+model = lms.llm()
+model.complete("Once upon a time,")
 ```
 
 Requesting a chat response instead only requires the extra step of
 setting up a `Chat` helper to manage the chat history and include
 it in response prediction requests:
 
 ```python
-import lmstudio as lm
+import lmstudio as lms
 
 EXAMPLE_MESSAGES = (
     "My hovercraft is full of eels!",
     "I will not buy this record, it is scratched."
 )
 
-llm = lm.llm()
-chat = lm.Chat("You are a helpful shopkeeper assisting a foreign traveller")
+model = lms.llm()
+chat = lms.Chat("You are a helpful shopkeeper assisting a foreign traveller")
 for message in EXAMPLE_MESSAGES:
     chat.add_user_message(message)
     print(f"Customer: {message}")
-    response = llm.respond(chat)
+    response = model.respond(chat)
     chat.add_assistant_response(response)
     print(f"Shopkeeper: {response}")
 ```
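After the merge, both README snippets use the `lms` import alias and a `model` variable; the API calls themselves are unchanged. For reference, a minimal combined sketch of the updated snippets as a single script, assuming (as the README text states) that LM Studio is running with a model already loaded; it uses only the calls shown in the diff:

```python
import lmstudio as lms

# Convenience API: get a handle to the already-loaded model
model = lms.llm()

# Plain text completion (first snippet)
print(model.complete("Once upon a time,"))

# Chat response (second snippet): the Chat helper tracks the history
chat = lms.Chat("You are a helpful shopkeeper assisting a foreign traveller")
chat.add_user_message("My hovercraft is full of eels!")
response = model.respond(chat)
chat.add_assistant_response(response)
print(f"Shopkeeper: {response}")
```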