feat: add lsp-ai support #3206
Conversation
Force-pushed from 957e7ca to 19b7ea7
Needs a rebase.
Thanks for the heads-up, will do and attempt to finish this in the next few days.
Force-pushed from e9f60ab to 195b0ca
I finally got to play a bit with an up-to-date version of the language server to make sure everything works; it seems OK. I still don't think there's much more that can be provided in terms of reasonable defaults (except maybe something for the root dir?), so this is the bare minimum required for the server to attach without complaints.
https://github.com/SilasMarvin/lsp-ai

LSP-AI is an open source language server that serves as a backend for AI-powered functionality in your favorite code editors. It offers features like in-editor chatting with LLMs and code completions.
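With the config this PR adds, attaching is just a one-liner (a sketch; as the PR description notes, the defaults alone won't produce completions until a backend is configured):

```lua
-- Sketch: with this PR merged, the server should attach with an empty
-- setup call, but no completions are generated until a model backend is
-- configured via init_options (see the PR description).
require('lspconfig').lsp_ai.setup {}
```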
How does chat work? Is that a custom LSP request?
There's one available (`document/textGeneration` IIRC), but it feels somewhat low-level. The more user-friendly way is to define a prefix in the config that enables the chat code action when it is present in the buffer.
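As a sketch of that prefix-based flow (field names follow my reading of the lsp-ai README and may have changed upstream; `!C`, `model1`, and the parameter values are placeholders):

```lua
-- Hypothetical chat configuration passed via lsp-ai's init_options.
-- Typing the trigger prefix ("!C" here) in a buffer should make the chat
-- code action available. Field names are assumptions based on the lsp-ai
-- README; verify against upstream docs before relying on them.
init_options = {
  memory = { file_store = vim.empty_dict() },
  models = {
    model1 = {
      type = 'open_ai',
      chat_endpoint = 'https://api.openai.com/v1/chat/completions',
      model = 'gpt-4o',
      auth_token_env_var_name = 'OPENAI_API_KEY',
    },
  },
  chat = {
    {
      trigger = '!C',                -- prefix that enables the chat code action
      action_display_name = 'Chat',
      model = 'model1',
      parameters = { max_context = 4096, max_tokens = 1024 },
    },
  },
}
```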
    models = vim.empty_dict(),
  },
},
docs = {
Getting an issue downstream with this being a table instead of just a string. Do we need to support `desc` sometimes being a table of strings?
pr welcome
I was just double-checking whether it was supposed to be strings only or whether you do expect tables for this; if it can be changed, I can do a quick PR. Created #3374 to address it.
Yeah, that looks like a mistake on my part. Thanks for the PR 👍
This PR adds support for the lsp-ai language server. Since the server is language-agnostic, I've set the defaults to attach to all filetypes, and always run in single-file mode.
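For reference, a sketch of what such a default config could look like in nvim-lspconfig terms (illustrative only; the actual PR may structure this differently, and the `init_options` keys follow my reading of the lsp-ai README):

```lua
-- Illustrative sketch, not the PR's exact code. The server is
-- language-agnostic, so no filetypes restriction is baked in, and
-- single_file_support lets it attach without a detected project root.
return {
  default_config = {
    cmd = { 'lsp-ai' },
    single_file_support = true,
    init_options = {
      memory = { file_store = vim.empty_dict() }, -- in-memory document store
      models = vim.empty_dict(),                  -- user must define a model
    },
  },
  docs = {
    description = [[lsp-ai: a language-agnostic AI-powered language server]],
  },
}
```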
Do note that, as is, this is not a full, functional configuration (it shouldn't error, it just won't generate any completions), as there are parts for which I don't think there are good default values. So I just want to be sure you're good with this as long as I document it, since from a quick look at the supported server list it looks like most (all?) servers work out of the box with the default config of an empty `setup{}` call. Then again, this one is a bit unusual.

In particular, a user would need to select an inference backend, which could either be some API requiring an API key or some other user-managed inference server (no sane defaults for us to provide here), or have `lsp-ai` do inference itself with `llama.cpp`, which requires a decent GPU as well as downloading a few GBs of model files from HuggingFace, which I don't think is a great default either.
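To make that trade-off concrete, here is a sketch of what selecting the local `llama.cpp` backend might look like (field names are assumptions from my reading of the lsp-ai docs; the model path and GPU-layer count are placeholders):

```lua
-- Hypothetical local-inference setup: lsp-ai loads a GGUF model itself
-- through its llama.cpp backend. Field names may differ upstream; the
-- file path and n_gpu_layers value are placeholders.
init_options = {
  memory = { file_store = vim.empty_dict() },
  models = {
    model1 = {
      type = 'llama_cpp',
      file_path = '/path/to/model.gguf', -- multi-GB download from HuggingFace
      n_ctx = 2048,
      n_gpu_layers = 35, -- offloading layers needs a decent GPU
    },
  },
  completion = {
    model = 'model1',
    parameters = { max_tokens = 64 },
  },
}
```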