Use client and server for "ramalama run" #993
base: main
Conversation
Reviewer's Guide by Sourcery
This pull request focuses on using vLLM server and llama-server as the endpoint. It removes the …
Sequence diagram for exec_cmd with stderr redirection
sequenceDiagram
participant Caller
participant common
participant os
participant sys
Caller->>common: exec_cmd(args, debug=False, stderr2null=True)
alt stderr2null is True
common->>os: open(os.devnull, 'w')
os-->>common: devnull
common->>os: dup2(devnull.fileno(), sys.stderr.fileno())
os-->>common: Redirect stderr to devnull
end
common->>os: execvp(args[0], args)
alt Exception
os-->>common: Exception
common-->>Caller: Exception
else Success
os-->>common: Return Code
common-->>Caller: Return Code
end
Updated class diagram for common.py
classDiagram
class common {
+exec_cmd(args, debug=False, stderr2null=False)
}
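Read together, the diagrams describe a stderr2null option on common.exec_cmd that silences stderr before exec'ing the target command. A minimal sketch of that flow, following the diagrams (the exact body in ramalama's common.py may differ, and how the debug flag is handled here is an assumption):

```python
import os
import sys


def exec_cmd(args, debug=False, stderr2null=False):
    """Replace the current process with `args`, optionally silencing stderr."""
    if debug:
        print(f"exec_cmd: {args}", file=sys.stderr)

    if stderr2null:
        # Point this process's stderr at /dev/null; the exec'ed program
        # inherits the redirected file descriptor.
        devnull = open(os.devnull, "w")
        os.dup2(devnull.fileno(), sys.stderr.fileno())

    # os.execvp only returns control if it fails, in which case the
    # exception propagates to the caller, as in the sequence diagram.
    return os.execvp(args[0], args)
```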
File-Level Changes
Hey @ericcurtin - I've reviewed your changes - here's some feedback:
Overall Comments:
- It's unusual to redirect stderr using os.dup2; consider using the subprocess module instead (a sketch of that alternative follows below).
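For comparison, a sketch of the subprocess-based approach the comment alludes to (not code from this PR; the run_cmd name is hypothetical, and the semantics differ, since subprocess spawns a child rather than replacing the current process the way os.execvp does):

```python
import subprocess


def run_cmd(args, stderr2null=False):
    # Redirect only the child's stderr; the parent's stream is untouched,
    # so no os.dup2 on the current process is needed.
    stderr = subprocess.DEVNULL if stderr2null else None
    return subprocess.run(args, stderr=stderr, check=False).returncode
```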
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
libexec/ramalama-run-core
Outdated
def main(args):
    return 0

sys.path.append('./')
os.environ["PATH"] += os.pathsep + "libexec"
suggestion (code-quality): Use f-string instead of string concatenation (use-fstring-for-concatenation)
os.environ["PATH"] += os.pathsep + "libexec" | |
os.environ["PATH"] += f"{os.pathsep}libexec" |
This is super messy, just to show people where I am going; nowhere near ready for merge.
Force-pushed from a25bb2e to 83dd896 (Compare)
To focus on vLLM server and llama-server being the endpoint
Signed-off-by: Eric Curtin <[email protected]>
Force-pushed from 83dd896 to cde3d9e (Compare)
To focus on vLLM server and llama-server being the endpoint
Summary by Sourcery
Refactor ramalama to use a client-server architecture.
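As a rough illustration of that client-server direction (not code from this PR): llama-server and vLLM both expose an OpenAI-compatible HTTP API, so a client-side "run" could talk to a local serving endpoint along these lines. The host, port, and payload below are assumptions made for the sketch.

```python
import json
import urllib.request

# Hypothetical local endpoint; the actual host/port ramalama uses is
# not taken from this PR.
url = "http://127.0.0.1:8080/v1/chat/completions"

payload = {"messages": [{"role": "user", "content": "Hello"}]}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
    print(reply["choices"][0]["message"]["content"])
```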
Chores: