feat: install Run:ai model streamer for vllm #4848
Conversation
👋 Hi gtbai! Thank you for contributing to ai-dynamo/dynamo.
Walkthrough: Added the Run:ai model streamer dependency to the vLLM dependency configuration.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes. These are consistent, homogeneous dependency configuration updates with no logic changes or structural complexity.
Pre-merge checks: ✅ 3 passed.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Signed-off-by: Guangtong Bai <[email protected]>
gtbai force-pushed from a6504db to af221f0
/ok to test af221f0
Thanks for the contribution, @gtbai!
@ganeshku1 @nicolasnoble @itay for viz
@rmccorm4 thanks for the review! I saw the Docker Build and Test / vllm (amd64) (push) check fail on my previous commit, but it seems to have been an unrelated error. I updated the branch hoping to retry the failed check, but it did not get triggered. Could you help rerun the checks? Thanks!
/ok to test b1a0b67
@rmccorm4 some checks failed again, this time seemingly due to another transient failure :)
Overview:
Install the Run:ai dependency for Dynamo vLLM so that the Run:ai model streamer can be used to load models from local paths.
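For reference, a minimal sketch of what this enables, using plain vLLM commands rather than Dynamo's own launch path (the model path is illustrative, not from this PR; the `runai` extra and `--load-format runai_streamer` flag are the documented vLLM names):

```bash
# Install vLLM with the Run:ai model streamer extra.
pip install "vllm[runai]"

# Serve a model from a local path, using the Run:ai model streamer
# as the weight loader instead of the default safetensors loader.
# /models/llama-3-8b is a placeholder path.
vllm serve /models/llama-3-8b --load-format runai_streamer
```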
Test
Without the change, saw:
With the change, saw:
Chat completions API sanity check was also OK:
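For context, a chat-completions sanity check against the OpenAI-compatible endpoint typically looks like the following; the port, model name, and prompt here are assumptions for illustration, not taken from the actual test logs:

```bash
# Assumed local endpoint and placeholder model path; adjust to your deployment.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "/models/llama-3-8b",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```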