
Commit a7f9cae (1 parent: 4d264ee)

add vllm meetup notes

Signed-off-by: 申杉杉 <[email protected]>

7 files changed: +66 -0 lines changed
---
layout: post
title: "vLLM Beijing Meetup: Innovation, Ecosystem and Community"
author: "vLLM / vLLM Ascend / verl / LLaMAFactory Team"
image: /assets/logos/vllm-logo-text-light.png
---

On March 16, 2025, we hosted the tenth vLLM meetup together with Huawei. The vLLM, [verl](https://github.com/volcengine/verl), [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), and [vLLM Ascend](https://github.com/vllm-project/vllm-ascend) teams shared how vLLM is leveraged in post-training, fine-tuning, and deployment.

<p align="center">
<picture>
<img src="/assets/figures/vllm-2025-beijing-meetup/0.png" width="45%">
</picture>
</p>

Notably, this meetup marked the first vLLM community event held in China. It not only provided a great platform for exchange among major Chinese enterprises and universities, but also strengthened the connection between the vLLM community and Chinese developers. Going forward, we hope that through the joint efforts of the vLLM community and Chinese users, vLLM will become even more refined, efficient, and user-friendly.

## Talks

### vLLM Update

<p align="center">
<picture>
<img src="/assets/figures/vllm-2025-beijing-meetup/1.png" width="45%">
</picture>
</p>

Zhang Chen, one of the maintainers of vLLM, shared recent work within the vLLM community and the release plan for the upcoming v0.8.0. She also highlighted the new features of the vLLM V1 engine and how to use them. Compared to V0, the V1 engine is more concise and efficient, and it will become the default option for vLLM in the future.
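
The talk itself did not include code, but for readers who want to try V1: a minimal sketch, assuming a vLLM build from around this period, where the V1 engine was opted into via the `VLLM_USE_V1` environment variable before it became the default (the model name below is just a placeholder):

```python
import os

# Opt into the V1 engine. In vLLM releases around v0.7.x/v0.8.0 this
# was controlled by an environment variable; set it before importing vllm.
os.environ["VLLM_USE_V1"] = "1"

from vllm import LLM, SamplingParams

# Placeholder model; substitute any model you have access to.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["What is vLLM?"], params)
print(outputs[0].outputs[0].text)
```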

<p align="center">
<picture>
<img src="/assets/figures/vllm-2025-beijing-meetup/2.png" width="45%">
</picture>
</p>

You Kaichao, another maintainer of vLLM, shared the current state of the vLLM project's ecosystem in industry, as well as the vLLM community's communication channels in China, including its Zhihu and WeChat official accounts, which help the vLLM team connect better with Chinese developers.

### vLLM Hardware Plugin Mechanism and Ascend Best Practices

<p align="center">
<picture>
<img src="/assets/figures/vllm-2025-beijing-meetup/3.png" width="45%">
</picture>
</p>

Wang Xiyuan, an engineer at Huawei and a core maintainer of the vllm-ascend project, presented Huawei's work on the vLLM hardware plugin mechanism, using the Ascend NPU as an example, and explained the technology that allows vLLM to support multiple device backends with ease. He also introduced Huawei's Ascend AI chips, the principles and features of the CANN computing architecture, and Huawei's future plans for the vLLM community.
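
To give a flavor of the mechanism (a sketch based on vLLM's out-of-tree platform plugin design; the package, module, and class names below are hypothetical, modeled on how vllm-ascend plugs in): a hardware backend ships as an ordinary Python package that registers a function under the `vllm.platform_plugins` entry-point group, and that function returns the fully qualified name of its `Platform` subclass, or `None` when the hardware is absent:

```python
# my_hardware_plugin/__init__.py -- hypothetical out-of-tree backend.
#
# Registered in the plugin's pyproject.toml:
#   [project.entry-points."vllm.platform_plugins"]
#   my_hardware = "my_hardware_plugin:register"

def register():
    """Called by vLLM at startup while discovering platform plugins.

    Returns the fully qualified name of a vllm Platform subclass,
    or None if this hardware's runtime is not available.
    """
    try:
        import my_hardware_runtime  # hypothetical device runtime/driver
    except ImportError:
        return None
    return "my_hardware_plugin.platform.MyHardwarePlatform"
```

Because discovery happens through entry points, installing the plugin package alongside vLLM is all that is needed; vLLM picks up the new backend without any change to its own code.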

### verl: A Hybrid Controller-based RLHF Framework

<p align="center">
<picture>
<img src="/assets/figures/vllm-2025-beijing-meetup/4.png" width="45%">
</picture>
</p>

Zhang Chi, an engineer at ByteDance and a core developer of the verl project, shared ByteDance's research and work on reinforcement learning fine-tuning frameworks. He focused on the pain points verl currently addresses and the core working principles of its Hybrid Controller.

### Best Practices for the Efficient Fine-Tuning Framework LLaMA-Factory with vLLM

<p align="center">
<picture>
<img src="/assets/figures/vllm-2025-beijing-meetup/5.png" width="45%">
</picture>
</p>

Zheng Yaowei, a researcher at Beihang University and a core maintainer of the LLaMA-Factory project, shared the current state of large-model fine-tuning and the new graphical interface recently launched by LLaMA-Factory. He also introduced how LLaMA-Factory works with frameworks like vLLM to give developers maximum ease of use.