
Commit 4d264ee

Update installation doc URLs (#40)
Follow up to vllm-project/vllm#14556.
Signed-off-by: Harry Mellor <[email protected]>
1 parent: 7e828ff · commit: 4d264ee

4 files changed (+6 -6 lines)

_posts/2023-06-20-vllm.md (+1 -1)

@@ -108,7 +108,7 @@ This utilization of vLLM has also significantly reduced operational costs. With

### Get started with vLLM

-Install vLLM with the following command (check out our [installation guide](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) for more):
+Install vLLM with the following command (check out our [installation guide](https://docs.vllm.ai/en/latest/getting_started/installation.html) for more):

```bash
$ pip install vllm
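
For reference, the quickstart that the updated link points to amounts to the one-line install shown in the hunk. The version check below is an illustrative addition, not part of the post, and assumes the installed build exposes `vllm.__version__`:

```bash
# Install vLLM from PyPI, as shown in the post.
pip install vllm

# Illustrative sanity check (not from the post): confirm the package imports
# and report its version.
python -c "import vllm; print(vllm.__version__)"
```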

_posts/2024-09-05-perf-update.md (+1 -1)

@@ -150,7 +150,7 @@ Importantly, we will also focus on improving the core of vLLM to reduce the comp

### Get Involved

-If you haven’t, we highly recommend you to update the vLLM version (see instructions [here](https://docs.vllm.ai/en/latest/getting_started/installation/index.html)) and try it out for yourself\! We always love to learn more about your use cases and how we can make vLLM better for you. The vLLM team can be reached out via [[email protected]](mailto:[email protected]). vLLM is also a community project, if you are interested in participating and contributing, we welcome you to check out our [roadmap](https://roadmap.vllm.ai/) and see [good first issues](https://github.com/vllm-project/vllm/issues?q=is:open+is:issue+label:%22good+first+issue%22) to tackle. Stay tuned for more updates by [following us on X](https://x.com/vllm\_project).
+If you haven’t, we highly recommend you to update the vLLM version (see instructions [here](https://docs.vllm.ai/en/latest/getting_started/installation.html)) and try it out for yourself\! We always love to learn more about your use cases and how we can make vLLM better for you. The vLLM team can be reached out via [[email protected]](mailto:[email protected]). vLLM is also a community project, if you are interested in participating and contributing, we welcome you to check out our [roadmap](https://roadmap.vllm.ai/) and see [good first issues](https://github.com/vllm-project/vllm/issues?q=is:open+is:issue+label:%22good+first+issue%22) to tackle. Stay tuned for more updates by [following us on X](https://x.com/vllm\_project).

If you are in the Bay Area, you can meet the vLLM team at the following events: [vLLM’s sixth meetup with NVIDIA(09/09)](https://lu.ma/87q3nvnh), [PyTorch Conference (09/19)](https://pytorch2024.sched.com/event/1fHmx/vllm-easy-fast-and-cheap-llm-serving-for-everyone-woosuk-kwon-uc-berkeley-xiaoxuan-liu-ucb), [CUDA MODE IRL meetup (09/21)](https://events.accel.com/cudamode), and [the first ever vLLM track at Ray Summit (10/01-02)](https://raysummit.anyscale.com/flow/anyscale/raysummit2024/landing/page/sessioncatalog?search.sessiontracks=1719251906298001uzJ2).

_posts/2025-01-10-dev-experience.md (+3 -3)

@@ -29,7 +29,7 @@ For those who prefer a faster package manager, [**uv**](https://github.com/astra
uv pip install vllm
```

-Refer to the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html?device=cuda#create-a-new-python-environment) for more details on setting up [**uv**](https://github.com/astral-sh/uv). Using a simple server-grade setup (Intel 8th Gen CPU), we observe that [**uv**](https://github.com/astral-sh/uv) is 200x faster than pip:
+Refer to the [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html?device=cuda#create-a-new-python-environment) for more details on setting up [**uv**](https://github.com/astral-sh/uv). Using a simple server-grade setup (Intel 8th Gen CPU), we observe that [**uv**](https://github.com/astral-sh/uv) is 200x faster than pip:

```sh
# with cached packages, clean virtual environment
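
The hunk above updates the link to the docs section on creating a fresh Python environment with uv before installing. As a rough sketch of that flow (assuming uv is already installed; the Python version and the default `.venv` path are assumptions, not taken from the post):

```bash
# Create and activate a clean virtual environment with uv
# (Python 3.12 is illustrative, not prescribed by the post).
uv venv --python 3.12
source .venv/bin/activate

# Install vLLM into the new environment, exactly as the post shows.
uv pip install vllm
```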
@@ -77,11 +77,11 @@ VLLM_USE_PRECOMPILED=1 pip install -e .

The `VLLM_USE_PRECOMPILED=1` flag instructs the installer to use pre-compiled CUDA kernels instead of building them from source, significantly reducing installation time. This is perfect for developers focusing on Python-level features like API improvements, model support, or integration work.

-This lightweight process runs efficiently, even on a laptop. Refer to our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html?device=cuda#build-wheel-from-source) for more advanced usage.
+This lightweight process runs efficiently, even on a laptop. Refer to our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html?device=cuda#build-wheel-from-source) for more advanced usage.

### C++/Kernel Developers

-For advanced contributors working with C++ code or CUDA kernels, we incorporate a compilation cache to minimize build time and streamline kernel development. Please check our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html?device=cuda#build-wheel-from-source) for more details.
+For advanced contributors working with C++ code or CUDA kernels, we incorporate a compilation cache to minimize build time and streamline kernel development. Please check our [documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html?device=cuda#build-wheel-from-source) for more details.

## Track Changes with Ease
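
For context, the Python-only developer flow this hunk links to boils down to an editable install that reuses released CUDA kernels. A minimal sketch, assuming a CUDA-capable machine and an HTTPS clone of the repository (the clone step itself is not part of the hunk):

```bash
# Clone vLLM and install it in editable mode without compiling kernels;
# VLLM_USE_PRECOMPILED=1 reuses pre-compiled CUDA kernels from a released wheel.
git clone https://github.com/vllm-project/vllm.git
cd vllm
VLLM_USE_PRECOMPILED=1 pip install -e .
```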

_posts/2025-01-27-intro-to-llama-stack-with-vllm.md (+1 -1)

@@ -49,7 +49,7 @@ huggingface-cli login --token <YOUR-HF-TOKEN>
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --local-dir /tmp/test-vllm-llama-stack/.cache/huggingface/hub/models/Llama-3.2-1B-Instruct
```

-Next, let's build the vLLM CPU container image from source. Note that while we use it for demonstration purposes, there are plenty of [other images available for different hardware and architectures](https://docs.vllm.ai/en/latest/getting_started/installation/index.html).
+Next, let's build the vLLM CPU container image from source. Note that while we use it for demonstration purposes, there are plenty of [other images available for different hardware and architectures](https://docs.vllm.ai/en/latest/getting_started/installation.html).

```
git clone [email protected]:vllm-project/vllm.git /tmp/test-vllm-llama-stack
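
That post goes on to build a CPU-only vLLM image from the cloned tree. The Dockerfile path and image tag below are assumptions (the file has lived both at the repository root and under `docker/` across vLLM releases), so treat this as a sketch rather than the post's exact command:

```bash
cd /tmp/test-vllm-llama-stack
# Build a CPU-only vLLM container image from source.
# Dockerfile path and image tag are illustrative assumptions.
docker build -f Dockerfile.cpu -t vllm-cpu .
```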
