
Commit 94ce61d

Shorten ReadMe
ghstack-source-id: 5517ce3a52ff73107f1b378ae36e5ffcd52909ba
Pull Request resolved: #237
1 parent 6d049b5 commit 94ce61d

File tree

3 files changed: +10 −353 lines


README.md

+5 −164
@@ -17,6 +17,8 @@ Requirements:

> ℹ️ This project is in Beta. `torch::deploy` is ready for use in production environments but may have some rough edges that we're continuously working on improving. We're always interested in hearing feedback and use cases that you might have. Feel free to reach out!

## The Easy Path to Installation

## Installation

### Building via Docker
@@ -183,170 +185,9 @@ cd build
./test_deploy
```

## Examples

See the [examples directory](./examples) for complete examples.

### Packaging a model for `multipy::runtime`

``multipy::runtime`` can load and run Python models that are packaged with
``torch.package``. You can learn more about ``torch.package`` in the ``torch.package`` [documentation](https://pytorch.org/docs/stable/package.html#tutorials).

For now, let's create a simple model that we can load and run in ``multipy::runtime``.

```python
from torch.package import PackageExporter
import torchvision

# Instantiate some model
model = torchvision.models.resnet.resnet18()

# Package and export it.
with PackageExporter("my_package.pt") as e:
    e.intern("torchvision.**")
    e.extern("numpy.**")
    e.extern("sys")
    e.extern("PIL.*")
    e.extern("typing_extensions")
    e.save_pickle("model", "model.pkl", model)
```

Note that since "numpy", "sys", "PIL", and "typing_extensions" were marked as "extern", `torch.package` will
look for these dependencies on the system that loads this package. They will not be packaged
with the model.

Now, there should be a file named ``my_package.pt`` in your working directory.
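
For a quick sanity check before moving to C++, you can load the package back in plain Python with `torch.package.PackageImporter`, for example:

```python
import torch
from torch.package import PackageImporter

# Load the archive produced by the PackageExporter snippet above.
importer = PackageImporter("my_package.pt")
model = importer.load_pickle("model", "model.pkl")

# resnet18 takes NCHW image batches and returns 1000-way logits.
output = model(torch.randn(1, 3, 224, 224))
print(output.shape)  # torch.Size([1, 1000])
```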

<br>

### Load the model in C++
```cpp
#include <multipy/runtime/deploy.h>
#include <multipy/runtime/path_environment.h>
#include <torch/script.h>
#include <torch/torch.h>

#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-package>\n";
    return -1;
  }

  // Start an interpreter manager governing 4 embedded interpreters.
  std::shared_ptr<multipy::runtime::Environment> env =
      std::make_shared<multipy::runtime::PathEnvironment>(
          std::getenv("PATH_TO_EXTERN_PYTHON_PACKAGES") // Make sure to set this environment variable (e.g. /home/user/anaconda3/envs/multipy-example/lib/python3.8/site-packages)
      );
  multipy::runtime::InterpreterManager manager(4, env);

  try {
    // Load the model from the multipy.package.
    multipy::runtime::Package package = manager.loadPackage(argv[1]);
    multipy::runtime::ReplicatedObj model = package.loadPickle("model", "model.pkl");
  } catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    std::cerr << e.msg();
    return -1;
  }

  std::cout << "ok\n";
}
```

This small program introduces many of the core concepts of ``multipy::runtime``.

An ``InterpreterManager`` abstracts over a collection of independent Python
interpreters, allowing you to load balance across them when running your code.

``PathEnvironment`` enables you to specify the location of Python
packages on your system which are external, but necessary, for your model.

Using the ``InterpreterManager::loadPackage`` method, you can load a
``multipy.package`` from disk and make it available to all interpreters.

``Package::loadPickle`` allows you to retrieve specific Python objects
from the package, like the ResNet model we saved earlier.

Finally, the model itself is a ``ReplicatedObj``. This is an abstract handle to
an object that is replicated across multiple interpreters. When you interact
with a ``ReplicatedObj`` (for example, by calling ``forward``), it selects
a free interpreter to execute that interaction.
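
To make this concrete, here is a minimal sketch of running inference on the loaded model. It belongs inside the ``try`` block above, where ``model`` is in scope, and it assumes ``ReplicatedObj``'s call operator accepts a vector of ``torch::jit::IValue`` inputs and returns an ``at::IValue``, as in the ``torch::deploy`` tutorial:

```cpp
// Create an example input for the packaged resnet18.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224}));

// Execute the model; each call is dispatched to whichever
// of the 4 embedded interpreters is currently free.
at::Tensor output = model(inputs).toTensor();
std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
```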

<br>

### Build and execute the C++ example

Assuming the above C++ program was stored in a file called `example-app.cpp`, a
minimal `CMakeLists.txt` file would look like:

```cmake
cmake_minimum_required(VERSION 3.12 FATAL_ERROR)
project(multipy_tutorial)

set(MULTIPY_PATH ".." CACHE PATH "The repo where multipy is built or the PYTHONPATH")

# include the multipy utils to help link against
include(${MULTIPY_PATH}/multipy/runtime/utils.cmake)

# add headers from multipy
include_directories(${MULTIPY_PATH})

# link the multipy prebuilt binary
add_library(multipy_internal STATIC IMPORTED)
set_target_properties(multipy_internal
    PROPERTIES
    IMPORTED_LOCATION
        ${MULTIPY_PATH}/multipy/runtime/build/libtorch_deploy.a)
caffe2_interface_library(multipy_internal multipy)

add_executable(example-app example-app.cpp)
target_link_libraries(example-app PUBLIC "-Wl,--no-as-needed -rdynamic" dl pthread util multipy c10 torch_cpu)
```

Currently, it is necessary to build ``multipy::runtime`` as a static library.
In order to correctly link to a static library, the utility ``caffe2_interface_library``
is used to appropriately set and unset the ``--whole-archive`` flag.

Furthermore, the ``-rdynamic`` flag is needed when linking the executable
to ensure that symbols are exported to the dynamic table, making them accessible
to the deploy interpreters (which are dynamically loaded).

**Updating LIBRARY_PATH and LD_LIBRARY_PATH**

In order to locate dependencies provided by PyTorch (e.g. `libshm`), we need to update the `LIBRARY_PATH` and `LD_LIBRARY_PATH` environment variables to include the path to PyTorch's C++ libraries. If you installed PyTorch using pip or conda, this path is usually in the site-packages. An example of this is provided below.

```bash
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/anaconda3/envs/multipy-example/lib/python3.8/site-packages/torch/lib"
export LIBRARY_PATH="$LIBRARY_PATH:/home/user/anaconda3/envs/multipy-example/lib/python3.8/site-packages/torch/lib"
```

The last step is configuring and building the project. Assuming that our code
directory is laid out like this:

```
example-app/
    CMakeLists.txt
    example-app.cpp
```

We can now run the following commands to build the application from within the
``example-app/`` folder:

```bash
cmake -S . -B build -DMULTIPY_PATH="/home/user/repos/multipy" # the parent directory of multipy (i.e. the git repo)
cmake --build build --config Release -j
```

Now we can run our app:

```bash
./example-app /path/to/my_package.pt
```
## Getting Started with `torch::deploy`
Once you have `torch::deploy` built, check out our [tutorials](https://pytorch.org/multipy/latest/tutorials/tutorial_root.html) and
[API documentation](https://pytorch.org/multipy/latest/api/library_root.html).

## Contributing

docs/source/index.rst

+2 −3
@@ -3,9 +3,8 @@
``torch::deploy`` [Beta]
========================

``torch::deploy`` is a system that allows you to load multiple python interpreters which execute PyTorch models, and run them in a single C++ process. Effectively, it allows people to multithread their PyTorch models.
For more information on how ``torch::deploy`` works, please see the related `arXiv paper <https://arxiv.org/pdf/2104.00254.pdf>`_. We plan to further generalize ``torch::deploy`` into a more generic system, ``multipy::runtime``, which is more suitable for arbitrary python programs rather than just PyTorch applications.

``torch::deploy`` (MultiPy for non-PyTorch use cases) is a C++ library that enables you to run eager mode PyTorch models in production without any modifications to your model to support tracing. ``torch::deploy`` provides a way to run using multiple independent Python interpreters in a single process without a shared global interpreter lock (GIL).
For more information on how ``torch::deploy`` works, please see the related `arXiv paper <https://arxiv.org/pdf/2104.00254.pdf>`_.

Documentation

docs/source/setup.rst

+3 −186
@@ -1,186 +1,3 @@
Installation
============

Building ``torch::deploy`` via Docker
-------------------------------------

The easiest way to build ``torch::deploy``, along with fetching all interpreter
dependencies, is to do so via Docker.

.. code:: shell

    git clone https://github.com/pytorch/multipy.git
    cd multipy
    export DOCKER_BUILDKIT=1
    docker build -t multipy .

The built artifacts are located in ``multipy/runtime/build``.

To run the tests:

.. code:: shell

    docker run --rm multipy multipy/runtime/build/test_deploy

Installing via ``pip install``
------------------------------

We support installing both the python modules and the c++ bits (through ``CMake``)
using a single ``pip install -e .`` command, with the caveat of having to manually
install the dependencies first.

First clone multipy and update the submodules:

.. code:: shell

    git clone https://github.com/pytorch/multipy.git
    cd multipy
    git submodule sync && git submodule update --init --recursive

Installing system dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The runtime system dependencies are specified in
``build-requirements.txt``. To install them on Debian-based systems, one
could run:

.. code:: shell

    sudo apt update
    xargs sudo apt install -y -qq --no-install-recommends < build-requirements.txt

Installing environment encapsulators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We recommend using the isolated python environments of either `conda
<https://docs.conda.io/projects/continuumio-conda/en/latest/user-guide/install/index.html#regular-installation>`__
or `pyenv + virtualenv <https://github.com/pyenv/pyenv.git>`__
because ``torch::deploy`` requires a
position-independent version of python to launch interpreters with. For
``conda`` environments we use the prebuilt ``libpython-static=3.x``
libraries from ``conda-forge`` to link with at build time. For
``virtualenv``/``pyenv``, we compile python with the ``-fPIC`` flag to create the
linkable library.
65-
.. warning::
66-
While `torch::deploy` supports Python versions 3.7 through 3.10,
67-
the ``libpython-static`` libraries used with ``conda`` environments
68-
are only available for ``3.8`` onwards. With ``virtualenv``/``pyenv``
69-
any version from 3.7 through 3.10 can be
70-
used, as python can be built with the ``-fPIC`` flag explicitly.
71-
72-
Installing pytorch and related dependencies
73-
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
74-
``torch::deploy`` requires the latest version of pytorch to run models
75-
successfully, and we recommend fetching the latest *nightlies* for
76-
pytorch and also cuda.
77-
78-
Installing the python dependencies in a ``conda`` environment:
79-
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
80-
81-
.. code:: shell
82-
83-
conda create -n newenv
84-
conda activate newenv
85-
86-
conda install python=3.8 # or 3.8/3.10
87-
conda install -c conda-forge libpython-static=3.8 # or 3.8/3.10
88-
89-
# install your desired flavor of pytorch from https://pytorch.org/get-started/locally/
90-
conda install pytorch torchvision torchaudio cpuonly -c pytorch-nightly
91-
92-
Installing the python dependencies in a ``pyenv`` / ``virtualenv`` setup
93-
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
94-
95-
.. code:: shell
96-
97-
# feel free to replace 3.8.6 with any python version > 3.7.0
98-
export CFLAGS="-fPIC -g"
99-
~/.pyenv/bin/pyenv install --force 3.8.6
100-
virtualenv -p ~/.pyenv/versions/3.8.6/bin/python3 ~/venvs/multipy
101-
source ~/venvs/multipy/bin/activate
102-
pip install -r dev-requirements.txt
103-
104-
# install your desired flavor of pytorch from https://pytorch.org/get-started/locally/
105-
pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
106-
107-
Running ``pip install``
108-
~~~~~~~~~~~~~~~~~~~~~~~
109-
110-
Once all the dependencies are successfully installed,
111-
including a ``-fPIC`` enabled build of python and the latest nightly of pytorch, we
112-
can run the following, in either ``conda`` or ``virtualenv``, to install
113-
both the python modules and the runtime/interpreter libraries:
114-
115-
.. code:: shell
116-
117-
# from base torch::deploy directory
118-
pip install -e .
119-
# alternatively one could run
120-
python setup.py develop
121-
122-
The C++ binaries should be available in ``/opt/dist``.
123-
124-
Alternatively, one can install only the python modules without invoking
125-
``cmake`` as follows:
126-
127-
.. code:: shell
128-
129-
# from base multipy directory
130-
pip install -e . --install-option="--cmakeoff"
131-
132-
.. warning::
133-
As of 10/11/2022 the linking of prebuilt static ``-fPIC``
134-
versions of python downloaded from ``conda-forge`` can be problematic
135-
on certain systems (for example Centos 8), with linker errors like
136-
``libpython_multipy.a: error adding symbols: File format not recognized``.
137-
This seems to be an issue with ``binutils``, and `these steps
138-
<https://wiki.gentoo.org/wiki/Project:Toolchain/Binutils_2.32_upgrade_notes/elfutils_0.175:_unable_to_initialize_decompress_status_for_section_.debug_info>`__
139-
can help. Alternatively, the user can go with the
140-
``virtualenv``/``pyenv`` flow above.
141-
142-
Running ``torch::deploy`` build steps from source
143-
-------------------------------------------------
144-
145-
Both ``docker`` and ``pip install`` options above are wrappers around
146-
the cmake build of `torch::deploy`. If the user wishes to run the
147-
build steps manually instead, as before the dependencies would have to
148-
be installed in the user’s (isolated) environment of choice first. After
149-
that the following steps can be executed:
150-
151-
Building
152-
~~~~~~~~
153-
154-
.. code:: bash
155-
156-
# checkout repo
157-
git checkout https://github.com/pytorch/multipy.git
158-
git submodule sync && git submodule update --init --recursive
159-
160-
cd multipy
161-
# install python parts of `torch::deploy` in multipy/multipy/utils
162-
pip install -e . --install-option="--cmakeoff"
163-
164-
cd multipy/runtime
165-
166-
# build runtime
167-
mkdir build
168-
cd build
169-
# use cmake -DABI_EQUALS_1=ON .. instead if you want ABI=1
170-
cmake ..
171-
cmake --build . --config Release
172-
173-
Running unit tests for ``torch::deploy``
174-
----------------------------------------
175-
176-
We first need to generate the neccessary examples. First make sure your
177-
python enviroment has `torch <https://pytorch.org>`__. Afterwards, once
178-
``torch::deploy`` is built, run the following (executed automatically
179-
for ``docker`` and ``pip`` above):
180-
181-
.. code:: bash
182-
183-
cd multipy/multipy/runtime
184-
python example/generate_examples.py
185-
cd build
186-
./test_deploy
1+
.. literalinclude:: ../../../README.md
2+
:language: markdown
3+
:lines: 22-176
