# Update Compute cloud deployment docs #167


**Merged** · 8 commits · Sep 7, 2022
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Configure storage

By default, Compute stores compiled functions and function source code in local storage in your Kubernetes cluster. For greater scalability, Compute can be configured to store artifacts in cloud-based object storage such as Amazon S3 or Google Cloud Storage.

To configure a storage bucket, provide the `SCC_STORAGE_PATH` environment variable to both the control plane and builder, e.g. `s3://my-bucket` for Amazon S3 or `gs://my-bucket` for Google Cloud Storage. For Kubernetes deployments, this is done in `.suborbital/scc-controlplane-deployment.yaml` under the `controlplane` and `builder` containers sections, and for local docker-compose deployments, this is done in `docker-compose.yaml` under the `scc-control-plane` and `scc-builder` services.
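For the docker-compose case, a minimal sketch of the relevant sections might look like the following (service names are from the paragraph above; the image tags and bucket name are placeholders you should adjust to your deployment):

```yaml
services:
  scc-control-plane:
    image: suborbital/scc-control-plane:v0.3.0
    environment:
      # Point both services at the same bucket
      - SCC_STORAGE_PATH=s3://my-bucket

  scc-builder:
    image: suborbital/scc-builder:v0.3.0
    environment:
      - SCC_STORAGE_PATH=s3://my-bucket
```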

## Authentication

Bucket authentication varies between cloud providers.

<Tabs groupId='cloud-provider'>

<TabItem value="S3" label="Amazon S3">

### Amazon S3

You will need to supply the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN` and `AWS_REGION` environment variables to the API for both the control plane and the builder. See the [AWS authentication documentation](https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_credentials_environment.html) for details. It is also possible to store the configuration as a Kubernetes secret, similar to the Google Cloud Storage configuration.
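As a sketch, the corresponding entries in each container's `env` section might look like this (static credentials are assumed; every value is a placeholder):

```yaml
env:
  # AWS credentials read by the storage client
  - name: AWS_ACCESS_KEY_ID
    value: <your access key id>
  - name: AWS_SECRET_ACCESS_KEY
    value: <your secret access key>
  - name: AWS_SESSION_TOKEN
    value: <your session token, if using temporary credentials>
  - name: AWS_REGION
    value: us-east-1
```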
**Contributor** commented:

We should specify that they need to be added to the yaml file, and also show them in the example. I also wonder if we want to keep this bit about the Kubernetes secret. Maybe we have it link to the other tab?

**Contributor (Author)** replied:

We do specify the different yaml files in the next paragraph down:

> For Kubernetes deployments, this is done in `.suborbital/scc-controlplane-deployment.yaml` under the `controlplane` and `builder` containers sections, and for local docker-compose deployments, this is done in `docker-compose.yaml` under the `scc-control-plane` and `scc-builder` services.

> and also show them in the example

Do you mean show the file name in the yaml codeblock? I considered that but decided against it, since there are two candidates for the file name depending on whether a deployment is on Kubernetes or local.

> I also wonder if we want to keep this bit about the Kubernetes secret. Maybe we have it link to the other tab?

I'll add a link to the GCS tab!


To configure a storage bucket, provide the `SCC_STORAGE_PATH` environment variable to both the control plane and builder, e.g. `s3://my-bucket` for Amazon S3 or `gs://my-bucket` for Google Cloud Storage. For Kubernetes deployments, this is done in `.suborbital/scc-controlplane-deployment.yaml` under the `controlplane` and `builder` containers sections, and for local docker-compose deployments, this is done in `docker-compose.yaml` under the `scc-control-plane` and `scc-builder` services.


```yaml
containers:
  - name: controlplane
    image: suborbital/scc-control-plane:v0.3.0
    command: ["controlplane"]

    ports:
      - containerPort: 8081

    env:
      - name: SCC_HTTP_PORT
        value: "8081"

      - name: SCC_LOG_LEVEL
        value: "info"

      - name: SCC_HEADLESS
        value: "true"

      - name: SCC_ENV_TOKEN
        value: <your environment token>

      - name: SCC_STORAGE_PATH
        value: s3://your-s3-storage-bucket

  - name: builder
    image: suborbital/scc-builder:v0.3.0
    command: ["builder"]

    env:
      - name: SCC_DOMAIN
        value: "domain.example.com"

      - name: SCC_TLS_PORT
        value: "8443"

      - name: SCC_LOG_LEVEL
        value: "info"

      - name: SCC_CONTROL_PLANE
        value: "scc-controlplane-service:8081"

      - name: SCC_STORAGE_PATH
        value: s3://your-s3-storage-bucket
```
</TabItem>

<TabItem value="GCS" label="Google Cloud Storage">

### Google Cloud Storage

GCS expects to read a service account credentials file, so those credentials must be mounted. See the [GCP authentication documentation](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) for more details.

#### Kubernetes deployment

A few things to note:
- The addition of `GOOGLE_APPLICATION_CREDENTIALS` to the environment of both the builder and control plane containers
- The `gcs-service-account-credentials-volume` volume mount to the `volumeMounts` sections of both containers
- The declaration of the volume itself in the `volumes` section
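Putting those pieces together, a hedged sketch of the relevant fragments follows. The volume name comes from the notes above, but the secret name and mount path are assumptions; match them to your own deployment:

```yaml
env:
  # Path where the mounted credentials file will appear in the container
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /etc/gcs/credentials.json

volumeMounts:
  - name: gcs-service-account-credentials-volume
    mountPath: /etc/gcs
    readOnly: true

volumes:
  # Assumes a secret holding the service account key file was created beforehand
  - name: gcs-service-account-credentials-volume
    secret:
      secretName: gcs-service-account-credentials
```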

</TabItem>

</Tabs>
---
pagination_next: null
---

# Install Compute in your cloud environment

To install Compute in the cloud, you'll use the `subo` tool to automatically install Suborbital Compute into a Kubernetes cluster. You need to ensure you have some **pre-requisites** ready:

1. Deploy a Kubernetes cluster into your cloud provider of choice (if you have a pre-existing one, that works too!).
* [Ensure there is a storage class available in Kubernetes](https://kubernetes.io/docs/concepts/storage/storage-classes/). Some cloud providers such as AWS do not have a default storage class. See this great [GitLab guide on how to set up a storage class](https://docs.gitlab.com/charts/installation/storage.html#configuring-cluster-storage).
`subo` creates a `suborbital` Kubernetes namespace and installs the `KEDA` autoscaler. Don't worry about existing applications installed in the cluster; this won't affect them!
:::
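If your cluster has no default storage class (as noted above for providers such as AWS), one way to provide one is a manifest along these lines. This is a sketch that assumes the EBS CSI driver is installed; the provisioner and parameters vary by cloud provider:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
  annotations:
    # Marks this class as the cluster-wide default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
```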

Once you have the pre-requisites in place, navigate to the `suborbital` directory you created when you [generated your token](../get-started#generate-your-token.md) and use `subo` to install:

```bash
subo compute deploy core
```
This will generate some Kubernetes manifest files, which will now live in the `.

Open up `.suborbital/scc-controlplane-deployment.yaml` in your editor of choice, and make the following changes.

We are disabling the built-in TLS certificate provisioning, as `ngrok` already takes care of this for us.

Under the Builder Container:

`website/docs/compute/get-started.md` (1 addition, 1 deletion)

- The function provided is complete, so we can just click "Build"
- In the "TEST" field, add some text. Here, we've added "new Suborbital user"
- Click "Run test"
- Toward the bottom of the editor, click "TEST RESULTS". There's our
greeting!