Update docs for VPC integration #434

Open · wants to merge 4 commits into base: main

72 changes: 65 additions & 7 deletions docs/book/src/clustercloudstack/configuration.md
The cluster configuration file can be generated by using `clusterctl generate cluster`.
This command actually uses [a template file][template-file] and replaces the values surrounded by `${}` with environment variables.
You have to set all required environment variables in advance. The following sections explain some more details about what should be configured.

```bash
clusterctl generate cluster capi-quickstart \
--kubernetes-version v1.21.3 \
> capi-quickstart.yaml
```
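Before running the command above, the required variables can be exported; the names below are those used by the templates in this repository, and the values are placeholders to replace with names and IDs from your own CloudStack environment:

```shell
# Placeholder values -- substitute names/IDs from your CloudStack environment.
export CLOUDSTACK_ZONE_NAME=zone1
export CLOUDSTACK_NETWORK_NAME=guest-network
export CLUSTER_ENDPOINT_IP=10.0.58.19
export CLUSTER_ENDPOINT_PORT=6443
export CLOUDSTACK_TEMPLATE_NAME=kube-v1.21.3
export CLOUDSTACK_CONTROL_PLANE_MACHINE_OFFERING="Large Instance"
export CLOUDSTACK_WORKER_MACHINE_OFFERING="Medium Instance"
```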

You can also use [template files][template-file] by manually replacing values in copies of the template file.


> **Note**
>
> Additional template files are provided, offering capabilities beyond the default template. They can be selected via the clusterctl *--flavor* parameter and often require additional environment variables.
> The following flavors are supported as of now:
> - *managed-ssh*
> - *ssh-material*
> - *with-disk-offering*
> - *with-existing-vpc-network*
> - *with-kube-vip*
>
> To check the available variables for a flavor, execute the following command:
> ```bash
> clusterctl generate cluster capi-quickstart --flavor <flavor> --list-variables
> ```
> See clusterctl documentation for further details about *flavors*.

To fetch the configuration parameters via the terminal, install [cmk][cmk-download] and [jq][jq-download].

cmk list zones listall=true | jq '.zone[] | {name, id}'
#### Network

The network must be declared as an environment variable `CLOUDSTACK_NETWORK_NAME` and is a mandatory parameter.
As of now, only isolated and shared networks are supported. The isolated network can also be part of a VPC.

If the specified network does not exist, a new isolated network will be created. The newly created network will have a default egress firewall policy that allows all TCP, UDP and ICMP traffic from the cluster to the outside world. If the network is part of a VPC, the VPC will also be created if it does not exist.

If the offerings are not specified, the default offerings will be used.

The list of networks for the specific zone can be fetched using the cmk CLI as follows:
```bash
cmk list networks listall=true zoneid=<zoneid> | jq '.network[] | {name, id, type}'
```

The list of VPCs for the specific zone can be fetched using the cmk CLI as follows:
```bash
cmk list vpcs listall=true zoneid=<zoneid> | jq '.vpc[] | {name, id}'
```
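To pick out a single network's `id` by name, the listing can be piped through a `jq` `select` filter. The JSON below is a hand-made sample mimicking the shape of `cmk list networks` output (the names and ids are made up):

```shell
# Hand-made sample resembling `cmk list networks` output.
sample='{"network":[{"name":"net-a","id":"1f2d","type":"Isolated"},{"name":"net-b","id":"9c3e","type":"Shared"}]}'
# Select the network named "net-a" and print only its id.
echo "$sample" | jq -r '.network[] | select(.name == "net-a") | .id'   # prints 1f2d
```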

The user can configure the network offering and VPC offering for the isolated network as follows:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta3
kind: CloudStackCluster
metadata:
  name: capc-cluster
  namespace: default
spec:
  controlPlaneEndpoint:
    host: 10.0.58.19
    port: 6443
  failureDomains:
  - acsEndpoint:
      name: secret1
      namespace: default
    name: fd1
    zone:
      name: cloudstack-zone
      network:
        name: cloudstack-network
        offering: custom-network-offering
        gateway: 10.0.0.1
        netmask: 255.255.255.0
        vpc:
          name: cloudstack-vpc
          offering: custom-vpc-offering
          cidr: 10.0.0.0/16
```

If the network already exists, the `offering`, `gateway` and `netmask` fields will be ignored.
Similarly, if the VPC already exists, the `offering` and `cidr` fields will be ignored.
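To see which offerings are available to reference here, the standard CloudStack list APIs can be queried with cmk; assuming the usual CloudStack response keys `networkoffering` and `vpcoffering`, a sketch looks like:

```shell
# List network offerings and VPC offerings (requires a configured cmk profile).
cmk list networkofferings listall=true | jq '.networkoffering[] | {name, id}'
cmk list vpcofferings listall=true | jq '.vpcoffering[] | {name, id}'
```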

If you want to use an existing network inside a VPC, you can specify the flavor as `with-existing-vpc-network` while
generating the cluster configuration file and set the `CLOUDSTACK_VPC_NAME` environment variable to the name of the VPC.
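Putting it together, a sketch using the quick-start cluster name and Kubernetes version from earlier, with a hypothetical VPC `my-vpc` and an existing tier network inside it:

```shell
# VPC and network names below are placeholders for existing CloudStack resources.
export CLOUDSTACK_VPC_NAME=my-vpc
export CLOUDSTACK_NETWORK_NAME=my-vpc-tier
clusterctl generate cluster capi-quickstart \
--kubernetes-version v1.21.3 \
--flavor with-existing-vpc-network \
> capi-quickstart.yaml
```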

#### CloudStack Endpoint Credentials Secret (*optional for provided templates when used with provided getting-started process*)

A reference to a Kubernetes Secret containing a YAML object containing credentials for accessing a particular CloudStack
The project name can be specified by adding the `CloudStackCluster.spec.project` field.
The list of projects can be fetched using the cmk CLI as follows:
```bash
cmk list projects listall=true | jq '.project[] | {name, id}'
```

## Cluster Level Configurations

17 changes: 12 additions & 5 deletions docs/book/src/development/releasing.md
- [gcloud][gcloud-install]

2. Set up and log in to gcloud by running `gcloud init`
> **Note**
>
> In order to publish any artifact, you need to be a member of the [k8s-infra-staging-capi-cloudstack][k8s-infra-staging-capi-cloudstack] group

## Creating only the docker container

If you would like to build and upload only the Docker container rather than creating a full release, run the following command:
```bash
REGISTRY=<your custom registry> IMAGE_NAME=<your custom image name> TAG=<your custom tag> make docker-build
```
The image name defaults to `gcr.io/k8s-staging-capi-cloudstack/capi-cloudstack-controller:dev`.


## Creating a new release

Run the following command to create the new release artifacts as well as publish them to the upstream gcr.io repository:
```bash
RELEASE_TAG=<your custom tag> make release-staging
```

Create the necessary release in GitHub along with the following artifacts (found in the `out` folder after running the command):
- infrastructure-components.yaml
- cluster-template*.yaml

> **Note**
>
> - The `RELEASE_TAG` should be in the format of `v<major>.<minor>.<patch>`. For example, `v0.6.0`
> - For RC releases, the `RELEASE_TAG` should be in the format of `v<major>.<minor>.<patch>-rc<rc-number>`. For example, `v0.6.0-rc1`
> - Before creating the release, ensure that the `metadata.yaml` file is updated with the latest release information.


[docker-install]: https://www.docker.com/
[go]: https://golang.org/doc/install
[gcloud-install]: https://cloud.google.com/sdk/docs/install
[k8s-infra-staging-capi-cloudstack]: https://github.com/kubernetes/k8s.io/blob/main/groups/sig-cluster-lifecycle/groups.yaml#L106
128 changes: 128 additions & 0 deletions templates/cluster-template-with-existing-vpc-network.yaml
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: ${CLUSTER_NAME}
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    serviceDomain: "cluster.local"
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta3
    kind: CloudStackCluster
    name: ${CLUSTER_NAME}
  controlPlaneRef:
    kind: KubeadmControlPlane
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    name: ${CLUSTER_NAME}-control-plane
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta3
kind: CloudStackCluster
metadata:
  name: ${CLUSTER_NAME}
spec:
  syncWithACS: ${CLOUDSTACK_SYNC_WITH_ACS=false}
  controlPlaneEndpoint:
    host: ${CLUSTER_ENDPOINT_IP}
    port: ${CLUSTER_ENDPOINT_PORT=6443}
  failureDomains:
    - name: ${CLOUDSTACK_FD1_NAME=failure-domain-1}
      acsEndpoint:
        name: ${CLOUDSTACK_FD1_SECRET_NAME=cloudstack-credentials}
        namespace: ${CLOUDSTACK_FD1_SECRET_NAMESPACE=default}
      zone:
        name: ${CLOUDSTACK_ZONE_NAME}
        network:
          name: ${CLOUDSTACK_NETWORK_NAME}
          vpc:
            name: ${CLOUDSTACK_VPC_NAME}
---
kind: KubeadmControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "${CLUSTER_NAME}-control-plane"
spec:
  kubeadmConfigSpec:
    initConfiguration:
      nodeRegistration:
        name: '{{ local_hostname }}'
        kubeletExtraArgs:
          provider-id: "cloudstack:///'{{ ds.meta_data.instance_id }}'"
    joinConfiguration:
      nodeRegistration:
        name: '{{ local_hostname }}'
        kubeletExtraArgs:
          provider-id: "cloudstack:///'{{ ds.meta_data.instance_id }}'"
    preKubeadmCommands:
      - swapoff -a
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta3
      kind: CloudStackMachineTemplate
      name: "${CLUSTER_NAME}-control-plane"
  replicas: ${CONTROL_PLANE_MACHINE_COUNT}
  version: ${KUBERNETES_VERSION}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta3
kind: CloudStackMachineTemplate
metadata:
  name: ${CLUSTER_NAME}-control-plane
spec:
  template:
    spec:
      offering:
        name: ${CLOUDSTACK_CONTROL_PLANE_MACHINE_OFFERING}
      template:
        name: ${CLOUDSTACK_TEMPLATE_NAME}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: "${CLUSTER_NAME}-md-0"
spec:
  clusterName: "${CLUSTER_NAME}"
  replicas: ${WORKER_MACHINE_COUNT}
  selector:
    matchLabels: null
  template:
    spec:
      clusterName: "${CLUSTER_NAME}"
      version: "${KUBERNETES_VERSION}"
      bootstrap:
        configRef:
          name: "${CLUSTER_NAME}-md-0"
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
      infrastructureRef:
        name: "${CLUSTER_NAME}-md-0"
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta3
        kind: CloudStackMachineTemplate
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta3
kind: CloudStackMachineTemplate
metadata:
  name: ${CLUSTER_NAME}-md-0
spec:
  template:
    spec:
      offering:
        name: ${CLOUDSTACK_WORKER_MACHINE_OFFERING}
      template:
        name: ${CLOUDSTACK_TEMPLATE_NAME}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: ${CLUSTER_NAME}-md-0
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          name: '{{ local_hostname }}'
          kubeletExtraArgs:
            provider-id: "cloudstack:///'{{ ds.meta_data.instance_id }}'"
      preKubeadmCommands:
        - swapoff -a