Improvements #4

Open · wants to merge 5 commits into master
4 changes: 4 additions & 0 deletions .profile-example
@@ -10,3 +10,7 @@ export workshopNamespace=workshop
# export sessionSecret=cloudnative1337
# export clusterName=workshop
# export gitrepo=https://github.com/ContainerSolutions/timber.git

## Required for gitter self-service portal (get them here: https://developer.gitter.im/apps/new)
# export GITTER_OAUTH_KEY=xx
# export GITTER_OAUTH_SECRET=xxx
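
To use these, copy the example over and fill in real values. A minimal sketch (the key/secret values are placeholders; register an app at https://developer.gitter.im/apps/new to get real ones):

```bash
cp .profile-example .profile
# then uncomment and fill in:
export GITTER_OAUTH_KEY=xxxxxxx
export GITTER_OAUTH_SECRET=yyyyyyy
```

workshop-functions.sh sources `.profile` automatically on load (see its last lines), so the variables are picked up the next time the functions are used.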
18 changes: 10 additions & 8 deletions infra-setup.md
@@ -5,6 +5,13 @@ preinstalled, and authenticated against the CS account.
Just use this URL: [CloudShell](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/lalyos/k8s-workshop&tutorial=infra-setup.md)

## Changelog 2020-02-17

- Added code-server, exposed at ide.userXX.${domain}
- Migrated the gotty shell to shell.userXX.${domain}
- Added a `setup-gitter` function
- Extended the ingress proxy timeout from 60s to 3600s for long-lived connections (should fix connections dropping while using ingress); see the annotation excerpt below
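
For reference, the timeout bump is applied through nginx ingress annotations in the `depl` function of workshop-functions.sh (excerpted from the diff below); without them, nginx drops idle websocket connections after its default 60s:

```
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
```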

## ChangeLog 2019-10-25

- cluster creation is moved to a function `start-cluster`
@@ -16,7 +23,7 @@ Just use this url: [CloudShell](https://console.cloud.google.com/cloudshell/open
- defPoolSize (3)
- preemPoolSize (3)
- zone (europe-west3-b)
- istio and http lb is switched of by default (speedup start) - see: 403bc36d8c25f6173e04b8fca0d1a0c5a96c1601
- istio and http lb is switched off by default (speedup start) - see: 403bc36d8c25f6173e04b8fca0d1a0c5a96c1601

## Configure Project

@@ -48,7 +55,7 @@ source workshop-functions.sh
```

Now you can create the GKE cluster. All config will be printed,
and you have a chance to review and cancel.
and you have a chance to review and cancel. This will also automatically import the cluster credentials into your kubeconfig.
```
start-cluster
```
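
If you want to double-check that the credentials landed in your kubeconfig, standard kubectl commands will confirm it:

```
kubectl config current-context
kubectl get nodes
```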
@@ -58,11 +65,6 @@ checking the GKE cluster
gcloud container clusters list
```

get kubectl credentials
```
gcloud container clusters get-credentials workshop --zone=${zone}
```

## Initial setup

At the beginning you have to create some cluster roles:
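
The concrete commands are collapsed in this diff view. Judging from the `namespace` helper in workshop-functions.sh below, the bindings involved look roughly like this sketch (`user1` stands in for a real session name):

```
kubectl create clusterrolebinding crb-ssh-user1 --clusterrole=sshreader --serviceaccount=workshop:sa-user1
```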
@@ -115,7 +117,7 @@ dev user0
```
Please note, the first couple of sessions may take more time, as the Docker image has to be pulled on each node.

To create more user sssions use the following line
To create more user sessions use the following line
```
for u in user{2..15}; do dev $u; done
```
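
The sessions are ordinary deployments, so you can watch them come up (assuming the default `workshop` namespace from .profile-example):

```
kubectl get po -n workshop -w
```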
23 changes: 5 additions & 18 deletions self-service.md
@@ -6,30 +6,17 @@ Since we use basic auth now, the URLs are simple (like userX.domain.com).
Of course now you have to distribute the credentials, but hey you can use
the same password for everybody ;)

## Self Service portal - depricated
## Self Service portal v2 (WIP)

After creating the user sessions, it's hard to distribute/assign the session URLs.

There is a small Gitter-authentication-based web app where participants can get an unused
session assigned to them.
More details, and the process to get Gitter credentials, are described at: https://github.com/lalyos/gitter-scripter

Run this line to set up Gitter; don't forget to update .profile with the credentials first
```bash
setup-gitter
```
export GITTER_OAUTH_KEY=xxxxxxx
export GITTER_OAUTH_SECRET=yyyyyyy
kubectl create secret generic gitter \
--from-literal=GITTER_OAUTH_KEY=$GITTER_OAUTH_KEY \
--from-literal=GITTER_OAUTH_SECRET=$GITTER_OAUTH_SECRET
# todo automate setting of gitter room:

export workshopNamespace=workshop
export domain=k8z.eu
curl -sL https://raw.githubusercontent.com/lalyos/gitter-scripter/master/gitter-template.yaml \
| envsubst \
| kubectl apply -f -

export gitterRoom=lalyos/earthport
kubectl patch deployments gitter --patch '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"gitter"}],"containers":[{"$setElementOrder/env":[{"name":"GITTER_ROOM_NAME"},{"name":"DOMAIN"}],"env":[{"name":"GITTER_ROOM_NAME","value":"'${gitterRoom}'"}],"name":"gitter"}]}}}}'
```

The users can self service at: http://session.k8z.eu
The users can self-serve at: http://session.${domain}
116 changes: 95 additions & 21 deletions workshop-functions.sh
100644 → 100755
@@ -71,7 +71,7 @@ metadata:
subjects:
- kind: ServiceAccount
name: default
namespace: ${mamespace}
namespace: ${namespace}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
@@ -105,7 +105,7 @@ namespace() {
kubectl label clusterrolebinding crb-cc-${namespace} user=${namespace}

kubectl create clusterrolebinding crb-ssh-${namespace} --clusterrole=sshreader --serviceaccount=${workshopNamespace}:sa-${namespace}
kubectl label clusterrolebinding crb-ssh-${namespace} user=${namespace}
kubectl label clusterrolebinding crb-ssh-${namespace} user=${namespace}
}

enable-namespaces() {
@@ -130,10 +130,25 @@ depl() {
: ${namespace:? required}
: ${gitrepo:? required}
: ${sessionSecret:=cloudnative1337}

local name=${namespace}

cat <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
user: "${namespace}"
run: ${name}
name: ${name}-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512M
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
@@ -153,17 +168,36 @@ spec:
spec:
serviceAccountName: sa-${name}
volumes:
- name: storage
persistentVolumeClaim:
claimName: ${name}-pvc
- name: gitrepo
gitRepo:
repository: ${gitrepo}
directory: .
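# note: gitRepo volumes are deprecated upstream; the init container below
# copies the clone onto the PVC so that edits survive pod restarts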
initContainers:
- name: copy-repo-to-storage
image: busybox:1.28
command: ['sh', '-c', 'cp -rf /tmp/repo/. /tmp/storage/ && chown -R 1000:1000 /tmp/storage']
volumeMounts:
- mountPath: /tmp/repo
name: gitrepo
- mountPath: /tmp/storage
name: storage
containers:
- image: codercom/code-server:v2
args:
- "--auth=none"
- "--port=8181"
name: vscode
volumeMounts:
- mountPath: /home/coder/workshop
name: storage
- args:
- gotty
- "-w"
- "--credential=user:${sessionSecret}"
- "--title-format=${name}"
#- tmux
- bash
env:
- name: NS
@@ -184,7 +218,7 @@ spec:
name: dev
volumeMounts:
- mountPath: /root/workshop
name: gitrepo
name: storage
---
apiVersion: v1
kind: Service
@@ -195,9 +229,14 @@ metadata:
name: ${name}
spec:
ports:
- port: 8080
- name: shell
port: 8080
protocol: TCP
targetPort: 8080
- name: ide
port: 8181
protocol: TCP
targetPort: 8181
selector:
run: ${name}
type: NodePort
@@ -206,13 +245,21 @@ apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.org/websocket-services: ${name}
labels:
user: "${namespace}"
name: ${name}
spec:
rules:
- host: ${name}.${domain}
- host: ide.${name}.${domain}
http:
paths:
- backend:
serviceName: ${name}
servicePort: 8181
- host: shell.${name}.${domain}
http:
paths:
- backend:
@@ -262,18 +309,21 @@ get-url() {
declare deployment=${1}

: ${deployment:? required}
pod=$(kubectl get po -lrun=${deployment} -o jsonpath='{.items[0].metadata.name}')

sessionUrl=http://${deployment}.${domain}/
kubectl annotate deployments ${deployment} --overwrite sessionurl="${sessionUrl}"
sessionurl=$(kubectl get deployments. ${deployment} -o jsonpath='{.metadata.annotations.sessionurl}')
newSessionUrl="${sessionurl%/*/}"
kubectl annotate deployments ${deployment} --overwrite sessionurl="${newSessionUrl}"

externalip=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type == "ExternalIP")].address}')
nodePort=$(kubectl get svc ${deployment} -o jsonpath="{.spec.ports[0].nodePort}")
sessionUrlNodePort="http://${externalip}:${nodePort}${rndPath}"
kubectl annotate deployments ${deployment} --overwrite sessionurlnp=${sessionUrlNodePort}

echo "open ${sessionUrlNodePort}"
echo "open ${sessionUrl}"

nodePortShell=$(kubectl get svc ${deployment} -o jsonpath="{.spec.ports[0].nodePort}")
nodePortIde=$(kubectl get svc ${deployment} -o jsonpath="{.spec.ports[1].nodePort}")
sessionUrlNodePort="http://${externalip}:${nodePortShell}"
sessionUrlNodePortIde="http://${externalip}:${nodePortIde}"
kubectl annotate deployments ${deployment} --overwrite sessionurlnp=${sessionUrlNodePort}

echo "open shell ${sessionUrlNodePort}"
echo "open ide ${sessionUrlNodePortIde}"
}

switchNs() {
@@ -372,7 +422,7 @@ clean-user() {
ns=$1;
: ${ns:?required};

kubectl delete all,ns,sa,clusterrolebinding,ing -l "user in (${ns},${ns}play)"
kubectl delete all,ns,sa,clusterrolebinding,ing,pv,pvc -l "user in (${ns},${ns}play)"
}

list-sessions() {
@@ -412,7 +462,7 @@ EOF
ingressip=$(kubectl get svc -n ingress-nginx ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

echo "---> checking DNS A record (*.${domain}) points to: $ingressip ..."
if [[ $(dig +short "*.${domain}") == $ingressip ]] ; then
if [[ $(dig +short "*.${domain}") == $ingressip ]] ; then
echo "DNS setting are ok"
else
echo "---> set external dns A record (*.${domain}) to: $ingressip"
@@ -455,10 +505,11 @@ start-cluster() {
: ${defPoolSize:=3}
: ${preemPoolSize:=3}

project_id="container-solutions-workshops"
confirm-config

gcloud beta container \
--project "container-solutions-workshops" \
--project "${project_id}" \
clusters create "${clusterName}" \
--zone "${zone}" \
--username "admin" \
@@ -477,7 +528,7 @@ start-cluster() {
--enable-autoupgrade \
--enable-autorepair \
&& gcloud beta container \
--project "container-solutions-workshops" \
--project "${project_id}" \
node-pools create "pool-1" \
--cluster "${clusterName}" \
--zone "${zone}" \
@@ -491,7 +542,29 @@ start-cluster() {
--preemptible \
--num-nodes "${preemPoolSize}" \
--no-enable-autoupgrade \
--enable-autorepair
--enable-autorepair \
&& gcloud container clusters get-credentials "${clusterName}" --project "${project_id}" --zone "${zone}"

}

setup-gitter() {

: ${workshopNamespace:? required}
: ${gitterRoom:? required}
: ${GITTER_OAUTH_KEY:? required}
: ${GITTER_OAUTH_SECRET:? required}

echo "Create secrets"
kubectl create secret generic gitter \
--from-literal=GITTER_OAUTH_KEY=$GITTER_OAUTH_KEY \
--from-literal=GITTER_OAUTH_SECRET=$GITTER_OAUTH_SECRET

curl -sL https://raw.githubusercontent.com/lalyos/gitter-scripter/master/gitter-template.yaml \
| envsubst \
| kubectl apply -f -

kubectl patch deployments gitter --patch '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"gitter"}],"containers":[{"$setElementOrder/env":[{"name":"GITTER_ROOM_NAME"},{"name":"DOMAIN"}],"env":[{"name":"GITTER_ROOM_NAME","value":"'${gitterRoom}'"}],"name":"gitter"}]}}}}'

}
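
# Example usage (hypothetical values; the room name is taken from the old docs):
#   export workshopNamespace=workshop
#   export gitterRoom=lalyos/earthport
#   export GITTER_OAUTH_KEY=xxxxxxx
#   export GITTER_OAUTH_SECRET=yyyyyyy
#   setup-gitter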

[[ -e .profile ]] && source .profile || true
@@ -501,3 +574,4 @@ main() {
init
init-sshfront
}