Home
(Temporary - might move this page elsewhere, but want these instructions to be public ~Lyuma)
Describe the project you are working on: CI/CD, Backend hosting.
Describe the problem or limitation you are having in your project: We want a way to manage backend resources on one or more physical machines.
Describe how this feature / enhancement will help you overcome this problem or limitation: We are using k3s, a lightweight Kubernetes implementation, to manage backend resources.
Show mock-up screenshots/video or a flow diagram explaining how your proposal will work:
Describe implementation detail for your proposal (in code), if possible:
Install Fedora 32 with a user account set as administrator.
Add authorized keys into ~/.ssh/authorized_keys
OPTIONAL: customize the SSH port by setting
Port 2345
in /etc/ssh/sshd_config, then allow the new port through SELinux and the firewall:
sudo semanage port -a -t ssh_port_t -p tcp 2345
sudo firewall-cmd --add-port=2345/tcp
REQUIRED: set
PasswordAuthentication no
in /etc/ssh/sshd_config.
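Before restarting sshd, a quick config syntax check avoids locking yourself out:
sudo sshd -t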
OPTIONAL: Configure default DNS servers. (The file locations below are our reading of the intent; adjust for your setup.) In /etc/NetworkManager/NetworkManager.conf, stop NetworkManager from managing /etc/resolv.conf:
[main]
dns=none
In the connection profile under /etc/NetworkManager/system-connections/:
[ipv4]
method=auto
dns=8.8.8.8;4.2.2.2;
ignore-auto-dns=true
Then write /etc/resolv.conf by hand:
search localhost
nameserver 1.0.0.1
nameserver 1.1.1.1
sudo service sshd restart
sudo firewall-cmd --add-masquerade --permanent
sudo systemctl restart firewalld
sudo yum install redhat-lsb-core container-selinux selinux-policy-base podman
sudo rpm -i https://github.com/rancher/k3s-selinux/releases/download/v0.1.1-rc1/k3s-selinux-0.1.1-rc1.el7.noarch.rpm
# Validate that ssh logins work, then run:
sudo service NetworkManager stop; sudo service NetworkManager start
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
Now, reboot the system.
First, create a new LVM logical volume named kubepvc, formatted as XFS, 100 GB in size, mounted at /kube.
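A minimal sketch of creating that volume, assuming your volume group is named fedora (check with sudo vgs):
sudo lvcreate -L 100G -n kubepvc fedora
sudo mkfs.xfs /dev/fedora/kubepvc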
Add the following to /etc/fstab:
# If created manually, run `sudo blkid` and add:
# UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee /kube auto defaults 0 0
/kube/rancher /var/lib/rancher none bind 0 0
Now, setup the partition:
sudo mkdir -p /kube
sudo mount /kube
sudo mkdir -p /kube/pvc /kube/rancher /var/lib/rancher /root/.kube
sudo chcon -R -t container_file_t /kube/pvc
sudo mount /var/lib/rancher
curl -sfL https://raw.githubusercontent.com/rancher/k3s/master/install.sh | sh -s - server -o /root/.kube/config --default-local-storage-path /kube/pvc --no-deploy=servicelb --disable=traefik --disable=servicelb
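A quick sanity check that k3s is up (the kubeconfig was written by -o above):
sudo kubectl get nodes
sudo kubectl get pods --all-namespaces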
Now you will need to obtain the DNS IAM credentials. In this example, we are working with these Route53 hosted domains: longdomain.example, shrtdom.example
Install Helm services:
ACCESSKEY=AKIAABCDEFGHIJKLMNOP
SECRETKEY=XXXXXXXXXXXXXXXXXXXX
sudo helm repo add bitnami https://charts.bitnami.com/bitnami
sudo helm repo add jetstack https://charts.jetstack.io
sudo helm repo update
sudo helm install external-dns --version 3.2.3 --set provider=aws --set aws.zoneType=public --set registry=noop --set aws.credentials.accessKey="$ACCESSKEY" --set domainFilters='{longdomain.example,shrtdom.example}' --set aws.credentials.secretKey="$SECRETKEY" bitnami/external-dns
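If records do not appear in Route53, check the external-dns logs (assuming the chart named its Deployment external-dns after the release):
sudo kubectl logs deployment/external-dns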
sudo helm install nginx stable/nginx-ingress --namespace kube-system --version 1.41.1
sudo kubectl patch svc/nginx-nginx-ingress-controller -n kube-system --patch '{"spec":{"externalTrafficPolicy":"Local"}}'
sudo kubectl patch deployments/nginx-nginx-ingress-controller --patch '{"spec":{"template":{"spec":{"hostNetwork":true}}}}' -n kube-system
sudo kubectl get replicasets -n kube-system
# Find the oldest nginx-nginx-ingress-controller ReplicaSet and delete it, e.g.:
sudo kubectl delete replicaset nginx-nginx-ingress-controller-abcdefg-whatever -n kube-system
sudo kubectl create namespace cert-manager
sudo helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v0.15.1 --set installCRDs=true
sudo kubectl --namespace cert-manager create secret generic prod-route53-credentials-secret --from-literal=secret-access-key="$SECRETKEY"
Save the following as ingress.yaml (the ClusterIssuer and the Ingress go in the same file):
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [email protected]
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
    # Alternative DNS-01 solver via Route53:
    #solvers:
    #- dns01:
    #    route53:
    #      region: us-east-1
    #      accessKeyID: AKIAABCDEFGHIJKLMNOP
    #      secretAccessKeySecretRef:
    #        name: prod-route53-credentials-secret
    #        key: secret-access-key
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: core-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/hsts: "false"
    # nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host ~ ^(?!(ci))) {
        more_clear_headers "Strict-Transport-Security";
      }
      if ($host ~ ^(.*\.?)shrtdom\.([a-z]*)$ ) {
        set $subd $1;
        set $tld $2;
        set $newdomain shrtdom.$tld;
        rewrite ^/(.*)$ https://$subd$newdomain/$1;
      }
spec:
  tls:
  - hosts:
    - shrtdom.example
    - longdomain.example
    - uro.shrtdom.example
    - uro.longdomain.example
    - ci.longdomain.example
    - ci.shrtdom.example
    secretName: radiance-cert-secret1
  rules:
  - host: shrtdom.example
  - host: longdomain.example
  - host: uro.shrtdom.example
  - host: uro.longdomain.example
    http:
      paths:
      - path: /
        backend:
          serviceName: uro
          servicePort: 4000
  - host: hls.shrtdom.example
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-rtmp-service
          servicePort: 80
  - host: ci.shrtdom.example
  - host: ci.longdomain.example
    http:
      paths:
      - path: /
        backend:
          serviceName: gocd-server
          servicePort: 8153
Then:
sudo kubectl apply -f ingress.yaml
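Optionally, watch the certificate being issued (cert-manager's CRDs were installed via installCRDs=true above):
sudo kubectl get certificates,challenges --all-namespaces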
Save the following as metalconfig.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.0.2.0/32 # <--- this is the external HOST IP
sudo kubectl create namespace metallb-system
sudo kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
sudo kubectl apply -f metalconfig.yml
sudo kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
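The MetalLB pods should come up shortly; verify with:
sudo kubectl get pods -n metallb-system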
Save the following as gocd_values.yaml:
agent:
  image:
    # agent.image.repository is the GoCD Agent image name
    repository: "groupsinfra/gocd-agent-centos-8-groups"
    # agent.image.tag is the GoCD Agent image's tag
    tag: v20.7.0-groups-0.5.3
    # agent.image.pullPolicy is the GoCD Agent image's pull policy
    pullPolicy: "IfNotPresent"
  # agent.replicaCount is the number of GoCD agents to run
  replicaCount: 6
  security:
    ssh:
      enabled: true
      secretName: gocd-ssh
server:
  shouldPreconfigure: false
  security:
    ssh:
      enabled: true
      secretName: gocd-ssh
  env:
    extraEnvVars:
    - name: GOCD_PLUGIN_INSTALL_gitlab-oauth-authorization-plugin
      value: https://github.com/gocd-contrib/gitlab-oauth-authorization-plugin/releases/download/v2.0.1-52-exp/gitlab-oauth-authorization-plugin-2.0.1-52.jar
ssh-keygen -t rsa -b 4096 -C "gocd-ssh-key" -f gocd-ssh -P ''
( ssh-keyscan gitlab.com ; ssh-keyscan github.com ) > gocd_known_hosts
sudo kubectl create secret generic gocd-ssh --from-file=id_rsa=gocd-ssh --from-file=id_rsa.pub=gocd-ssh.pub --from-file=known_hosts=gocd_known_hosts
# Chart version 1.30.0 is gocd 20.7.0
sudo helm install -f gocd_values.yaml gocd stable/gocd --version 1.30.0
sudo chcon -R -t container_file_t /kube/pvc
# Installs a trash service on port 80 by default. Let's delete it:
sudo kubectl delete ingress gocd-server
# Instead of using "kubectl scale", scale agents by editing gocd_values.yaml
# and do "sudo helm upgrade -f ...."
sudo helm install -f gocd_values.yaml gocd stable/gocd --version 1.30.0
# Make sure to enable the agents in the web UI, and assign them to Resources and Environments.
Upgrade process (make sure to run sudo kubectl delete ingress gocd-server after every upgrade):
# Disable and Delete all agents in the AGENTS tab of gocd.
Edit gocd_values.yaml and set agent version to latest (e.g. 20.7.0-groups-0.5.3)
sudo helm upgrade -f gocd_values.yaml gocd stable/gocd --version 1.30.0
sudo kubectl delete ingress gocd-server
# Wait for agents to come up, and enable them and assign them as appropriate.
Create DockerHub permissions: Create an account if you do not have one. Visit https://hub.docker.com/settings/security and create an Access Token. Copy the token.
sudo kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=dockerhubuser --docker-password=XXXX [email protected]
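A quick optional check that the secret landed:
sudo kubectl get secret regcred --output=yaml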
Now create Docker-in-Docker. Save the following as dind.yaml (the file name is our choice):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: gocd
    component: agent-dind
  name: gocd-agent-dind
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: gocd
      component: agent-dind
      release: gocd
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: gocd
        component: agent-dind
        release: gocd
    spec:
      containers:
      - env:
        - name: GO_SERVER_URL
          value: http://gocd-server:8153/go
        image: gocd/gocd-agent-docker-dind:v20.7.0
        imagePullPolicy: IfNotPresent
        name: gocd-agent-dind
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /root/.ssh
          name: ssh-secrets
          readOnly: true
        - mountPath: /root/.docker
          name: kaniko-secret
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 0
        runAsGroup: 0
        runAsUser: 0
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      volumes:
      - name: ssh-secrets
        secret:
          defaultMode: 256
          secretName: gocd-ssh
      - name: kaniko-secret
        secret:
          secretName: regcred
          items:
          - key: .dockerconfigjson
            path: config.json
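Then apply it (dind.yaml being the file name chosen above):
sudo kubectl apply -f dind.yaml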
Make sure to enable the Agents when they come up on the GoCD Dashboard. Add every server to the "development" environment. Also, assign Linux servers to "mingw" and "linux". Assign the dind agents to "dind".
For GitLab, go to https://ci.longdomain.example/go/admin/security/auth_configs and select Create new authorization configuration -> gitlab-auth-config / GitLab Authentication plugin, then follow the documentation here: https://github.com/gocd-contrib/gitlab-oauth-authorization-plugin/blob/master/INSTALL.md - Do not check Allow only known users to login yet. If this works, you can skip the text auth step and corresponding passwd commands.
Create Guest login:
- Go to auth_configs, Create new authorization configuration -> guest-login / Guest Login Plugin. Fill out Go server URL, Username "view", Display name "Guest".
- Now, go to Roles Management. Create role "guest". Add Deny for all types and Resources "*" as desired.
- In an Incognito window, visit the CI system and log in as Guest. Close the incognito window.
- Now, go to Users Management. Select view / Guest and select Roles -> "guest".
- Admin -> Pipelines. Select Pipeline Group "beta", click + on the top right of the whole group, go to Roles, add "guest", and only check View. Save this.
At this point, Guest should have permission to view pipelines, see logs, and download artifacts, but nothing else.
For text auth, go to https://ci.example.com/go/admin/security/auth_configs and select Create new authorization configuration -> file-auth-config / Password File Authentication plugin, with password file path /godata/config/password.properties.
# Substitute your actual gocd-server pod name below (find it with: sudo kubectl get pods)
sudo kubectl exec gocd-server-6d77846995-5l244 -- touch /godata/config/password.properties
sudo yum install httpd-tools
htpasswd -c -B passwd admin
cat passwd | sudo kubectl exec -i gocd-server-6d77846995-5l244 -- tee /godata/config/password.properties
Now go to the users page, edit your user, and enable Global Admin.
Now go to file-auth-config, edit the configuration, and enable Allow only known users to login.
Go to ADMIN -> Config Repositories
- Config repository Name: groups-gocd-pipelines
- Plugin ID: JSON Configuration Plugin
- Material Type: Git
- URL: https://github.com/V-Sekai/groups-gocd-pipelines
- Branch: master
- GoCD pipeline files pattern: *.gopipeline.json
- GoCD environment files pattern: *.goenvironment.json
Rules:
- Allow: Pipeline Group: beta
- Allow: Environment: development
wget https://github.com/fluxcd/flux/releases/download/1.20.2/fluxctl_linux_amd64
sudo cp fluxctl_linux_amd64 /usr/local/bin/fluxctl
sudo chmod +x /usr/local/bin/fluxctl
sudo helm repo add fluxcd https://charts.fluxcd.io
sudo kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/crds.yaml
sudo kubectl create namespace flux
sudo fluxctl identity --k8s-fwd-ns flux
Fork the flux-config repository from https://github.com/V-Sekai/flux-config into your own GitHub account, and set GHUSER to your GitHub username.
Now, in your fork of flux-config, go to project Settings -> Deploy Keys and add the result of the above identity command. Make sure to check Allow write access.
Once you have done this, you can continue with the flux setup using your newly forked repository.
export GHUSER="xxxxxxxxx"
sudo fluxctl install --git-user=${GHUSER} --git-email=${GHUSER}@users.noreply.github.com --git-url=git@github.com:${GHUSER}/flux-config --git-path=workloads --namespace=flux > fluxcmd_install.yaml
sudo kubectl apply -f fluxcmd_install.yaml
Subsequent fluxctl commands need --k8s-fwd-ns flux, for example:
sudo fluxctl list-workloads --k8s-fwd-ns flux
FOR DEBUGGING ONLY: sudo setenforce permissive (if this appears to have no effect, SELinux is not the cause and there is a different problem).
Save the following as cockroachdb.values.yaml:
statefulset:
  resources:
    limits:
      memory: "8Gi"
    requests:
      memory: "8Gi"
conf:
  cache: "2Gi"
  max-sql-memory: "2Gi"
tls:
  enabled: true
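The install below assumes the cockroachdb chart repository has been added; if it has not, add it first:
sudo helm repo add cockroachdb https://charts.cockroachdb.com/
sudo helm repo update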
sudo helm install cockroachdb --values cockroachdb.values.yaml cockroachdb/cockroachdb
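The node and root client certificates appear as pending CSRs (that is where the names below come from); list them with:
sudo kubectl get csr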
sudo kubectl certificate approve default.node.cockroachdb-0
sudo kubectl certificate approve default.node.cockroachdb-1
sudo kubectl certificate approve default.node.cockroachdb-2
sudo kubectl certificate approve default.client.root
curl -o client-secure.yaml https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
sudo kubectl apply -f client-secure.yaml
sudo kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public
In SQL, write:
CREATE DATABASE uro_prod;
CREATE USER 'uro-prod' WITH PASSWORD 'blablablablaSOMEDATABASEPASSWORD';
GRANT ALL ON DATABASE uro_prod to "uro-prod";
To make backups:
sudo kubectl exec -it cockroachdb-client-secure -- ./cockroach dump --certs-dir=/cockroach-certs --host=cockroachdb-public uro_prod > uro_prod_backup.txt
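To restore from such a backup (a sketch; the cockroach sql shell executes statements from stdin):
cat uro_prod_backup.txt | sudo kubectl exec -i cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public --database=uro_prod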
Apply secrets:
(On dev machine) MIX_ENV=prod mix phx.gen.secret
# Copy the output of above, and copy the database password from above:
sudo kubectl create secret generic uro-prod --from-literal=secret-key-base='GENERATED_WITH_ABOVE_COMMAND' --from-literal=pass='blablablablaSOMEDATABASEPASSWORD'
sudo kubectl apply -f https://raw.githubusercontent.com/V-Sekai/uro/master/kubernetes.yaml
If this enhancement will not be used often, can it be worked around with a few lines of script?:
Yes, it is possible to deploy this system using Docker or directly on the host system.
Is there a reason why this should be core and not an add-on in the asset library?:
Having experience with Kubernetes, and maintaining that discipline, will make scaling and service upgrades smoother in the future.
Copyright (c) 2014-2019 Godot Engine contributors.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.