Add a devworkspace pruner to the DevWorkspace Operator #1376

Open
cgruver opened this issue Feb 7, 2025 · 5 comments · May be fixed by #1402 or #1397
Comments


cgruver commented Feb 7, 2025

Description

A large-scale deployment of Eclipse Che / OpenShift Dev Spaces can accumulate many stale DevWorkspace objects that are no longer needed but continue to occupy space in etcd.

Over time, etcd performance can degrade, forcing the control plane nodes to be scaled up with more CPU/RAM.

Additional context

Here is a prototype for implementing a devworkspace pruner based on the last time that a workspace was started:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: devworkspace-pruner
  namespace: openshift-operators
spec:
  schedule: "0 0 1 * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          volumes: 
          - name: script
            configMap:
              name: devworkspace-pruner
              defaultMode: 0555
              items:
              - key: devworkspace-pruner
                path: devworkspace-pruner.sh
          restartPolicy: OnFailure
          serviceAccountName: devworkspace-controller-serviceaccount
          containers:
          - name: openshift-cli
            image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
            env:
            - name: RETAIN_TIME
              # 30 days
              value: "2592000"
            command:
            - /script/devworkspace-pruner.sh
            resources:
              requests:
                cpu: 100m
                memory: 64Mi
              limits:
                cpu: 100m
                memory: 64Mi
            volumeMounts:
            - mountPath: /script
              name: script
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: devworkspace-pruner
  namespace: openshift-operators
data:
  devworkspace-pruner: |
    #!/usr/bin/env bash
    # Delete DevWorkspaces whose "Started" condition has not changed for more than RETAIN_TIME seconds.
    current_time=$(date +%s)
    for namespace in $(oc get namespaces -l app.kubernetes.io/component=workspaces-namespace -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
    do
      for workspace in $(oc get devworkspaces -n "${namespace}" -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
      do
        # Timestamp of the workspace's most recent "Started" condition transition.
        last_transition=$(oc get devworkspace "${workspace}" -n "${namespace}" -o go-template='{{range .status.conditions}}{{if eq .type "Started"}}{{.lastTransitionTime}}{{end}}{{end}}')
        if [[ -z "${last_transition}" ]]
        then
          continue
        fi
        last_start=$(date -d "${last_transition}" +%s)
        workspace_age=$(( current_time - last_start ))
        if [[ ${workspace_age} -gt ${RETAIN_TIME} ]]
        then
          echo "Removing workspace: ${workspace} in ${namespace}"
          oc delete devworkspace "${workspace}" -n "${namespace}"
        fi
      done
    done

dkwon17 commented Feb 20, 2025

Hello @cgruver, how would you like to see this made configurable within the DevWorkspace Operator? For example:

apiVersion: controller.devfile.io/v1alpha1
kind: DevWorkspaceOperatorConfig
metadata:
  name: example
  namespace: openshift-operators
config:
  workspace:
    cleanupCronJob:
      enable: <bool>
      # optional (need to find a proper default for Kubernetes / OpenShift)
      image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
      # optional (defaults to 2592000)
      retainTime: <number>
      # optional (defaults to `devworkspace-pruner`)
      cronJobScript: <configmap name>

WDYT?
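
For illustration only, here is a rough sketch (in Go, against the standard k8s.io/api types) of how such a cleanupCronJob stanza might be modeled and rendered into a CronJob resembling the prototype above. The struct, field, and function names are hypothetical and are not part of the current DWO API.

package config

import (
	"strconv"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// CleanupCronJobConfig mirrors the proposed cleanupCronJob stanza above.
// Field names are illustrative only.
type CleanupCronJobConfig struct {
	Enable        bool   `json:"enable"`
	Image         string `json:"image,omitempty"`         // default would differ for Kubernetes vs. OpenShift
	RetainTime    int64  `json:"retainTime,omitempty"`    // seconds; e.g. 2592000 = 30 days
	CronJobScript string `json:"cronJobScript,omitempty"` // name of the ConfigMap holding the pruner script
}

// renderPrunerCronJob shows how the operator could turn the config into a
// CronJob similar to the manually created prototype in this issue.
func renderPrunerCronJob(namespace, schedule string, cfg CleanupCronJobConfig) *batchv1.CronJob {
	return &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "devworkspace-pruner", Namespace: namespace},
		Spec: batchv1.CronJobSpec{
			Schedule:          schedule,
			ConcurrencyPolicy: batchv1.ForbidConcurrent,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy:      corev1.RestartPolicyOnFailure,
							ServiceAccountName: "devworkspace-controller-serviceaccount",
							Containers: []corev1.Container{{
								Name:    "pruner",
								Image:   cfg.Image,
								Command: []string{"/script/devworkspace-pruner.sh"},
								Env: []corev1.EnvVar{{
									Name:  "RETAIN_TIME",
									Value: strconv.FormatInt(cfg.RetainTime, 10),
								}},
							}},
							// Mounting the cfg.CronJobScript ConfigMap as a volume is omitted for brevity.
						},
					},
				},
			},
		},
	}
}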


cgruver commented Feb 22, 2025

@dkwon17 LGTM


cgruver commented Mar 11, 2025

@dkwon17 I updated the script embedded in the ConfigMap to better identify namespaces managed by the DevWorkspace Operator:

oc get namespaces -l app.kubernetes.io/component=workspaces-namespace -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'

akurinnoy linked a pull request Mar 18, 2025 that will close this issue

dkwon17 commented Mar 18, 2025

Thank you @cgruver,

To provide an update on this issue: instead of creating a ConfigMap/CronJob resource (draft PR: #1397), we are also investigating the resource pruner described in the operator-sdk best-practices documentation: https://sdk.operatorframework.io/docs/best-practices/resource-pruning/

@akurinnoy provided a new draft PR: #1402
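
As a rough illustration of the controller-side approach (not taken from either draft PR), a single pruning pass could look like the sketch below, using a controller-runtime client with unstructured objects and assuming the workspace.devfile.io/v1alpha2 DevWorkspace API. The function name and dry-run handling are hypothetical; the logic simply mirrors the bash prototype above (delete when the "Started" condition is older than the retain time).

package pruner

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// pruneStaleDevWorkspaces deletes DevWorkspaces whose "Started" condition last
// transitioned longer than retainTime ago. With dryRun set, it only reports them.
func pruneStaleDevWorkspaces(ctx context.Context, c client.Client, retainTime time.Duration, dryRun bool) error {
	list := &unstructured.UnstructuredList{}
	list.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "workspace.devfile.io",
		Version: "v1alpha2",
		Kind:    "DevWorkspaceList",
	})
	// Listing cluster-wide here; this could instead be limited to namespaces
	// carrying the app.kubernetes.io/component=workspaces-namespace label,
	// as the bash prototype does.
	if err := c.List(ctx, list); err != nil {
		return err
	}
	for i := range list.Items {
		dw := &list.Items[i]
		conditions, _, _ := unstructured.NestedSlice(dw.Object, "status", "conditions")
		for _, raw := range conditions {
			cond, ok := raw.(map[string]interface{})
			if !ok || cond["type"] != "Started" {
				continue
			}
			ts, _ := cond["lastTransitionTime"].(string)
			lastStart, err := time.Parse(time.RFC3339, ts)
			if err != nil {
				continue
			}
			if time.Since(lastStart) > retainTime {
				fmt.Printf("pruning DevWorkspace %s/%s\n", dw.GetNamespace(), dw.GetName())
				if !dryRun {
					if err := c.Delete(ctx, dw); err != nil {
						return err
					}
				}
			}
		}
	}
	return nil
}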


dkwon17 commented Apr 1, 2025

I propose that this feature be configurable only via the global DevWorkspace Operator config (DWOC) for the time being:

apiVersion: controller.devfile.io/v1alpha1
kind: DevWorkspaceOperatorConfig
metadata:
  name: devworkspace-operator-config
  namespace: <operator install namespace>
config:
  workspace:
    cleanupCronJob:
      enable: true
      dryRun: false
      retainTime: 60
      schedule: "* * * * *"

Assuming that there can only be one pruner/CronJob running at a time, having the pruner configurable from only one DWOC makes the most sense IMO, and it should be the global DWOC that determines the DevWorkspace Operator pruner for the cluster.

The reason I bring this up is that in the case of Eclipse Che, there is a Che-owned DWOC. Today, it is not straightforward from DWO's perspective to identify which DevWorkspaces belong to Eclipse Che and which do not. As a result, it is not straightforward to define a pruner in the Che-owned DWOC that targets only Eclipse Che DevWorkspaces. Other DevWorkspaces may exist in the cluster, for example those created by the Web Terminal Operator.
