
fix(argo-workflows): Add access to be able to see pod information #3192

Open · wants to merge 3 commits into base: main
Conversation

@yorickdevries commented Mar 6, 2025

According to https://argo-workflows.readthedocs.io/en/latest/security/#ui-access, UI users need access to pods and pod logs for workflows to be properly displayed and updated in the Argo Workflows UI.
In this PR I added these to the view / edit / admin aggregate roles:

Replaces PR: #3191


- apiGroups:
    - ""
  resources:
    - events
    - pods
    - pods/log
  verbs:
    - get
    - list
    - watch

Checklist:

  • I have bumped the chart version according to versioning
  • I have updated the documentation according to documentation
  • I have updated the chart changelog with all the changes that come with this pull request according to changelog.
  • Any new values are backwards compatible and/or have sensible defaults.
  • I have signed off all my commits as required by DCO.
  • My build is green (troubleshooting builds).

@tico24 (Member) commented Mar 6, 2025

I'll repeat my comment from #3191

I have concerns with this.

Upstream does not include these permissions in their aggregate-to-* roles/clusterroles. So this should be addressed upstream first.

I asked around the Workflows team, and these roles are intended for kubectl users: someone with admin access at the kubectl level can admin the Argo Workflows CRs, someone with view can see them, and so on. They aren't intended to be used directly, but rather aggregated into the built-in roles: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles. So they are certainly not intended to be used directly by the argo server. To hammer this home: a kubectl user with view access to the cluster should already be able to view pod logs independently of this aggregation RBAC.

This is hopefully emphasised by the fact that these aggregated roles are in the controller directory, not the server one.

However, the argo server should be using the argo-server role/cluster role, which does already have these permissions anyway, so this PR shouldn't be required.
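For readers unfamiliar with the aggregation mechanism tico24 refers to: Kubernetes merges the rules of any ClusterRole carrying an `aggregate-to-*` label into the corresponding built-in role. A minimal sketch (the role name here is illustrative, not from this chart):

```yaml
# Sketch of ClusterRole aggregation: any ClusterRole labeled
# rbac.authorization.k8s.io/aggregate-to-view: "true" has its rules
# automatically merged into the built-in "view" ClusterRole by the
# cluster's controller manager. The chart's aggregate roles work the
# same way; the name below is illustrative only.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argo-workflows-aggregate-to-view-example
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - workflows
    verbs:
      - get
      - list
      - watch
```

This is why the aggregate roles target kubectl users: their rules end up inside the cluster-wide view/edit/admin roles rather than being bound to the argo server's service account.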

@yorickdevries (Author) commented Mar 6, 2025

@tico24 Thanks for the clarification. From the helm charts it wasn't clear what these aggregate roles were meant for. I am using the -view role for the service account associated with people logging in via SSO. These users can then view the workflows without being able to edit them. However, a live feed from the pods isn't available now, as this access is not part of the aggregate roles. In this PR I therefore added this access on top of the argo-workflow resource access.
Is there another existing role I can use instead for the service account associated with people logging in via SSO? See also https://argo-workflows.readthedocs.io/en/latest/security/#ui-access.

@tico24 (Member) commented Mar 6, 2025

I actually don't know... which is a bit embarrassing.

There doesn't seem to be anything in the codebase. The implication appears to be that you are supposed to roll your own. Hopefully one of the other helm team members knows.
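Rolling your own for this use case might look like the following sketch. All names are illustrative, and the resource list is taken from the permissions discussed in this thread and the linked Argo Workflows UI-access docs; this is an assumption of what a hand-rolled setup could look like, not something shipped by the chart:

```yaml
# Hypothetical hand-rolled Role + RoleBinding for an SSO UI service
# account (names "argo-ui-sso-view" / "argo-ui-sso" and namespace
# "argo" are illustrative). Grants read-only access to Workflow CRs
# plus the pod/log access the Argo UI needs for live pod feeds.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-ui-sso-view
  namespace: argo
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["workflows", "workflowtemplates", "cronworkflows"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events", "pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argo-ui-sso-view
  namespace: argo
subjects:
  - kind: ServiceAccount
    name: argo-ui-sso
    namespace: argo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argo-ui-sso-view
```

A namespaced Role keeps the grant scoped to the workflow namespace; a ClusterRole/ClusterRoleBinding pair would be needed instead if SSO users should see workflows cluster-wide.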

@yorickdevries (Author) commented
@tico24 Ok, I'd gladly hear whether they know more about how to tackle this SSO role. And if nothing exists yet, is it something that could be added to the helm chart?

@tico24 (Member) commented Mar 6, 2025

Potentially but I think you'd struggle to make it nicely templatable.

Unrelated, but I have hopefully helped to clarify why aggregateRoles exist in this PR: #3193
