
Rancher RKE2 Cluster in AWS - Target Group Empty due to providerID invalid error #3977

Open
albertmorenomng opened this issue Dec 11, 2024 · 4 comments
Labels: kind/question, lifecycle/rotten

Comments

@albertmorenomng

Hello,

Deployed a Rancher RKE2 cluster in AWS using Amazon EC2 nodes, with the cloud provider set to "RKE2 Embedded" (not an EKS cluster).
Installed the AWS Load Balancer Controller v2.10.1 (chart version 1.10.1).

When creating an Ingress, the controller successfully creates the Load Balancer and Target Group, but the Target Group stays empty (no instances registered). The ALB controller logs show this message:

{"level":"error","ts":"2024-12-10T15:11:34Z","msg":"Reconciler error","controller":"targetGroupBinding","controllerGroup":"elbv2.k8s.aws",
"controllerKind":"TargetGroupBinding","TargetGroupBinding":{"name":"k8s-game2048-service2-59aeb62d97","namespace":"game-2048"},"namespace":"game-2048","name":"k8s-game2048-service2-59aeb62d97","reconcileID":"3d2fe906-09e7-4be6-8869-87fcc2e72d14","error":"providerID rke2://amazonec2-workerpool01-9gxf5-fmnsm is invalid for EC2 instances, node: amazonec2-workerpool01-9gxf5-fmnsm"}

It seems that the controller does not recognize the providerID.
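
For context, here is a minimal sketch of the kind of providerID check that appears to produce this error (an illustration, not the controller's actual code). The AWS cloud provider sets a node's `spec.providerID` to `aws:///<availability-zone>/<instance-id>`, while the RKE2 embedded provider sets `rke2://<node-name>`, which cannot be mapped to an EC2 instance ID:

```go
package main

import (
	"fmt"
	"strings"
)

// instanceIDFromProviderID extracts an EC2 instance ID from a node's
// spec.providerID. Hypothetical helper for illustration only.
func instanceIDFromProviderID(providerID string) (string, error) {
	// The AWS cloud provider sets providerID as
	// "aws:///<availability-zone>/<instance-id>".
	if !strings.HasPrefix(providerID, "aws://") {
		return "", fmt.Errorf("providerID %s is invalid for EC2 instances", providerID)
	}
	parts := strings.Split(providerID, "/")
	instanceID := parts[len(parts)-1]
	// EC2 instance IDs start with "i-".
	if !strings.HasPrefix(instanceID, "i-") {
		return "", fmt.Errorf("providerID %s is invalid for EC2 instances", providerID)
	}
	return instanceID, nil
}

func main() {
	for _, id := range []string{
		"aws:///eu-west-1a/i-0123456789abcdef0",     // accepted
		"rke2://amazonec2-workerpool01-9gxf5-fmnsm", // rejected, as in the log above
	} {
		instanceID, err := instanceIDFromProviderID(id)
		fmt.Println(instanceID, err)
	}
}
```

Since the target group uses instance targets, the controller has to resolve each node to an EC2 instance ID; with an `rke2://` providerID that resolution fails, which would explain why the target group stays empty.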

Any ideas on how to solve this? Is it a bug?

Regards

@shraddhabang (Collaborator)

We don't support this cloud provider yet. Will you be able to use EC2 instead?
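
For reference, a quick way to confirm what the nodes report is to list `spec.providerID` across the cluster. A minimal client-go sketch (assuming in-cluster credentials and RBAC to list nodes):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs inside the cluster with a service
	// account allowed to list nodes.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print each node name alongside its providerID.
	for _, n := range nodes.Items {
		fmt.Printf("%s\t%s\n", n.Name, n.Spec.ProviderID)
	}
}
```

With the AWS cloud provider the output shows `aws:///<az>/<instance-id>` entries the controller can resolve; with the RKE2 embedded provider it shows `rke2://<node-name>`, which is why target registration fails.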

@albertmorenomng (Author)

Thanks @shraddhabang

At this moment we cannot use the EC2 type.

Do you know approximately when the RKE2 cloud provider will be supported? Is it on your roadmap?

Regards

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Mar 12, 2025
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 11, 2025