POD restarts when applying wasm deploy #104
It is for now, as we need to apply some annotations to the pod to instruct Istio to add some volumes to it.
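For context, a minimal sketch of the kind of pod-template annotations involved. This assumes Istio's standard `sidecar.istio.io/userVolume` and `sidecar.istio.io/userVolumeMount` sidecar annotations; the exact annotation keys and volume paths wasme writes may differ:

```yaml
# Hypothetical sketch: pod-template annotations asking the Istio sidecar
# injector to mount an extra volume (e.g. one exposing the wasme filter cache).
# The annotation keys are standard Istio sidecar annotations; the volume name
# and hostPath below are illustrative assumptions, not wasme's actual values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/userVolume: '[{"name":"wasm-filters","hostPath":{"path":"/var/local/lib/wasme-cache"}}]'
        sidecar.istio.io/userVolumeMount: '[{"name":"wasm-filters","mountPath":"/var/local/lib/wasme-cache"}]'
```

Because these annotations live on the pod template, adding them changes the pod spec, so Kubernetes performs a rolling update: the old pods terminate and new ones are created.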
Super. Thanks for the prompt response. Will wait for the new release.
Just an FYI on what I observed (Istio 1.5.1, minikube, Kubernetes 1.17.3), plus a few more observations on the constant pod restarts caused by repeatedly applying newer versions of the wasm filters.

The first time I apply `wasme deploy` to petstore, it hangs and does not succeed after logging `INFO[0003] added image to cache config... cache="{wasme-cache wasme}" image="webassemblyhub.io/sriramcm/demo-add-header:v0.7"`. The petstore istio-proxy logs show something like `...s Pilot running?): cds updates: 1 successful, 0 rejected; lds updates: 0 successful, 1 rejected`. I need to kill it and then run `wasme deploy` again to succeed.

Basically I found it was sometimes erratic. Overall it took me a few minutes to get a filter applied, from a successful push to webassemblyhub.io to a pull on the Kubernetes cluster followed by the deploy.
Related to #67
Hi,

I went through the sample tutorial to add a wasm filter to the bookinfo sample in Istio service mesh 1.5.1:
https://docs.solo.io/web-assembly-hub/latest/tutorial_code/

Does `wasme deploy` force a pod restart while applying the filter? Is this observation correct? I see the old pods terminating and new ones getting created, for example when I apply this to the details pod in the bookinfo example.