@@ -71,14 +71,14 @@ clear what is expected, this document will use the following conventions.
If the command "COMMAND" is expected to run in a `Pod` and produce "OUTPUT":

```console
- pod$ COMMAND
+ u@pod$ COMMAND
OUTPUT
```

If the command "COMMAND" is expected to run on a `Node` and produce "OUTPUT":

```console
- node$ COMMAND
+ u@node$ COMMAND
OUTPUT
```

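One way to get the `Pod` prompt used above is to exec into any running `Pod` that has a shell; this is only a sketch, and the `busybox` `Pod` name here is a hypothetical stand-in for whatever `Pod` you actually use:

```console
$ kubectl exec -ti busybox -- sh
/ # COMMAND
OUTPUT
```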
@@ -187,14 +187,14 @@ have another `Pod` that consumes this `Service` by name you would get something
like:

```console
- pod$ wget -qO- hostnames
+ u@pod$ wget -qO- hostnames
wget: bad address 'hostname'
```

or:

```console
- pod$ echo $HOSTNAMES_SERVICE_HOST
+ u@pod$ echo $HOSTNAMES_SERVICE_HOST

```

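If you are unsure which environment variables a `Pod` actually received, you can simply dump them; a sketch, with no output shown because the set depends entirely on your cluster:

```console
u@pod$ env | grep SERVICE
```

Keep in mind that these variables are populated when the `Pod` is created, so a `Service` created afterwards will not show up until the `Pod` is restarted.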
@@ -246,7 +246,7 @@ Now you can confirm that the `Service` exists.
From a `Pod` in the same `Namespace`:

```console
- pod$ nslookup hostnames
+ u@pod$ nslookup hostnames
Server: 10.0.0.10
Address: 10.0.0.10#53

@@ -258,7 +258,7 @@ If this fails, perhaps your `Pod` and `Service` are in different
`Namespace`s, try a namespace-qualified name:

```console
- pod$ nslookup hostnames.default
+ u@pod$ nslookup hostnames.default
Server: 10.0.0.10
Address: 10.0.0.10#53

@@ -270,7 +270,7 @@ If this works, you'll need to ensure that `Pod`s and `Service`s run in the same
`Namespace`. If this still fails, try a fully-qualified name:

```console
- pod$ nslookup hostnames.default.svc.cluster.local
+ u@pod$ nslookup hostnames.default.svc.cluster.local
Server: 10.0.0.10
Address: 10.0.0.10#53

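Which of these forms resolves depends on the DNS search path inside the `Pod`, which you can inspect directly. The values below are only a sketch for a `Pod` in the `default` namespace of a cluster using the `cluster.local` domain and a DNS `Service` at 10.0.0.10:

```console
u@pod$ cat /etc/resolv.conf
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```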
@@ -286,7 +286,7 @@ You can also try this from a `Node` in the cluster (note: 10.0.0.10 is my DNS
`Service`):

```console
- node$ nslookup hostnames.default.svc.cluster.local 10.0.0.10
+ u@node$ nslookup hostnames.default.svc.cluster.local 10.0.0.10
Server: 10.0.0.10
Address: 10.0.0.10#53

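If you are not sure what your own DNS `Service` IP is, you can look it up first; a sketch, assuming the standard `kube-dns` `Service` in the `kube-system` namespace:

```console
$ kubectl get service kube-dns --namespace=kube-system
```

The cluster IP shown there is the address to hand to `nslookup`.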
@@ -308,7 +308,7 @@ can take a step back and see what else is not working. The Kubernetes master
`Service` should always work:

```console
- pod$ nslookup kubernetes.default
+ u@pod$ nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10

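If even that lookup fails, the problem is more likely DNS itself than your `Service`, so it is worth checking that the DNS `Pod`s are running at all; a sketch, assuming a standard `kube-dns` deployment labelled `k8s-app=kube-dns`:

```console
$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
```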
@@ -326,13 +326,13 @@ The next thing to test is whether your `Service` works at all. From a
`Node` in your cluster, access the `Service`'s IP (from `kubectl get` above).

```console
- node$ curl 10.0.1.175:80
+ u@node$ curl 10.0.1.175:80
hostnames-0uton

- node$ curl 10.0.1.175:80
+ u@node$ curl 10.0.1.175:80
hostnames-yp2kp

- node$ curl 10.0.1.175:80
+ u@node$ curl 10.0.1.175:80
hostnames-bvc05
```

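If none of those requests answer, a quick cross-check is whether the `Service` has any endpoints at all; a sketch, where the addresses shown are simply the `Pod` IPs used later in this document:

```console
$ kubectl get endpoints hostnames
NAME        ENDPOINTS
hostnames   10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
```

An empty `ENDPOINTS` column would point at a label-selector or `Pod` problem rather than at the `Service` IP itself.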
@@ -431,13 +431,13 @@ Let's check that the `Pod`s are actually working - we can bypass the `Service`
mechanism and go straight to the `Pod`s.

```console
- pod$ wget -qO- 10.244.0.5:9376
+ u@pod$ wget -qO- 10.244.0.5:9376
hostnames-0uton

pod $ wget -qO- 10.244.0.6:9376
hostnames-bvc05

- pod$ wget -qO- 10.244.0.7:9376
+ u@pod$ wget -qO- 10.244.0.7:9376
hostnames-yp2kp
```

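The `Pod` IPs used above can be listed directly; a sketch, assuming the `Pod`s carry an `app=hostnames` label, which is not guaranteed and depends on how you created them:

```console
$ kubectl get pods -l app=hostnames -o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
10.244.0.5
10.244.0.6
10.244.0.7
```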
@@ -459,7 +459,7 @@ Confirm that `kube-proxy` is running on your `Node`s. You should get something
like the below:

```console
- node$ ps auxw | grep kube-proxy
+ u@node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2
```

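If `kube-proxy` is not in that process list, the next place to look is its log; where it lives depends on how your `Node`s run `kube-proxy`, so the path below is only a common default rather than a certainty:

```console
u@node$ tail -n 50 /var/log/kube-proxy.log
```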
@@ -500,7 +500,7 @@ rules which implement `Service`s. Let's check that those rules are getting
written.

```console
- node$ iptables-save | grep hostnames
+ u@node$ iptables-save | grep hostnames
-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
```
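The `KUBE-PORTALS-*` entries above are what the userspace proxy writes. If your `kube-proxy` runs in iptables mode instead, the same check should turn up differently named chains; the exact chain suffixes are hashes and will differ per cluster:

```console
u@node$ iptables-save | grep hostnames
# iptables mode: expect KUBE-SVC-<hash> and KUBE-SEP-<hash> chains tagged "default/hostnames" instead of KUBE-PORTALS-*
```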
@@ -515,7 +515,7 @@ then look at the logs again.
Assuming you do see the above rules, try again to access your `Service` by IP:

```console
- node$ curl 10.0.1.175:80
+ u@node$ curl 10.0.1.175:80
hostnames-0uton
```

@@ -525,7 +525,7 @@ using for your `Service`. In the above examples it is "48577". Now connect to
that:

```console
- node$ curl localhost:48577
+ u@node$ curl localhost:48577
hostnames-yp2kp
```

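You can also confirm that something is listening on that port from the `Node` side; a sketch, assuming the userspace proxy port 48577 from the rules above and a `Node` with `netstat` available:

```console
u@node$ sudo netstat -lntp | grep 48577
```

The owning process on that line should be `kube-proxy` itself, since in userspace mode it is the proxy that accepts the connection and forwards it to a backend `Pod`.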