@@ -70,21 +70,21 @@ clear what is expected, this document will use the following conventions.

If the command "COMMAND" is expected to run in a `Pod` and produce "OUTPUT":

-```sh
+```console
pod$ COMMAND
OUTPUT
```

If the command "COMMAND" is expected to run on a `Node` and produce "OUTPUT":

-```sh
+```console
node$ COMMAND
OUTPUT
```

If the command is "kubectl ARGS":

-```sh
+```console
$ kubectl ARGS
OUTPUT
```
@@ -95,7 +95,7 @@ For many steps here you will want to see what a `Pod` running in the cluster
sees.  Kubernetes does not directly support interactive `Pod`s (yet), but you can
approximate it:

-```sh
+```console
$ cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
@@ -115,13 +115,13 @@ pods/busybox-sleep
Now, when you need to run a command (even an interactive shell) in a `Pod`-like
context, use:

-```sh
+```console
$ kubectl exec busybox-sleep -- <COMMAND>
```

or

-```sh
+```console
$ kubectl exec -ti busybox-sleep sh
/ #
```
@@ -132,7 +132,7 @@ For the purposes of this walk-through, let's run some `Pod`s.  Since you're
probably debugging your own `Service` you can substitute your own details, or you
can follow along and get a second data point.

-```sh
+```console
$ kubectl run hostnames --image=gcr.io/google_containers/serve_hostname \
            --labels=app=hostnames \
            --port=9376 \
@@ -168,7 +168,7 @@ spec:

Confirm your `Pod`s are running:

-```sh
+```console
$ kubectl get pods -l app=hostnames
NAME              READY     STATUS    RESTARTS   AGE
hostnames-0uton   1/1       Running   0          12s
@@ -186,37 +186,37 @@ So what would happen if I tried to access a non-existent `Service`?  Assuming you
have another `Pod` that consumes this `Service` by name you would get something
like:

-```sh
+```console
pod$ wget -qO- hostnames
wget: bad address 'hostname'
```

or:

-```sh
+```console
pod$ echo $HOSTNAMES_SERVICE_HOST

```

So the first thing to check is whether that `Service` actually exists:

-```sh
+```console
$ kubectl get svc hostnames
Error from server: service "hostnames" not found
```

So we have a culprit, let's create the `Service`.  As before, this is for the
walk-through - you can use your own `Service`'s details here.

-```sh
+```console
$ kubectl expose rc hostnames --port=80 --target-port=9376
NAME        LABELS          SELECTOR        IP(S)     PORT(S)
hostnames   app=hostnames   app=hostnames             80/TCP
```

And read it back, just to be sure:

-```sh
+```console
$ kubectl get svc hostnames
NAME        LABELS          SELECTOR        IP(S)        PORT(S)
hostnames   app=hostnames   app=hostnames   10.0.1.175   80/TCP
@@ -245,7 +245,7 @@ Now you can confirm that the `Service` exists.

From a `Pod` in the same `Namespace`:

-```sh
+```console
pod$ nslookup hostnames
Server:    10.0.0.10
Address:   10.0.0.10#53
@@ -257,7 +257,7 @@ Address: 10.0.1.175
If this fails, perhaps your `Pod` and `Service` are in different
`Namespace`s, try a namespace-qualified name:

-```sh
+```console
pod$ nslookup hostnames.default
Server:    10.0.0.10
Address:   10.0.0.10#53
@@ -269,7 +269,7 @@ Address: 10.0.1.175
If this works, you'll need to ensure that `Pod`s and `Service`s run in the same
`Namespace`.  If this still fails, try a fully-qualified name:

-```sh
+```console
pod$ nslookup hostnames.default.svc.cluster.local
Server:    10.0.0.10
Address:   10.0.0.10#53
@@ -285,7 +285,7 @@ The "cluster.local" is your cluster domain.
You can also try this from a `Node` in the cluster (note: 10.0.0.10 is my DNS
`Service`):

-```sh
+```console
node$ nslookup hostnames.default.svc.cluster.local 10.0.0.10
Server:    10.0.0.10
Address:   10.0.0.10#53
@@ -307,7 +307,7 @@ If the above still fails - DNS lookups are not working for your `Service` - we
can take a step back and see what else is not working.  The Kubernetes master
`Service` should always work:

-```sh
+```console
pod$ nslookup kubernetes.default
Server:    10.0.0.10
Address 1: 10.0.0.10
@@ -325,7 +325,7 @@ debugging your own `Service`, debug DNS.
The next thing to test is whether your `Service` works at all.  From a
`Node` in your cluster, access the `Service`'s IP (from `kubectl get` above).

-```sh
+```console
node$ curl 10.0.1.175:80
hostnames-0uton
@@ -345,7 +345,7 @@ It might sound silly, but you should really double and triple check that your
`Service` is correct and matches your `Pod`s.  Read back your `Service` and
verify it:

-```sh
+```console
$ kubectl get service hostnames -o json
{
    "kind": "Service",
@@ -398,7 +398,7 @@ actually being selected by the `Service`.

Earlier we saw that the `Pod`s were running.  We can re-check that:

-```sh
+```console
$ kubectl get pods -l app=hostnames
NAME              READY     STATUS    RESTARTS   AGE
hostnames-0uton   1/1       Running   0          1h
@@ -413,7 +413,7 @@ The `-l app=hostnames` argument is a label selector - just like our `Service`
has.  Inside the Kubernetes system is a control loop which evaluates the
selector of every `Service` and saves the results into an `Endpoints` object.

-```sh
+```console
$ kubectl get endpoints hostnames
NAME        ENDPOINTS
hostnames   10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
@@ -430,7 +430,7 @@ At this point, we know that your `Service` exists and has selected your `Pod`s.
Let's check that the `Pod`s are actually working - we can bypass the `Service`
mechanism and go straight to the `Pod`s.

-```sh
+```console
pod$ wget -qO- 10.244.0.5:9376
hostnames-0uton
@@ -458,7 +458,7 @@ suspect.  Let's confirm it, piece by piece.
Confirm that `kube-proxy` is running on your `Node`s.  You should get something
like the below:

-```sh
+```console
node$ ps auxw | grep kube-proxy
root      4194  0.4  0.1 101864 17696 ?   Sl   Jul04  25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2
```
@@ -469,7 +469,7 @@ depends on your `Node` OS.  On some OSes it is a file, such as
/var/log/kube-proxy.log, while other OSes use `journalctl` to access logs.  You
should see something like:

-```
+```console
I0707 17:34:53.945651   30031 server.go:88] Running in resource-only container "/kube-proxy"
I0707 17:34:53.945921   30031 proxier.go:121] Setting proxy IP to 10.240.115.247 and initializing iptables
I0707 17:34:54.053023   30031 roundrobin.go:262] LoadBalancerRR: Setting endpoints for default/kubernetes: to [10.240.169.188:443]
@@ -499,7 +499,7 @@ One of the main responsibilities of `kube-proxy` is to write the `iptables`
rules which implement `Service`s.  Let's check that those rules are getting
written.

-```
+```console
node$ iptables-save | grep hostnames
-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
@@ -514,7 +514,7 @@ then look at the logs again.

Assuming you do see the above rules, try again to access your `Service` by IP:

-```sh
+```console
node$ curl 10.0.1.175:80
hostnames-0uton
```
@@ -524,14 +524,14 @@ If this fails, we can try accessing the proxy directly.  Look back at the
using for your `Service`.  In the above examples it is "48577".  Now connect to
that:

-```sh
+```console
node$ curl localhost:48577
hostnames-yp2kp
```

If this still fails, look at the `kube-proxy` logs for specific lines like:

-```
+```console
Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
```