
Commit 12e4e8f

Make docs links go through docs.k8s.io
1 parent e8b28c5 · commit 12e4e8f


27 files changed (+127, -127 lines)


CHANGELOG.md (+1, -1)

@@ -2,7 +2,7 @@

## 0.15.0
* Enables v1beta3 API and sets it to the default API version (#6098)
-* See the [v1beta3 conversion guide](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api.md#v1beta3-conversion-tips)
+* See the [v1beta3 conversion guide](http://docs.k8s.io/api.md#v1beta3-conversion-tips)
* Added multi-port Services (#6182)
* New Getting Started Guides
* Multi-node local startup guide (#6505)

docs/availability.md (+1, -1)

@@ -120,7 +120,7 @@ then you need `R + U` clusters. If it is not (e.g you want to ensure low latenc
cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.

Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
-you may need even more clusters. Our [roadmap](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/roadmap.md)
+you may need even more clusters. Our [roadmap](http://docs.k8s.io/roadmap.md)
calls for maximum 100 node clusters at v1.0 and maximum 1000 node clusters in the middle of 2015.

## Working with multiple clusters

docs/design/networking.md (+1, -1)

@@ -83,7 +83,7 @@ We want to be able to assign IP addresses externally from Docker ([Docker issue

In addition to enabling self-registration with 3rd-party discovery mechanisms, we'd like to setup DDNS automatically ([Issue #146](https://github.com/GoogleCloudPlatform/kubernetes/issues/146)). hostname, $HOSTNAME, etc. should return a name for the pod ([Issue #298](https://github.com/GoogleCloudPlatform/kubernetes/issues/298)), and gethostbyname should be able to resolve names of other pods. Probably we need to set up a DNS resolver to do the latter ([Docker issue #2267](https://github.com/dotcloud/docker/issues/2267)), so that we don't need to keep /etc/hosts files up to date dynamically.

-[Service](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md) endpoints are currently found through environment variables. Both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) variables and kubernetes-specific variables ({NAME}_SERVICE_HOST and {NAME}_SERVICE_BAR) are supported, and resolve to ports opened by the service proxy. We don't actually use [the Docker ambassador pattern](https://docs.docker.com/articles/ambassador_pattern_linking/) to link containers because we don't require applications to identify all clients at configuration time, yet. While services today are managed by the service proxy, this is an implementation detail that applications should not rely on. Clients should instead use the [service portal IP](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md) (which the above environment variables will resolve to). However, a flat service namespace doesn't scale and environment variables don't permit dynamic updates, which complicates service deployment by imposing implicit ordering constraints. We intend to register each service portal IP in DNS, and for that to become the preferred resolution protocol.
+[Service](http://docs.k8s.io/services.md) endpoints are currently found through environment variables. Both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) variables and kubernetes-specific variables ({NAME}_SERVICE_HOST and {NAME}_SERVICE_BAR) are supported, and resolve to ports opened by the service proxy. We don't actually use [the Docker ambassador pattern](https://docs.docker.com/articles/ambassador_pattern_linking/) to link containers because we don't require applications to identify all clients at configuration time, yet. While services today are managed by the service proxy, this is an implementation detail that applications should not rely on. Clients should instead use the [service portal IP](http://docs.k8s.io/services.md) (which the above environment variables will resolve to). However, a flat service namespace doesn't scale and environment variables don't permit dynamic updates, which complicates service deployment by imposing implicit ordering constraints. We intend to register each service portal IP in DNS, and for that to become the preferred resolution protocol.

We'd also like to accommodate other load-balancing solutions (e.g., HAProxy), non-load-balanced services ([Issue #260](https://github.com/GoogleCloudPlatform/kubernetes/issues/260)), and other types of groups (worker pools, etc.). Providing the ability to Watch a label selector applied to pod addresses would enable efficient monitoring of group membership, which could be directly consumed or synced with a discovery mechanism. Event hooks ([Issue #140](https://github.com/GoogleCloudPlatform/kubernetes/issues/140)) for join/leave events would probably make this even easier.
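
Editor's aside (not part of the commit): a minimal sketch of what the environment-variable discovery described in the changed paragraph looks like from inside a container. The service name `redis-master`, the port `6379`, and the `{NAME}_SERVICE_HOST`/`{NAME}_SERVICE_PORT` spelling are assumptions for illustration.

```bash
#!/bin/sh
# Hypothetical illustration only -- assumes a service named "redis-master".
# Kubernetes-specific variables:
echo "kubernetes-style:   ${REDIS_MASTER_SERVICE_HOST}:${REDIS_MASTER_SERVICE_PORT}"
# Docker-links-compatible variables for the same endpoint (port 6379 assumed):
echo "docker-links-style: ${REDIS_MASTER_PORT_6379_TCP_ADDR}:${REDIS_MASTER_PORT_6379_TCP_PORT}"
```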

docs/design/secrets.md (+3, -3)

@@ -72,7 +72,7 @@ service would also consume the secrets associated with the MySQL service.

### Use-Case: Secrets associated with service accounts

-[Service Accounts](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/service_accounts.md) are proposed as a
+[Service Accounts](http://docs.k8s.io/design/service_accounts.md) are proposed as a
mechanism to decouple capabilities and security contexts from individual human users. A
`ServiceAccount` contains references to some number of secrets. A `Pod` can specify that it is
associated with a `ServiceAccount`. Secrets should have a `Type` field to allow the Kubelet and

@@ -236,7 +236,7 @@ memory overcommit on the node.

#### Secret data on the node: isolation

-Every pod will have a [security context](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/security_context.md).
+Every pod will have a [security context](http://docs.k8s.io/design/security_context.md).
Secret data on the node should be isolated according to the security context of the container. The
Kubelet volume plugin API will be changed so that a volume plugin receives the security context of
a volume along with the volume spec. This will allow volume plugins to implement setting the

@@ -248,7 +248,7 @@ Several proposals / upstream patches are notable as background for this proposal

1. [Docker vault proposal](https://github.com/docker/docker/issues/10310)
2. [Specification for image/container standardization based on volumes](https://github.com/docker/docker/issues/9277)
-3. [Kubernetes service account proposal](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/service_accounts.md)
+3. [Kubernetes service account proposal](http://docs.k8s.io/design/service_accounts.md)
4. [Secrets proposal for docker (1)](https://github.com/docker/docker/pull/6075)
5. [Secrets proposal for docker (2)](https://github.com/docker/docker/pull/6697)
docs/design/security.md (+6, -6)

@@ -63,14 +63,14 @@ Automated process users fall into the following categories:
A pod runs in a *security context* under a *service account* that is defined by an administrator or project administrator, and the *secrets* a pod has access to is limited by that *service account*.


-1. The API should authenticate and authorize user actions [authn and authz](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/access.md)
+1. The API should authenticate and authorize user actions [authn and authz](http://docs.k8s.io/design/access.md)
2. All infrastructure components (kubelets, kube-proxies, controllers, scheduler) should have an infrastructure user that they can authenticate with and be authorized to perform only the functions they require against the API.
3. Most infrastructure components should use the API as a way of exchanging data and changing the system, and only the API should have access to the underlying data store (etcd)
-4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/service_accounts.md)
+4. When containers run on the cluster and need to talk to other containers or the API server, they should be identified and authorized clearly as an autonomous process via a [service account](http://docs.k8s.io/design/service_accounts.md)
1. If the user who started a long-lived process is removed from access to the cluster, the process should be able to continue without interruption
2. If the user who started processes are removed from the cluster, administrators may wish to terminate their processes in bulk
3. When containers run with a service account, the user that created / triggered the service account behavior must be associated with the container's action
-5. When container processes run on the cluster, they should run in a [security context](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/security_context.md) that isolates those processes via Linux user security, user namespaces, and permissions.
+5. When container processes run on the cluster, they should run in a [security context](http://docs.k8s.io/design/security_context.md) that isolates those processes via Linux user security, user namespaces, and permissions.
1. Administrators should be able to configure the cluster to automatically confine all container processes as a non-root, randomly assigned UID
2. Administrators should be able to ensure that container processes within the same namespace are all assigned the same unix user UID
3. Administrators should be able to limit which developers and project administrators have access to higher privilege actions

@@ -79,7 +79,7 @@ A pod runs in a *security context* under a *service account* that is defined by
6. Developers may need to ensure their images work within higher security requirements specified by administrators
7. When available, Linux kernel user namespaces can be used to ensure 5.2 and 5.4 are met.
8. When application developers want to share filesytem data via distributed filesystems, the Unix user ids on those filesystems must be consistent across different container processes
-6. Developers should be able to define [secrets](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/secrets.md) that are automatically added to the containers when pods are run
+6. Developers should be able to define [secrets](http://docs.k8s.io/design/secrets.md) that are automatically added to the containers when pods are run
1. Secrets are files injected into the container whose values should not be displayed within a pod. Examples:
1. An SSH private key for git cloning remote data
2. A client certificate for accessing a remote system

@@ -93,11 +93,11 @@ A pod runs in a *security context* under a *service account* that is defined by

### Related design discussion

-* Authorization and authentication https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/access.md
+* Authorization and authentication http://docs.k8s.io/design/access.md
* Secret distribution via files https://github.com/GoogleCloudPlatform/kubernetes/pull/2030
* Docker secrets https://github.com/docker/docker/pull/6697
* Docker vault https://github.com/docker/docker/issues/10310
-* Service Accounts: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/service_accounts.md
+* Service Accounts: http://docs.k8s.io/design/service_accounts.md
* Secret volumes https://github.com/GoogleCloudPlatform/kubernetes/4126

## Specific Design Points

docs/getting-started-guides/centos/centos_manual_config.md (+1, -1)

@@ -3,7 +3,7 @@

This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE minion working. Multiple minions requires a functional [networking configuration](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE minion working. Multiple minions requires a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.

The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the minion and run kubelet, proxy, cadvisor and docker.
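
Editor's aside (not part of the commit): a hypothetical sketch of how the systemd-managed services mentioned in the hunk above would typically be enabled and started on the master host. The unit names are assumed to match the service names listed in the guide.

```bash
# Hypothetical illustration only -- assumes the packages install systemd units
# named after the services the guide lists for the master host.
for svc in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  sudo systemctl enable "$svc"
  sudo systemctl start "$svc"
done
```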

docs/getting-started-guides/cloudstack.md (+1, -1)

@@ -10,7 +10,7 @@ There are currently two deployment techniques.
This uses [libcloud](http://libcloud.apache.org) to launch CoreOS instances and pass the appropriate cloud-config setup using userdata. Several manual steps are required. This is obsoleted by the Ansible playbook detailed below.

* [Ansible playbook](https://github.com/runseb/ansible-kubernetes).
-This is completely automated, a single playbook deploys Kubernetes based on the coreOS [instructions](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/coreos_multinode_cluster.md).
+This is completely automated, a single playbook deploys Kubernetes based on the coreOS [instructions](http://docs.k8s.io/getting-started-guides/coreos/coreos_multinode_cluster.md).

#Ansible playbook

docs/getting-started-guides/coreos/bare_metal_offline.md (+1, -1)

@@ -195,7 +195,7 @@ Now for the good stuff!
## Cloud Configs
The following config files are tailored for the OFFLINE version of a Kubernetes deployment.

-These are based on the work found here: [master.yml](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/cloud-configs/master.yaml), [node.yml](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/cloud-configs/node.yaml)
+These are based on the work found here: [master.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/master.yaml), [node.yml](http://docs.k8s.io/getting-started-guides/coreos/cloud-configs/node.yaml)


### master.yml

docs/getting-started-guides/docker.md (+1, -1)

@@ -15,7 +15,7 @@ docker run --net=host -d kubernetes/etcd:2.0.5.1 /usr/local/bin/etcd --addr=127.
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.15.0 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
```

-This actually runs the kubelet, which in turn runs a [pod](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md) that contains the other master components.
+This actually runs the kubelet, which in turn runs a [pod](http://docs.k8s.io/pods.md) that contains the other master components.

### Step Three: Run the service proxy
*Note, this could be combined with master above, but it requires --privileged for iptables manipulation*
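
Editor's aside (not part of the commit): once the kubelet shown in the hunk above is running, a quick sanity check might look like the following. The port matches the `--api_servers=http://localhost:8080` flag in the quoted command; the exact container names the kubelet launches are not assumed here.

```bash
# Hypothetical check only -- not a step from the guide itself.
docker ps                          # the kubelet should have launched the master components in containers
curl -s http://127.0.0.1:8080/api  # the apiserver answers on the port passed via --api_servers
```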

docs/getting-started-guides/fedora/fedora_manual_config.md (+1, -1)

@@ -2,7 +2,7 @@

This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](http://docs.k8s.io/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.

The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.

docs/getting-started-guides/locally.md (+2, -2)

@@ -67,8 +67,8 @@ cluster/kubectl.sh get replicationControllers

### Running a user defined pod

-Note the difference between a [container](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/containers.md)
-and a [pod](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
+Note the difference between a [container](http://docs.k8s.io/containers.md)
+and a [pod](http://docs.k8s.io/pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
However you can't view the nginx start page on localhost. To verify that nginx is running you need to run `curl` within the docker container (try `docker exec`).

You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
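
Editor's aside (not part of the commit): the `docker exec` check suggested in the hunk above could look roughly like this; it assumes the nginx container shows up in `docker ps` and that the image ships `curl` (swap in `wget` or another client if it does not).

```bash
# Hypothetical verification only -- container naming and available tools vary.
CONTAINER_ID=$(docker ps | grep nginx | awk '{print $1}' | head -n 1)
docker exec "$CONTAINER_ID" curl -s http://localhost:80 | head -n 5
```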

docs/getting-started-guides/ubuntu_multinodes_cluster.md (+1, -1)

@@ -1,6 +1,6 @@
# Kubernetes deployed on multiple ubuntu nodes

-This document describes how to deploy kubernetes on multiple ubuntu nodes, including 1 master node and 3 minion nodes, and people uses this approach can scale to **any number of minion nodes** by changing some settings with ease. Although there exists saltstack based ubuntu k8s installation , it may be tedious and hard for a guy that knows little about saltstack but want to build a really distributed k8s cluster. This approach is inspired by [k8s deploy on a single node](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/ubuntu_single_node.md).
+This document describes how to deploy kubernetes on multiple ubuntu nodes, including 1 master node and 3 minion nodes, and people uses this approach can scale to **any number of minion nodes** by changing some settings with ease. Although there exists saltstack based ubuntu k8s installation , it may be tedious and hard for a guy that knows little about saltstack but want to build a really distributed k8s cluster. This approach is inspired by [k8s deploy on a single node](http://docs.k8s.io/getting-started-guides/ubuntu_single_node.md).

[Cloud team from ZJU](https://github.com/ZJU-SEL) will keep updating this work.

docs/getting-started-guides/ubuntu_single_node.md (+1, -1)

@@ -7,7 +7,7 @@ This document describes how to get started to run kubernetes services on a singl
3. Customizing ubuntu launch

### 1. Make kubernetes and etcd binaries
-Either build or download the latest [kubernetes binaries] (https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/binary_release.md)
+Either build or download the latest [kubernetes binaries] (http://docs.k8s.io/getting-started-guides/binary_release.md)

Copy the kube binaries into `/opt/bin` or a path of your choice
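
Editor's aside (not part of the commit): a hypothetical version of the copy step mentioned in the hunk above. The source directory is an assumption; use wherever your build or extracted release placed the binaries.

```bash
# Hypothetical sketch only -- adjust the source path to your build/release output.
sudo mkdir -p /opt/bin
cd kubernetes/server/bin   # assumed location inside an extracted server release
sudo cp kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubectl /opt/bin/
```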
