author     Slava Semushin <vsemushi@redhat.com>  2017-09-21 16:25:49 +0200
committer  Slava Semushin <vsemushi@redhat.com>  2017-09-21 16:25:49 +0200
commit     ca6e45b508cd1031905ad433803de12bfe72b8b6 (patch)
tree       5abd87a4fa7e99ec42631eea435349bad93d7ef1
parent     c1eab08d837d27fe5aabf083417745737af7c1f6 (diff)
Fix broken links after moving proposals to subdirs.
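The commit only records the resulting path changes; as a rough sketch (not the author's actual tooling), a bulk rewrite like this could be scripted once the old-to-new directory mapping is known. The mapping entries below are just a few examples taken from the paths touched in this commit.

```python
#!/usr/bin/env python3
"""Sketch: rewrite Markdown links to design proposals that moved into subdirectories.

Illustrative only: the mapping lists a few of the moves reflected in this commit,
not the complete set, and this is not necessarily how the author performed the fix.
"""
from pathlib import Path

MOVED = {
    "design-proposals/resource-qos.md": "design-proposals/node/resource-qos.md",
    "design-proposals/kubelet-authorizer.md": "design-proposals/node/kubelet-authorizer.md",
    "design-proposals/versioning.md": "design-proposals/release/versioning.md",
}

# Rewrite every Markdown file under contributors/ that still points at an old location.
for md_file in Path("contributors").rglob("*.md"):
    text = md_file.read_text(encoding="utf-8")
    updated = text
    for old, new in MOVED.items():
        updated = updated.replace(old, new)
    if updated != text:
        md_file.write_text(updated, encoding="utf-8")
```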
-rw-r--r--  contributors/design-proposals/api-machinery/aggregated-api-servers.md      2
-rw-r--r--  contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md     6
-rw-r--r--  contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md  2
-rw-r--r--  contributors/design-proposals/federation/federated-placement-policy.md    4
-rw-r--r--  contributors/design-proposals/instrumentation/core-metrics-pipeline.md    2
-rw-r--r--  contributors/design-proposals/instrumentation/metrics-server.md           2
-rw-r--r--  contributors/design-proposals/node/cpu-manager.md                         4
-rw-r--r--  contributors/design-proposals/node/kubelet-authorizer.md                  2
-rw-r--r--  contributors/design-proposals/node/kubelet-eviction.md                    2
-rw-r--r--  contributors/design-proposals/node/resource-qos.md                        2
-rw-r--r--  contributors/design-proposals/scheduling/pod-preemption.md                8
-rw-r--r--  contributors/design-proposals/scheduling/pod-priority-api.md              2
-rw-r--r--  contributors/devel/api-conventions.md                                     4
-rw-r--r--  contributors/devel/controllers.md                                         2
-rw-r--r--  contributors/devel/cri-container-stats.md                                 2
-rw-r--r--  contributors/devel/flexvolume.md                                          2
-rw-r--r--  contributors/devel/generating-clientset.md                                2
-rw-r--r--  contributors/devel/mesos-style.md                                         2
-rw-r--r--  contributors/devel/release/README.md                                      2
-rw-r--r--  contributors/devel/scheduler_algorithm.md                                 2
-rw-r--r--  contributors/devel/strategic-merge-patch.md                               4
21 files changed, 30 insertions, 30 deletions
diff --git a/contributors/design-proposals/api-machinery/aggregated-api-servers.md b/contributors/design-proposals/api-machinery/aggregated-api-servers.md
index db939f1d..ae3f59e0 100644
--- a/contributors/design-proposals/api-machinery/aggregated-api-servers.md
+++ b/contributors/design-proposals/api-machinery/aggregated-api-servers.md
@@ -140,7 +140,7 @@ complete user information, including user, groups, and "extra" for backing API s
Each API server is responsible for storing their resources. They can have their
own etcd or can use kubernetes server's etcd using [third party
-resources](../design-proposals/extending-api.md#adding-custom-resources-to-the-kubernetes-api-server).
+resources](../design-proposals/api-machinery/extending-api.md#adding-custom-resources-to-the-kubernetes-api-server).
### Health check
diff --git a/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md b/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md
index 5eef94e3..7e0a4070 100644
--- a/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md
+++ b/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md
@@ -221,7 +221,7 @@ Only one of `--discovery-file` or `--discovery-token` can be set. If more than
Our documentations (and output from `kubeadm`) should stress to users that when the token is configured for authentication and used for TLS bootstrap is a pretty powerful credential due to that any person with access to it can claim to be a node.
The highest risk regarding being able to claim a credential in the `system:nodes` group is that it can read all Secrets in the cluster, which may compromise the cluster.
-The [Node Authorizer](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/kubelet-authorizer.md) locks this down a bit, but an untrusted person could still try to
+The [Node Authorizer](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/kubelet-authorizer.md) locks this down a bit, but an untrusted person could still try to
guess a node's name, get such a credential, guess the name of the Secret and be able to get that.
Users should set a TTL on the token to limit the above mentioned risk. `kubeadm` sets a 24h TTL on the node bootstrap token by default in v1.8.
@@ -239,8 +239,8 @@ The binding of the `system:bootstrappers` (or similar) group to the ability to s
## Revision history
- - Initial proposal ([@jbeda](https://github.com/jbeda)): [link](https://github.com/kubernetes/community/blob/cb9f198a0763e0a7540cdcc9db912a403ab1acab/contributors/design-proposals/bootstrap-discovery.md)
- - v1.6 updates ([@jbeda](https://github.com/jbeda)): [link](https://github.com/kubernetes/community/blob/d8ce9e91b0099795318bb06c13f00d9dad41ac26/contributors/design-proposals/bootstrap-discovery.md)
+ - Initial proposal ([@jbeda](https://github.com/jbeda)): [link](https://github.com/kubernetes/community/blob/cb9f198a0763e0a7540cdcc9db912a403ab1acab/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md)
+ - v1.6 updates ([@jbeda](https://github.com/jbeda)): [link](https://github.com/kubernetes/community/blob/d8ce9e91b0099795318bb06c13f00d9dad41ac26/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md)
- v1.8 updates ([@luxas](https://github.com/luxas))
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
diff --git a/contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md b/contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
index 26220180..c570b2f0 100644
--- a/contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
+++ b/contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
@@ -100,4 +100,4 @@ Kubernetes self-hosted is working today. Bootkube is an implementation of the "t
- [Health check endpoints for components don't work correctly](https://github.com/kubernetes-incubator/bootkube/issues/64#issuecomment-228144345)
- [kubeadm does do self-hosted, but isn't tested yet](https://github.com/kubernetes/kubernetes/pull/40075)
-- The Kubernetes [versioning policy](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/versioning.md) allows for version skew of kubelet and control plane but not skew between control plane components themselves. We must add testing and validation to Kubernetes that this skew works. Otherwise the work to make Kubernetes HA is rather pointless if it can't be upgraded in an HA manner as well.
+- The Kubernetes [versioning policy](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md) allows for version skew of kubelet and control plane but not skew between control plane components themselves. We must add testing and validation to Kubernetes that this skew works. Otherwise the work to make Kubernetes HA is rather pointless if it can't be upgraded in an HA manner as well.
diff --git a/contributors/design-proposals/federation/federated-placement-policy.md b/contributors/design-proposals/federation/federated-placement-policy.md
index 8b3d4b49..e1292bd9 100644
--- a/contributors/design-proposals/federation/federated-placement-policy.md
+++ b/contributors/design-proposals/federation/federated-placement-policy.md
@@ -28,7 +28,7 @@ A simple example of a placement policy is
> compliance.
The [Kubernetes Cluster
-Federation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/federation.md#policy-engine-and-migrationreplication-controllers)
+Federation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/federation/federation.md#policy-engine-and-migrationreplication-controllers)
design proposal includes a pluggable policy engine component that decides how
applications/resources are placed across federated clusters.
@@ -283,7 +283,7 @@ When the remediator component (in the sidecar) receives the notification it
sends a PATCH request to the federation-apiserver to update the affected
resource. This way, the actual rebalancing of ReplicaSets is still handled by
the [Rescheduling
-Algorithm](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/federated-replicasets.md)
+Algorithm](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/federation/federated-replicasets.md)
in the Federated ReplicaSet controller.
The remediator component must be deployed with a kubeconfig for the
diff --git a/contributors/design-proposals/instrumentation/core-metrics-pipeline.md b/contributors/design-proposals/instrumentation/core-metrics-pipeline.md
index d19fe781..39307b2a 100644
--- a/contributors/design-proposals/instrumentation/core-metrics-pipeline.md
+++ b/contributors/design-proposals/instrumentation/core-metrics-pipeline.md
@@ -84,7 +84,7 @@ Metrics requirements for "First Class Resource Isolation and Utilization Feature
- Kubelet
- Node-level usage metrics for Filesystems, CPU, and Memory
- Pod-level usage metrics for Filesystems and Memory
- - Metrics Server (outlined in [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/monitoring_architecture.md)), which exposes the [Resource Metrics API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-metrics-api.md) to the following system components:
+ - Metrics Server (outlined in [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/monitoring_architecture.md)), which exposes the [Resource Metrics API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md) to the following system components:
- Scheduler
- Node-level usage metrics for Filesystems, CPU, and Memory
- Pod-level usage metrics for Filesystems, CPU, and Memory
diff --git a/contributors/design-proposals/instrumentation/metrics-server.md b/contributors/design-proposals/instrumentation/metrics-server.md
index 80cefaf9..9ac5bb64 100644
--- a/contributors/design-proposals/instrumentation/metrics-server.md
+++ b/contributors/design-proposals/instrumentation/metrics-server.md
@@ -5,7 +5,7 @@ Resource Metrics API is an effort to provide a first-class Kubernetes API
(stable, versioned, discoverable, available through apiserver and with client support)
that serves resource usage metrics for pods and nodes. The use cases were discussed
and the API was proposed a while ago in
-[another proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-metrics-api.md).
+[another proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md).
This document describes the architecture and the design of the second part of this effort:
making the mentioned API available in the same way as the other Kubernetes APIs.
diff --git a/contributors/design-proposals/node/cpu-manager.md b/contributors/design-proposals/node/cpu-manager.md
index e102d2e2..4d6366d4 100644
--- a/contributors/design-proposals/node/cpu-manager.md
+++ b/contributors/design-proposals/node/cpu-manager.md
@@ -418,7 +418,7 @@ func (p *dynamicPolicy) RemoveContainer(s State, containerID string) error {
[cpuset-files]: http://man7.org/linux/man-pages/man7/cpuset.7.html#FILES
[ht]: http://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html
[hwloc]: https://www.open-mpi.org/projects/hwloc
-[node-allocatable]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node-allocatable.md#phase-2---enforce-allocatable-on-pods
+[node-allocatable]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/node-allocatable.md#phase-2---enforce-allocatable-on-pods
[procfs]: http://man7.org/linux/man-pages/man5/proc.5.html
-[qos]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-qos.md
+[qos]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md
[topo]: http://github.com/intelsdi-x/swan/tree/master/pkg/isolation/topo
diff --git a/contributors/design-proposals/node/kubelet-authorizer.md b/contributors/design-proposals/node/kubelet-authorizer.md
index f3c24417..065c8aa0 100644
--- a/contributors/design-proposals/node/kubelet-authorizer.md
+++ b/contributors/design-proposals/node/kubelet-authorizer.md
@@ -180,5 +180,5 @@ Future work could further limit a kubelet's API access:
Features that expand or modify the APIs or objects accessed by the kubelet will need to involve the node authorizer.
Known features in the design or development stages that might modify kubelet API access are:
* [Dynamic kubelet configuration](https://github.com/kubernetes/features/issues/281)
-* [Local storage management](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/local-storage-overview.md)
+* [Local storage management](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/local-storage-overview.md)
* [Bulk watch of secrets/configmaps](https://github.com/kubernetes/community/pull/443)
diff --git a/contributors/design-proposals/node/kubelet-eviction.md b/contributors/design-proposals/node/kubelet-eviction.md
index 1700babe..4dd78861 100644
--- a/contributors/design-proposals/node/kubelet-eviction.md
+++ b/contributors/design-proposals/node/kubelet-eviction.md
@@ -242,7 +242,7 @@ the `kubelet` will select a subsequent pod.
## Eviction Strategy
The `kubelet` will implement an eviction strategy oriented around
-[Priority](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/pod-priority-api.md)
+[Priority](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-api.md)
and pod usage relative to requests. It will target pods that are the lowest
Priority, and are the largest consumers of the starved resource relative to
their scheduling request.
diff --git a/contributors/design-proposals/node/resource-qos.md b/contributors/design-proposals/node/resource-qos.md
index 13ad0bd4..14057b03 100644
--- a/contributors/design-proposals/node/resource-qos.md
+++ b/contributors/design-proposals/node/resource-qos.md
@@ -20,7 +20,7 @@ Borg increased utilization by about 20% when it started allowing use of such non
## Requests and Limits
-For each resource, containers can specify a resource request and limit, `0 <= request <= `[`Node Allocatable`](../design-proposals/node-allocatable.md) & `request <= limit <= Infinity`.
+For each resource, containers can specify a resource request and limit, `0 <= request <= `[`Node Allocatable`](../design-proposals/node/node-allocatable.md) & `request <= limit <= Infinity`.
If a pod is successfully scheduled, the container is guaranteed the amount of resources requested.
Scheduling is based on `requests` and not `limits`.
The pods and its containers will not be allowed to exceed the specified limit.
diff --git a/contributors/design-proposals/scheduling/pod-preemption.md b/contributors/design-proposals/scheduling/pod-preemption.md
index 4fba2bf1..3e5cd75f 100644
--- a/contributors/design-proposals/scheduling/pod-preemption.md
+++ b/contributors/design-proposals/scheduling/pod-preemption.md
@@ -42,7 +42,7 @@ _Author: @bsalamat_
# Background
-Running various types of workloads with different priorities is a common practice in medium and large clusters to achieve higher resource utilization. In such scenarios, the amount of workload can be larger than what the total resources of the cluster can handle. If so, the cluster chooses the most important workloads and runs them. The importance of workloads are specified by a combination of [priority](https://github.com/bsalamat/community/blob/564ebff843532faf5dcb06a7e50b0db5c5b501cf/contributors/design-proposals/pod-priority-api.md), QoS, or other cluster-specific metrics. The potential to have more work than what cluster resources can handle is called "overcommitment". Overcommitment is very common in on-prem clusters where the number of nodes is fixed, but it can similarly happen in cloud as cloud customers may choose to run their clusters overcommitted/overloaded at times in order to save money. For example, a cloud customer may choose to run at most 100 nodes, knowing that all of their critical workloads fit on 100 nodes and if there is more work, they won't be critical and can wait until cluster load decreases.
+Running various types of workloads with different priorities is a common practice in medium and large clusters to achieve higher resource utilization. In such scenarios, the amount of workload can be larger than what the total resources of the cluster can handle. If so, the cluster chooses the most important workloads and runs them. The importance of workloads are specified by a combination of [priority](https://github.com/bsalamat/community/blob/564ebff843532faf5dcb06a7e50b0db5c5b501cf/contributors/design-proposals/scheduling/pod-priority-api.md), QoS, or other cluster-specific metrics. The potential to have more work than what cluster resources can handle is called "overcommitment". Overcommitment is very common in on-prem clusters where the number of nodes is fixed, but it can similarly happen in cloud as cloud customers may choose to run their clusters overcommitted/overloaded at times in order to save money. For example, a cloud customer may choose to run at most 100 nodes, knowing that all of their critical workloads fit on 100 nodes and if there is more work, they won't be critical and can wait until cluster load decreases.
## Terminology
@@ -71,11 +71,11 @@ The race condition will still exist if we have multiple schedulers. More on this
## Preemption order
-When scheduling a pending pod, scheduler tries to place the pod on a node that does not require preemption. If there is no such a node, scheduler may favor a node where the number and/or priority of victims (preempted pods) is smallest. After choosing the node, scheduler considers the lowest [priority](https://github.com/bsalamat/community/blob/564ebff843532faf5dcb06a7e50b0db5c5b501cf/contributors/design-proposals/pod-priority-api.md) pods for preemption first. Scheduler starts from the lowest priority and considers enough pods that should be preempted to allow the pending pod to schedule. Scheduler only considers pods that have lower priority than the pending pod.
+When scheduling a pending pod, scheduler tries to place the pod on a node that does not require preemption. If there is no such a node, scheduler may favor a node where the number and/or priority of victims (preempted pods) is smallest. After choosing the node, scheduler considers the lowest [priority](https://github.com/bsalamat/community/blob/564ebff843532faf5dcb06a7e50b0db5c5b501cf/contributors/design-proposals/scheduling/pod-priority-api.md) pods for preemption first. Scheduler starts from the lowest priority and considers enough pods that should be preempted to allow the pending pod to schedule. Scheduler only considers pods that have lower priority than the pending pod.
#### Important notes
-- When ordering the pods from lowest to highest priority for considering which pod(s) to preempt, among pods with equal priority the pods are ordered by their [QoS class](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-qos.md#qos-classes): Best Effort, Burstable, Guaranteed.
+- When ordering the pods from lowest to highest priority for considering which pod(s) to preempt, among pods with equal priority the pods are ordered by their [QoS class](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md#qos-classes): Best Effort, Burstable, Guaranteed.
- Scheduler respects pods' disruption budget when considering them for preemption.
- Scheduler will try to minimize the number of preempted pods. As a result, it may preempt a pod while leaving lower priority pods running if preemption of those lower priority pods is not enough to schedule the pending pod while preemption of the higher priority pod(s) is enough to schedule the pending pod. For example, if node capacity is 10, and pending pod is priority 10 and requires 5 units of resource, and the running pods are {priority 0 request 3, priority 1 request 1, priority 2 request 5, priority 3 request 1}, scheduler will preempt the priority 2 pod only and leaves priority 1 and priority 0 running.
- Scheduler does not have the knowledge of resource usage of pods. It makes scheduling decisions based on the requested resources ("requests") of the pods and when it considers a pod for preemption, it assumes the "requests" to be freed on the node.
@@ -183,6 +183,6 @@ To solve the problem, the user might try running his web server as Guaranteed, b
# References
-- [Controlled Rescheduling in Kubernetes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/rescheduling.md)
+- [Controlled Rescheduling in Kubernetes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/rescheduling.md)
- [Resource sharing architecture for batch and serving workloads in Kubernetes](https://docs.google.com/document/d/1-H2hnZap7gQivcSU-9j4ZrJ8wE_WwcfOkTeAGjzUyLA)
- [Design proposal for adding priority to Kubernetes API](https://github.com/kubernetes/community/pull/604/files)
\ No newline at end of file
diff --git a/contributors/design-proposals/scheduling/pod-priority-api.md b/contributors/design-proposals/scheduling/pod-priority-api.md
index 914d229c..785a9d62 100644
--- a/contributors/design-proposals/scheduling/pod-priority-api.md
+++ b/contributors/design-proposals/scheduling/pod-priority-api.md
@@ -233,7 +233,7 @@ absolutely needed. Changing priority classes has the following disadvantages:
### Priority and QoS classes
Kubernetes has [three QoS
-classes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-qos.md#qos-classes)
+classes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md#qos-classes)
which are derived from request and limit of pods. Priority is introduced as an
independent concept; meaning that any QoS class may have any valid priority.
When a node is out of resources and pods needs to be preempted, we give
diff --git a/contributors/devel/api-conventions.md b/contributors/devel/api-conventions.md
index e2c7bab6..a29fd9f0 100644
--- a/contributors/devel/api-conventions.md
+++ b/contributors/devel/api-conventions.md
@@ -350,7 +350,7 @@ Some resources in the v1 API contain fields called **`phase`**, and associated
`message`, `reason`, and other status fields. The pattern of using `phase` is
deprecated. Newer API types should use conditions instead. Phase was essentially
a state-machine enumeration field, that contradicted
-[system-design principles](../design-proposals/principles.md#control-logic) and hampered
+[system-design principles](../design-proposals/architecture/principles.md#control-logic) and hampered
evolution, since [adding new enum values breaks backward
compatibility](api_changes.md). Rather than encouraging clients to infer
implicit properties from phases, we intend to explicitly expose the conditions
@@ -374,7 +374,7 @@ only provided with reasonable effort, and is not guaranteed to not be lost.
Status information that may be large (especially proportional in size to
collections of other resources, such as lists of references to other objects --
see below) and/or rapidly changing, such as
-[resource usage](../design-proposals/resources.md#usage-data), should be put into separate
+[resource usage](../design-proposals/scheduling/resources.md#usage-data), should be put into separate
objects, with possibly a reference from the original object. This helps to
ensure that GETs and watch remain reasonably efficient for the majority of
clients, which may not need that data.
diff --git a/contributors/devel/controllers.md b/contributors/devel/controllers.md
index b8addd94..f0df750e 100644
--- a/contributors/devel/controllers.md
+++ b/contributors/devel/controllers.md
@@ -62,7 +62,7 @@ When you're writing controllers, there are few guidelines that will help make su
This lets clients know that the controller has processed a resource. Make sure that your controller is the main controller that is responsible for that resource, otherwise if you need to communicate observation via your own controller, you will need to create a different kind of ObservedGeneration in the Status of the resource.
-1. Consider using owner references for resources that result in the creation of other resources (eg. a ReplicaSet results in creating Pods). Thus you ensure that children resources are going to be garbage-collected once a resource managed by your controller is deleted. For more information on owner references, read more [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/controller-ref.md).
+1. Consider using owner references for resources that result in the creation of other resources (eg. a ReplicaSet results in creating Pods). Thus you ensure that children resources are going to be garbage-collected once a resource managed by your controller is deleted. For more information on owner references, read more [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/controller-ref.md).
Pay special attention in the way you are doing adoption. You shouldn't adopt children for a resource when either the parent or the children are marked for deletion. If you are using a cache for your resources, you will likely need to bypass it with a direct API read in case you observe that an owner reference has been updated for one of the children. Thus, you ensure your controller is not racing with the garbage collector.
diff --git a/contributors/devel/cri-container-stats.md b/contributors/devel/cri-container-stats.md
index b44cfdbf..bfeaf31a 100644
--- a/contributors/devel/cri-container-stats.md
+++ b/contributors/devel/cri-container-stats.md
@@ -23,7 +23,7 @@ progression to augment CRI to serve container metrics to eliminate a separate
integration point.
*See the [core metrics design
-proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/core-metrics-pipeline.md)
+proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/core-metrics-pipeline.md)
for more information on metrics exposed by Kubelet, and [monitoring
architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/monitoring_architecture.md)
for the evolving monitoring pipeline in Kubernetes.*
diff --git a/contributors/devel/flexvolume.md b/contributors/devel/flexvolume.md
index 1f796e9f..d967b1fe 100644
--- a/contributors/devel/flexvolume.md
+++ b/contributors/devel/flexvolume.md
@@ -14,7 +14,7 @@ The vendor and driver names must match flexVolume.driver in the volume spec, wit
## Dynamic Plugin Discovery
Beginning in v1.8, Flexvolume supports the ability to detect drivers on the fly. Instead of requiring drivers to exist at system initialization time or having to restart kubelet or controller manager, drivers can be installed, upgraded/downgraded, and uninstalled while the system is running.
-For more information, please refer to the [design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/flexvolume-deployment.md).
+For more information, please refer to the [design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md).
## Automated Plugin Installation/Upgrade
One possible way to install and upgrade your Flexvolume drivers is by using a DaemonSet. See [Recommended Driver Deployment Method](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md#recommended-driver-deployment-method) for details.
diff --git a/contributors/devel/generating-clientset.md b/contributors/devel/generating-clientset.md
index 7b519214..9b8a2006 100644
--- a/contributors/devel/generating-clientset.md
+++ b/contributors/devel/generating-clientset.md
@@ -1,6 +1,6 @@
# Generation and release cycle of clientset
-Client-gen is an automatic tool that generates [clientset](../design-proposals/client-package-structure.md#high-level-client-sets) based on API types. This doc introduces the use the client-gen, and the release cycle of the generated clientsets.
+Client-gen is an automatic tool that generates [clientset](../design-proposals/api-machinery/client-package-structure.md#high-level-client-sets) based on API types. This doc introduces the use the client-gen, and the release cycle of the generated clientsets.
## Using client-gen
diff --git a/contributors/devel/mesos-style.md b/contributors/devel/mesos-style.md
index 53ee139e..9d2f38c0 100644
--- a/contributors/devel/mesos-style.md
+++ b/contributors/devel/mesos-style.md
@@ -61,7 +61,7 @@ machine manages the collection during their lifetimes
Out-of-the-box Kubernetes has *workload-specific* abstractions (ReplicaSet, Job,
DaemonSet, etc.) and corresponding controllers, and in the future may have
-[workload-specific schedulers](../design-proposals/multiple-schedulers.md),
+[workload-specific schedulers](../design-proposals/scheduling/multiple-schedulers.md),
e.g. different schedulers for long-running services vs. short-running batch. But
these abstractions, controllers, and schedulers are not *application-specific*.
diff --git a/contributors/devel/release/README.md b/contributors/devel/release/README.md
index a3793af7..4752d6b6 100644
--- a/contributors/devel/release/README.md
+++ b/contributors/devel/release/README.md
@@ -6,7 +6,7 @@
This document captures the requirements and duties of the individuals responsible for Kubernetes releases.
-As documented in the [Kubernetes Versioning doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/versioning.md), there are 3 types of Kubernetes releases:
+As documented in the [Kubernetes Versioning doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md), there are 3 types of Kubernetes releases:
* Major (x.0.0)
* Minor (x.x.0)
* Patch (x.x.x)
diff --git a/contributors/devel/scheduler_algorithm.md b/contributors/devel/scheduler_algorithm.md
index a115a982..dc98fe5e 100644
--- a/contributors/devel/scheduler_algorithm.md
+++ b/contributors/devel/scheduler_algorithm.md
@@ -8,7 +8,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c
- `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. Currently supported volumes are: AWS EBS, GCE PD, ISCSI and Ceph RBD. Only Persistent Volume Claims for those supported types are checked. Persistent Volumes added directly to pods are not evaluated and are not constrained by this policy.
- `NoVolumeZoneConflict`: Evaluate if the volumes a pod requests are available on the node, given the Zone restrictions.
-- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../design-proposals/resource-qos.md).
+- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../design-proposals/node/resource-qos.md).
- `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node.
- `HostName`: Filter out all nodes except the one specified in the PodSpec's NodeName field.
- `MatchNodeSelector`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field and, as of Kubernetes v1.2, also match the `scheduler.alpha.kubernetes.io/affinity` pod annotation if present. See [here](https://kubernetes.io/docs/user-guide/node-selection/) for more details on both.
diff --git a/contributors/devel/strategic-merge-patch.md b/contributors/devel/strategic-merge-patch.md
index b79c46ba..82a2fd48 100644
--- a/contributors/devel/strategic-merge-patch.md
+++ b/contributors/devel/strategic-merge-patch.md
@@ -216,7 +216,7 @@ item that has duplicates will delete all matching items.
`setElementOrder` directive provides a way to specify the order of a list.
The relative order specified in this directive will be retained.
-Please refer to [proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/preserve-order-in-strategic-merge-patch.md) for more information.
+Please refer to [proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cli/preserve-order-in-strategic-merge-patch.md) for more information.
### Syntax
@@ -295,7 +295,7 @@ containers:
`retainKeys` directive provides a mechanism for union types to clear mutual exclusive fields.
When this directive is present in the patch, all the fields not in this directive will be cleared.
-Please refer to [proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md) for more information.
+Please refer to [proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md) for more information.
### Syntax