author	k8s-merge-robot <k8s.production.user@gmail.com>	2016-04-28 06:49:46 -0700
committer	k8s-merge-robot <k8s.production.user@gmail.com>	2016-04-28 06:49:46 -0700
commit	3e6141508af8f26d9c17c011e6c09b9e2c10ea52 (patch)
tree	b6a6f6b7d575e59661ed9ea33912fa830175dff2 /namespaces.md
parent	83691fd455723f28510f4b7ca13e7a272d61b36a (diff)
parent	d2ab00b82036d3396df1e51f5a43ff4755f8f915 (diff)
Merge pull request #24231 from mikebrow/design-docs-80col-updates
Automatic merge from submit-queue. Cleans up line wrap at 80 cols and some minor editing issues. Addresses line wrap issue #1488. Also cleans up other minor editing issues in the docs/design/* tree, such as spelling errors. Signed-off-by: mikebrow <brownwm@us.ibm.com>
Diffstat (limited to 'namespaces.md')
-rw-r--r--	namespaces.md	180
1 file changed, 105 insertions, 75 deletions
diff --git a/namespaces.md b/namespaces.md
index e2a532b2..d63015bc 100644
--- a/namespaces.md
+++ b/namespaces.md
@@ -41,9 +41,11 @@ a logically named group.
## Motivation
-A single cluster should be able to satisfy the needs of multiple user communities.
+A single cluster should be able to satisfy the needs of multiple user
+communities.
-Each user community wants to be able to work in isolation from other communities.
+Each user community wants to be able to work in isolation from other
+communities.
Each user community has its own:
@@ -61,13 +63,16 @@ The Namespace provides a unique scope for:
## Use cases
-1. As a cluster operator, I want to support multiple user communities on a single cluster.
-2. As a cluster operator, I want to delegate authority to partitions of the cluster to trusted users
- in those communities.
-3. As a cluster operator, I want to limit the amount of resources each community can consume in order
- to limit the impact to other communities using the cluster.
-4. As a cluster user, I want to interact with resources that are pertinent to my user community in
- isolation of what other user communities are doing on the cluster.
+1. As a cluster operator, I want to support multiple user communities on a
+single cluster.
+2. As a cluster operator, I want to delegate authority to partitions of the
+cluster to trusted users in those communities.
+3. As a cluster operator, I want to limit the amount of resources each
+community can consume in order to limit the impact on other communities using
+the cluster.
+4. As a cluster user, I want to interact with resources that are pertinent to
+my user community in isolation from what other user communities are doing on
+the cluster.
## Design
@@ -91,20 +96,26 @@ A *Namespace* must exist prior to associating content with it.
A *Namespace* must not be deleted if there is content associated with it.
-To associate a resource with a *Namespace* the following conditions must be satisfied:
+To associate a resource with a *Namespace* the following conditions must be
+satisfied:
-1. The resource's *Kind* must be registered as having *RESTScopeNamespace* with the server
-2. The resource's *TypeMeta.Namespace* field must have a value that references an existing *Namespace*
+1. The resource's *Kind* must be registered as having *RESTScopeNamespace* with
+the server
+2. The resource's *TypeMeta.Namespace* field must have a value that references
+an existing *Namespace*
-The *Name* of a resource associated with a *Namespace* is unique to that *Kind* in that *Namespace*.
+The *Name* of a resource associated with a *Namespace* is unique to that *Kind*
+in that *Namespace*.
-It is intended to be used in resource URLs; provided by clients at creation time, and encouraged to be
-human friendly; intended to facilitate idempotent creation, space-uniqueness of singleton objects,
-distinguish distinct entities, and reference particular entities across operations.
+It is intended to be used in resource URLs; it is provided by clients at
+creation time and encouraged to be human friendly; and it is intended to
+facilitate idempotent creation, space-uniqueness of singleton objects,
+distinguishing distinct entities, and referencing particular entities across
+operations.
### Authorization
-A *Namespace* provides an authorization scope for accessing content associated with the *Namespace*.
+A *Namespace* provides an authorization scope for accessing content associated
+with the *Namespace*.
See [Authorization plugins](../admin/authorization.md)
@@ -112,19 +123,21 @@ See [Authorization plugins](../admin/authorization.md)
A *Namespace* provides a scope to limit resource consumption.
-A *LimitRange* defines min/max constraints on the amount of resources a single entity can consume in
-a *Namespace*.
+A *LimitRange* defines min/max constraints on the amount of resources a single
+entity can consume in a *Namespace*.
See [Admission control: Limit Range](admission_control_limit_range.md)
-A *ResourceQuota* tracks aggregate usage of resources in the *Namespace* and allows cluster operators
-to define *Hard* resource usage limits that a *Namespace* may consume.
+A *ResourceQuota* tracks aggregate usage of resources in the *Namespace* and
+allows cluster operators to define *Hard* limits on the resources a
+*Namespace* may consume.
See [Admission control: Resource Quota](admission_control_resource_quota.md)
### Finalizers
-Upon creation of a *Namespace*, the creator may provide a list of *Finalizer* objects.
+Upon creation of a *Namespace*, the creator may provide a list of *Finalizer*
+objects.
```go
type FinalizerName string
@@ -143,13 +156,14 @@ type NamespaceSpec struct {
A *FinalizerName* is a qualified name.
-The API Server enforces that a *Namespace* can only be deleted from storage if and only if
-it's *Namespace.Spec.Finalizers* is empty.
+The API Server enforces that a *Namespace* can be deleted from storage only if
+its *Namespace.Spec.Finalizers* list is empty.
-A *finalize* operation is the only mechanism to modify the *Namespace.Spec.Finalizers* field post creation.
+A *finalize* operation is the only mechanism to modify the
+*Namespace.Spec.Finalizers* field post creation.
-Each *Namespace* created has *kubernetes* as an item in its list of initial *Namespace.Spec.Finalizers*
-set by default.
+Each *Namespace* created has *kubernetes* as an item in its list of initial
+*Namespace.Spec.Finalizers* set by default.
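The finalizer bookkeeping above can be sketched in Go. This is a minimal, hypothetical model (the `canDelete` helper and the trimmed-down `NamespaceSpec` are illustrative, not the real API types), showing the deletion invariant the API Server enforces:

```go
package main

import "fmt"

type FinalizerName string

// NamespaceSpec models only the field the deletion rule depends on:
// deletion is blocked until finalize operations have drained Finalizers.
type NamespaceSpec struct {
	Finalizers []FinalizerName
}

// canDelete reports whether the API Server would allow the namespace to be
// removed from storage: only when Spec.Finalizers is empty.
func canDelete(spec NamespaceSpec) bool {
	return len(spec.Finalizers) == 0
}

func main() {
	// Every new namespace starts with "kubernetes" in its finalizer list.
	spec := NamespaceSpec{Finalizers: []FinalizerName{"kubernetes"}}
	fmt.Println(canDelete(spec)) // false until a finalize op drains the list
	spec.Finalizers = nil
	fmt.Println(canDelete(spec)) // true
}
```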
### Phases
@@ -168,39 +182,48 @@ type NamespaceStatus struct {
}
```
-A *Namespace* is in the **Active** phase if it does not have a *ObjectMeta.DeletionTimestamp*.
+A *Namespace* is in the **Active** phase if it does not have a
+*ObjectMeta.DeletionTimestamp*.
-A *Namespace* is in the **Terminating** phase if it has a *ObjectMeta.DeletionTimestamp*.
+A *Namespace* is in the **Terminating** phase if it has a
+*ObjectMeta.DeletionTimestamp*.
**Active**
-Upon creation, a *Namespace* goes in the *Active* phase. This means that content may be associated with
-a namespace, and all normal interactions with the namespace are allowed to occur in the cluster.
+Upon creation, a *Namespace* enters the *Active* phase. This means that content
+may be associated with a namespace, and all normal interactions with the
+namespace are allowed to occur in the cluster.
-If a DELETE request occurs for a *Namespace*, the *Namespace.ObjectMeta.DeletionTimestamp* is set
-to the current server time. A *namespace controller* observes the change, and sets the *Namespace.Status.Phase*
-to *Terminating*.
+If a DELETE request occurs for a *Namespace*, the
+*Namespace.ObjectMeta.DeletionTimestamp* is set to the current server time. A
+*namespace controller* observes the change, and sets the
+*Namespace.Status.Phase* to *Terminating*.
**Terminating**
-A *namespace controller* watches for *Namespace* objects that have a *Namespace.ObjectMeta.DeletionTimestamp*
-value set in order to know when to initiate graceful termination of the *Namespace* associated content that
-are known to the cluster.
+A *namespace controller* watches for *Namespace* objects that have a
+*Namespace.ObjectMeta.DeletionTimestamp* value set in order to know when to
+initiate graceful termination of the content associated with the *Namespace*
+that is known to the cluster.
-The *namespace controller* enumerates each known resource type in that namespace and deletes it one by one.
+The *namespace controller* enumerates each known resource type in that namespace
+and deletes it one by one.
-Admission control blocks creation of new resources in that namespace in order to prevent a race-condition
-where the controller could believe all of a given resource type had been deleted from the namespace,
-when in fact some other rogue client agent had created new objects. Using admission control in this
-scenario allows each of registry implementations for the individual objects to not need to take into account Namespace life-cycle.
+Admission control blocks creation of new resources in that namespace in order
+to prevent a race condition where the controller could believe all of a given
+resource type had been deleted from the namespace, when in fact some other
+rogue client agent had created new objects. Using admission control in this
+scenario allows each registry implementation for the individual objects to
+avoid taking the Namespace life-cycle into account.
-Once all objects known to the *namespace controller* have been deleted, the *namespace controller*
-executes a *finalize* operation on the namespace that removes the *kubernetes* value from
-the *Namespace.Spec.Finalizers* list.
+Once all objects known to the *namespace controller* have been deleted, the
+*namespace controller* executes a *finalize* operation on the namespace that
+removes the *kubernetes* value from the *Namespace.Spec.Finalizers* list.
-If the *namespace controller* sees a *Namespace* whose *ObjectMeta.DeletionTimestamp* is set, and
-whose *Namespace.Spec.Finalizers* list is empty, it will signal the server to permanently remove
-the *Namespace* from storage by sending a final DELETE action to the API server.
+If the *namespace controller* sees a *Namespace* whose
+*ObjectMeta.DeletionTimestamp* is set, and whose *Namespace.Spec.Finalizers*
+list is empty, it will signal the server to permanently remove the *Namespace*
+from storage by sending a final DELETE action to the API server.
### REST API
@@ -232,15 +255,18 @@ To interact with content associated with a Namespace:
| WATCH | GET | /api/{version}/watch/{resourceType} | Watch for changes to a {resourceType} across all namespaces |
| LIST | GET | /api/{version}/list/{resourceType} | List instances of {resourceType} across all namespaces |
-The API server verifies the *Namespace* on resource creation matches the *{namespace}* on the path.
+The API server verifies the *Namespace* on resource creation matches the
+*{namespace}* on the path.
-The API server will associate a resource with a *Namespace* if not populated by the end-user based on the *Namespace* context
-of the incoming request. If the *Namespace* of the resource being created, or updated does not match the *Namespace* on the request,
-then the API server will reject the request.
+If the end-user has not populated a resource's *Namespace*, the API server will
+associate the resource with a *Namespace* based on the *Namespace* context of
+the incoming request. If the *Namespace* of the resource being created or
+updated does not match the *Namespace* on the request, the API server will
+reject the request.
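The defaulting-and-rejection rule can be sketched as a single helper. The `resolveNamespace` signature is hypothetical (the real server implements this inside its request-handling layers), but the logic matches the rule above:

```go
package main

import (
	"errors"
	"fmt"
)

// resolveNamespace applies the rule: an empty resource namespace is defaulted
// from the request path; a non-empty one must match it exactly, or the
// request is rejected.
func resolveNamespace(requestNS, resourceNS string) (string, error) {
	if resourceNS == "" {
		return requestNS, nil // defaulted from the request context
	}
	if resourceNS != requestNS {
		return "", errors.New("resource namespace does not match request namespace")
	}
	return resourceNS, nil
}

func main() {
	ns, _ := resolveNamespace("development", "")
	fmt.Println(ns) // development
	_, err := resolveNamespace("development", "production")
	fmt.Println(err != nil) // true: the request is rejected
}
```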
### Storage
-A namespace provides a unique identifier space and therefore must be in the storage path of a resource.
+A namespace provides a unique identifier space and therefore must be in the
+storage path of a resource.
In etcd, we want to continue to support efficient WATCH across namespaces.
@@ -248,18 +274,19 @@ Resources that persist content in etcd will have storage paths as follows:
/{k8s_storage_prefix}/{resourceType}/{resource.Namespace}/{resource.Name}
-This enables consumers to WATCH /registry/{resourceType} for changes across namespace of a particular {resourceType}.
+This enables consumers to WATCH /registry/{resourceType} for changes to a
+particular {resourceType} across all namespaces.
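The storage path scheme can be sketched as a key builder. The `storageKey` helper is illustrative, and the `registry` prefix is an assumption drawn from the WATCH path shown above:

```go
package main

import "fmt"

// storageKey builds /{k8s_storage_prefix}/{resourceType}/{namespace}/{name}.
// Because the namespace sits below the resource type, a WATCH on
// /{prefix}/{resourceType} observes changes across all namespaces.
func storageKey(prefix, resourceType, namespace, name string) string {
	return fmt.Sprintf("/%s/%s/%s/%s", prefix, resourceType, namespace, name)
}

func main() {
	fmt.Println(storageKey("registry", "pods", "development", "frontend"))
	// /registry/pods/development/frontend
}
```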
### Kubelet
-The kubelet will register pod's it sources from a file or http source with a namespace associated with the
-*cluster-id*
+The kubelet will register pods it sources from a file or HTTP source with a
+namespace associated with the *cluster-id*.
### Example: OpenShift Origin managing a Kubernetes Namespace
In this example, we demonstrate how the design allows for agents built on top of
-Kubernetes that manage their own set of resource types associated with a *Namespace*
-to take part in Namespace termination.
+Kubernetes that manage their own set of resource types associated with a
+*Namespace* to take part in Namespace termination.
OpenShift creates a Namespace in Kubernetes
@@ -282,9 +309,10 @@ OpenShift creates a Namespace in Kubernetes
}
```
-OpenShift then goes and creates a set of resources (pods, services, etc) associated
-with the "development" namespace. It also creates its own set of resources in its
-own storage associated with the "development" namespace unknown to Kubernetes.
+OpenShift then creates a set of resources (pods, services, etc.) associated
+with the "development" namespace. It also creates its own set of resources,
+unknown to Kubernetes, in its own storage associated with the "development"
+namespace.
The user deletes the Namespace in Kubernetes, and the Namespace now has the
following state:
@@ -308,10 +336,10 @@ User deletes the Namespace in Kubernetes, and Namespace now has following state:
}
```
-The Kubernetes *namespace controller* observes the namespace has a *deletionTimestamp*
-and begins to terminate all of the content in the namespace that it knows about. Upon
-success, it executes a *finalize* action that modifies the *Namespace* by
-removing *kubernetes* from the list of finalizers:
+The Kubernetes *namespace controller* observes that the namespace has a
+*deletionTimestamp* and begins to terminate all of the content in the namespace
+that it knows about. Upon success, it executes a *finalize* action that modifies
+the *Namespace* by removing *kubernetes* from the list of finalizers:
```json
{
@@ -333,11 +361,11 @@ removing *kubernetes* from the list of finalizers:
}
```
-OpenShift Origin has its own *namespace controller* that is observing cluster state, and
-it observes the same namespace had a *deletionTimestamp* assigned to it. It too will go
-and purge resources from its own storage that it manages associated with that namespace.
-Upon completion, it executes a *finalize* action and removes the reference to "openshift.com/origin"
-from the list of finalizers.
+OpenShift Origin has its own *namespace controller* observing cluster state,
+and it observes that the same namespace has had a *deletionTimestamp* assigned
+to it. It too will purge the resources it manages, associated with that
+namespace, from its own storage. Upon completion, it executes a *finalize*
+action and removes the reference to "openshift.com/origin" from the list of
+finalizers.
This results in the following state:
@@ -361,12 +389,14 @@ This results in the following state:
}
```
-At this point, the Kubernetes *namespace controller* in its sync loop will see that the namespace
-has a deletion timestamp and that its list of finalizers is empty. As a result, it knows all
-content associated from that namespace has been purged. It performs a final DELETE action
-to remove that Namespace from the storage.
+At this point, the Kubernetes *namespace controller* in its sync loop will see
+that the namespace has a deletion timestamp and that its list of finalizers is
+empty. As a result, it knows all content associated with that namespace has
+been purged. It performs a final DELETE action to remove that Namespace from
+storage.
-At this point, all content associated with that Namespace, and the Namespace itself are gone.
+At this point, all content associated with that Namespace, and the Namespace
+itself, are gone.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->