 access.md                |  2 +-
 service_accounts.md      |  2 +-
 simple-rolling-update.md | 14 +++++++-------
 3 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/access.md b/access.md
index 647ce552..dd64784e 100644
--- a/access.md
+++ b/access.md
@@ -193,7 +193,7 @@ K8s authorization should:
- Allow for a range of maturity levels, from single-user for those test driving the system, to integration with existing enterprise authorization systems.
- Allow for centralized management of users and policies. In some organizations, this will mean that the definition of users and access policies needs to reside on a system other than k8s and encompass other web services (such as a storage service).
- Allow processes running in K8s Pods to take on identity, and to allow narrow scoping of permissions for those identities in order to limit damage from software faults.
-- Have Authorization Policies exposed as API objects so that a single config file can create or delete Pods, Controllers, Services, and the identities and policies for those Pods and Controllers.
+- Have Authorization Policies exposed as API objects so that a single config file can create or delete Pods, Replication Controllers, Services, and the identities and policies for those Pods and Replication Controllers.
- Be separate as much as practical from Authentication, to allow Authentication methods to change over time and space, without impacting Authorization policies.
K8s will implement a relatively simple
diff --git a/service_accounts.md b/service_accounts.md
index e87e8e6c..63c12a30 100644
--- a/service_accounts.md
+++ b/service_accounts.md
@@ -5,7 +5,7 @@
Processes in Pods may need to call the Kubernetes API. For example:
- scheduler
- replication controller
- - minion controller
+ - node controller
- a map-reduce type framework which has a controller that then tries to make a dynamically determined number of workers and watch them
- continuous build and push system
- monitoring system
diff --git a/simple-rolling-update.md b/simple-rolling-update.md
index e5b47d98..0208b609 100644
--- a/simple-rolling-update.md
+++ b/simple-rolling-update.md
@@ -8,20 +8,20 @@ Assume that we have a current replication controller named ```foo``` and it is r
```kubectl rolling-update rc foo [foo-v2] --image=myimage:v2```
-If the user doesn't specify a name for the 'next' controller, then the 'next' controller is renamed to
-the name of the original controller.
+If the user doesn't specify a name for the 'next' replication controller, then the 'next' replication controller is renamed to
+the name of the original replication controller.
Obviously there is a race here: if you kill the client between deleting ```foo``` and creating the new version of ```foo```, you might be surprised by what is there, but I think that's ok.
See [Recovery](#recovery) below.
-If the user does specify a name for the 'next' controller, then the 'next' controller is retained with its existing name,
-and the old 'foo' controller is deleted. For the purposes of the rollout, we add a unique-ifying label ```kubernetes.io/deployment``` to both the ```foo``` and ```foo-next``` controllers.
-The value of that label is the hash of the complete JSON representation of the```foo-next``` or```foo``` controller. The name of this label can be overridden by the user with the ```--deployment-label-key``` flag.
+If the user does specify a name for the 'next' replication controller, then the 'next' replication controller is retained with its existing name,
+and the old 'foo' replication controller is deleted. For the purposes of the rollout, we add a unique-ifying label ```kubernetes.io/deployment``` to both the ```foo``` and ```foo-next``` replication controllers.
+The value of that label is the hash of the complete JSON representation of the ```foo-next``` or ```foo``` replication controller. The name of this label can be overridden by the user with the ```--deployment-label-key``` flag.
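A minimal sketch of how such a label value could be derived, assuming the hash is a SHA-256 hex digest over a canonical JSON serialization (the design doc says only "the hash of the complete JSON representation"; the specific hash function and canonicalization here are assumptions for illustration):

```python
# Sketch: derive a unique-ifying kubernetes.io/deployment label value from a
# replication controller's JSON representation. sha256 and sorted-key JSON
# are illustrative assumptions, not the documented kubectl behavior.
import hashlib
import json

def deployment_label_value(controller: dict) -> str:
    # Sort keys so logically identical controllers hash to the same value.
    canonical = json.dumps(controller, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

foo = {"kind": "ReplicationController", "metadata": {"name": "foo"},
       "spec": {"replicas": 3}}
foo_next = {"kind": "ReplicationController", "metadata": {"name": "foo-next"},
            "spec": {"replicas": 0}}

# Distinct controller specs yield distinct label values, so both
# controllers can carry the same label key during the rollout.
print(deployment_label_value(foo) != deployment_label_value(foo_next))
```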
#### Recovery
If a rollout fails or is terminated in the middle, it is important that the user be able to resume the roll out.
-To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replicaController in the ```kubernetes.io/``` annotation namespace:
- * ```desired-replicas``` The desired number of replicas for this controller (either N or zero)
+To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replication controller in the ```kubernetes.io/``` annotation namespace:
+ * ```desired-replicas``` The desired number of replicas for this replication controller (either N or zero)
 * ```update-partner``` A pointer to the replication controller resource that is the other half of this update (syntax ```<name>```; the namespace is assumed to be identical to the namespace of this replication controller.)
Recovery is achieved by issuing the same command again:
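A hedged sketch of the recovery bookkeeping these annotations enable: after a crash, a restarted client can compare each replication controller's ```desired-replicas``` annotation against its current replica count to find the resizes that remain. The ```resume_rollout``` helper and the in-memory controller representation below are illustrative assumptions, not kubectl's actual implementation:

```python
# Sketch: use the kubernetes.io/desired-replicas annotation to work out
# which replication controllers a resumed rollout still needs to resize.
# The data layout is a simplified stand-in for the real API objects.
def resume_rollout(controllers: dict) -> list:
    """Return (name, target_replicas) pairs that still need resizing."""
    pending = []
    for name, rc in controllers.items():
        desired = int(rc["annotations"]["kubernetes.io/desired-replicas"])
        if rc["replicas"] != desired:
            pending.append((name, desired))
    return pending

controllers = {
    "foo": {"replicas": 2,
            "annotations": {"kubernetes.io/desired-replicas": "0",
                            "kubernetes.io/update-partner": "foo-next"}},
    "foo-next": {"replicas": 1,
                 "annotations": {"kubernetes.io/desired-replicas": "3",
                                 "kubernetes.io/update-partner": "foo"}},
}

# Both halves of the update still need work: foo down to 0, foo-next up to 3.
print(resume_rollout(controllers))
```

Because the annotations live on the replication controllers themselves, re-running the same rolling-update command recomputes this pending work from the cluster state rather than from any client-side memory.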