author     Eric Paris <eparis@redhat.com>    2015-09-03 10:10:11 -0400
committer  Eric Paris <eparis@redhat.com>    2015-09-03 10:10:11 -0400
commit     a7118ba1b290cd97dcc65b3d906dede726955432 (patch)
tree       2eae7381220920f35ce70cc52376de277b61a573
parent     d4145fbc8dac4f657614b7c8ec13d53720be9dbf (diff)
s|github.com/GoogleCloudPlatform/kubernetes|github.com/kubernetes/kubernetes|
-rw-r--r--  autoscaling.md               | 2
-rw-r--r--  deployment.md                | 2
-rw-r--r--  horizontal-pod-autoscaler.md | 8
-rw-r--r--  job.md                       | 6
4 files changed, 9 insertions, 9 deletions
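The commit message is written as a sed substitution. A repo-wide rewrite like the one below would produce this diff; the temp directory, sample file, and exact command are illustrative assumptions, not the author's actual invocation.

```shell
# Minimal sketch of the bulk URL rewrite, run against a throwaway copy.
set -eu
demo=$(mktemp -d)
printf 'See https://github.com/GoogleCloudPlatform/kubernetes/issues/1743\n' \
    > "$demo/deployment.md"

# Apply the substitution from the commit message to every Markdown file.
# In a real checkout this would typically be:
#   git grep -l 'GoogleCloudPlatform/kubernetes' -- '*.md' | xargs sed -i '...'
sed -i 's|github.com/GoogleCloudPlatform/kubernetes|github.com/kubernetes/kubernetes|g' \
    "$demo"/*.md

cat "$demo/deployment.md"
```

Using `|` as the sed delimiter avoids escaping the slashes in the URL path, which is why the commit message reads `s|old|new|` rather than `s/old/new/`.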
diff --git a/autoscaling.md b/autoscaling.md
index 9c5ec752..ea60af74 100644
--- a/autoscaling.md
+++ b/autoscaling.md
@@ -47,7 +47,7 @@ done automatically based on statistical analysis and thresholds.
* Provide a concrete proposal for implementing auto-scaling pods within Kubernetes
* Implementation proposal should be in line with current discussions in existing issues:
* Scale verb - [1629](http://issue.k8s.io/1629)
- * Config conflicts - [Config](https://github.com/GoogleCloudPlatform/kubernetes/blob/c7cb991987193d4ca33544137a5cb7d0292cf7df/docs/config.md#automated-re-configuration-processes)
+ * Config conflicts - [Config](https://github.com/kubernetes/kubernetes/blob/c7cb991987193d4ca33544137a5cb7d0292cf7df/docs/config.md#automated-re-configuration-processes)
* Rolling updates - [1353](http://issue.k8s.io/1353)
* Multiple scalable types - [1624](http://issue.k8s.io/1624)
diff --git a/deployment.md b/deployment.md
index 0a79ca86..6819acee 100644
--- a/deployment.md
+++ b/deployment.md
@@ -260,7 +260,7 @@ Apart from the above, we want to add support for the following:
## References
-- https://github.com/GoogleCloudPlatform/kubernetes/issues/1743 has most of the
+- https://github.com/kubernetes/kubernetes/issues/1743 has most of the
discussion that resulted in this proposal.
diff --git a/horizontal-pod-autoscaler.md b/horizontal-pod-autoscaler.md
index c10f54f7..6ae84532 100644
--- a/horizontal-pod-autoscaler.md
+++ b/horizontal-pod-autoscaler.md
@@ -61,7 +61,7 @@ HorizontalPodAutoscaler object will be bound with exactly one Scale subresource
autoscaling associated replication controller/deployment through it.
The main advantage of such approach is that whenever we introduce another type we want to auto-scale,
we just need to implement Scale subresource for it (w/o modifying autoscaler code or API).
-The wider discussion regarding Scale took place in [#1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629).
+The wider discussion regarding Scale took place in [#1629](https://github.com/kubernetes/kubernetes/issues/1629).
Scale subresource will be present in API for replication controller or deployment under the following paths:
@@ -192,7 +192,7 @@ The autoscaler will be implemented as a control loop.
It will periodically (e.g.: every 1 minute) query pods described by ```Status.PodSelector``` of Scale subresource,
and check their average CPU or memory usage from the last 1 minute
(there will be API on master for this purpose, see
-[#11951](https://github.com/GoogleCloudPlatform/kubernetes/issues/11951).
+[#11951](https://github.com/kubernetes/kubernetes/issues/11951).
Then, it will compare the current CPU or memory consumption with the Target,
and adjust the count of the Scale if needed to match the target
(preserving condition: MinCount <= Count <= MaxCount).
@@ -265,9 +265,9 @@ Our design is in general compatible with them.
and then turned-on when there is a demand for them.
When a request to service with no pods arrives, kube-proxy will generate an event for autoscaler
to create a new pod.
- Discussed in [#3247](https://github.com/GoogleCloudPlatform/kubernetes/issues/3247).
+ Discussed in [#3247](https://github.com/kubernetes/kubernetes/issues/3247).
* When scaling down, make more educated decision which pods to kill (e.g.: if two or more pods are on the same node, kill one of them).
- Discussed in [#4301](https://github.com/GoogleCloudPlatform/kubernetes/issues/4301).
+ Discussed in [#4301](https://github.com/kubernetes/kubernetes/issues/4301).
* Allow rule based autoscaling: instead of specifying the target value for metric,
specify a rule, e.g.: “if average CPU consumption of pod is higher than 80% add two more replicas”.
This approach was initially suggested in
diff --git a/job.md b/job.md
index 57717ea5..198a1437 100644
--- a/job.md
+++ b/job.md
@@ -40,8 +40,8 @@ for managing pod(s) that require running once to completion even if the machine
the pod is running on fails, in contrast to what ReplicationController currently offers.
Several existing issues and PRs were already created regarding that particular subject:
-* Job Controller [#1624](https://github.com/GoogleCloudPlatform/kubernetes/issues/1624)
-* New Job resource [#7380](https://github.com/GoogleCloudPlatform/kubernetes/pull/7380)
+* Job Controller [#1624](https://github.com/kubernetes/kubernetes/issues/1624)
+* New Job resource [#7380](https://github.com/kubernetes/kubernetes/pull/7380)
## Use Cases
@@ -181,7 +181,7 @@ Below are the possible future extensions to the Job controller:
* Be able to limit the execution time for a job, similarly to ActiveDeadlineSeconds for Pods.
* Be able to create a chain of jobs dependent one on another.
* Be able to specify the work each of the workers should execute (see type 1 from
- [this comment](https://github.com/GoogleCloudPlatform/kubernetes/issues/1624#issuecomment-97622142))
+ [this comment](https://github.com/kubernetes/kubernetes/issues/1624#issuecomment-97622142))
* Be able to inspect Pods running a Job, especially after a Job has finished, e.g.
by providing pointers to Pods in the JobStatus ([see comment](https://github.com/kubernetes/kubernetes/pull/11746/files#r37142628)).