author     k8s-ci-robot <k8s-ci-robot@users.noreply.github.com>    2017-12-22 15:59:37 -0800
committer  GitHub <noreply@github.com>    2017-12-22 15:59:37 -0800
commit     be9eeca6ee3becfa5b4c96bedf62b5b3ff5b1f8d (patch)
tree       8c66f02e2e740e162bc0dc52f889c3be832aaf1b /contributors
parent     d65527a4aa72be4dc5899922d7f8ec263d541486 (diff)
parent     f2816c8bab6330512461c83400e5d69ea9f5d19b (diff)
Merge pull request #1541 from cblecker/link-updates
Fix all the links
Diffstat (limited to 'contributors')
-rw-r--r--  contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md  2
-rw-r--r--  contributors/design-proposals/api-machinery/admission-control-webhooks.md  8
-rw-r--r--  contributors/design-proposals/api-machinery/apiserver-build-in-admission-plugins.md  64
-rw-r--r--  contributors/design-proposals/api-machinery/apiserver-watch.md  2
-rw-r--r--  contributors/design-proposals/api-machinery/auditing.md  2
-rw-r--r--  contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md  14
-rw-r--r--  contributors/design-proposals/api-machinery/event_compression.md  4
-rw-r--r--  contributors/design-proposals/architecture/architecture.md  4
-rw-r--r--  contributors/design-proposals/cli/multi-fields-merge-key.md  4
-rw-r--r--  contributors/design-proposals/cloud-provider/cloudprovider-storage-metrics.md  4
-rw-r--r--  contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md  2
-rw-r--r--  contributors/design-proposals/cluster-lifecycle/draft-20171020-bootstrap-checkpointing.md  2
-rw-r--r--  contributors/design-proposals/cluster-lifecycle/dramatically-simplify-cluster-creation.md  2
-rw-r--r--  contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md  2
-rw-r--r--  contributors/design-proposals/instrumentation/core-metrics-pipeline.md  18
-rw-r--r--  contributors/design-proposals/instrumentation/metrics-server.md  12
-rw-r--r--  contributors/design-proposals/instrumentation/performance-related-monitoring.md  2
-rw-r--r--  contributors/design-proposals/multi-platform.md  4
-rw-r--r--  contributors/design-proposals/multicluster/cluster-registry/api-design.md  10
-rw-r--r--  contributors/design-proposals/multicluster/federated-placement-policy.md  4
-rw-r--r--  contributors/design-proposals/multicluster/federation-clusterselector.md  2
-rw-r--r--  contributors/design-proposals/network/coredns.md  4
-rw-r--r--  contributors/design-proposals/network/pod-resolv-conf.md  2
-rw-r--r--  contributors/design-proposals/node/accelerator-monitoring.md  2
-rw-r--r--  contributors/design-proposals/node/cpu-manager.md  4
-rw-r--r--  contributors/design-proposals/node/kubelet-authorizer.md  2
-rw-r--r--  contributors/design-proposals/node/kubelet-eviction.md  2
-rw-r--r--  contributors/design-proposals/node/kubelet-systemd.md  2
-rw-r--r--  contributors/design-proposals/node/runtime-pod-cache.md  2
-rw-r--r--  contributors/design-proposals/node/sysctl.md  4
-rw-r--r--  contributors/design-proposals/node/troubleshoot-running-pods.md  2
-rw-r--r--  contributors/design-proposals/scheduling/pod-preemption.md  4
-rw-r--r--  contributors/design-proposals/scheduling/pod-priority-api.md  2
-rw-r--r--  contributors/design-proposals/scheduling/podaffinity.md  2
-rw-r--r--  contributors/design-proposals/scheduling/scheduler_extender.md  2
-rw-r--r--  contributors/design-proposals/storage/container-storage-interface.md  14
-rw-r--r--  contributors/design-proposals/storage/flexvolume-deployment.md  2
-rw-r--r--  contributors/design-proposals/storage/volume-metrics.md  4
-rw-r--r--  contributors/devel/README.md  2
-rw-r--r--  contributors/devel/api_changes.md  4
-rw-r--r--  contributors/devel/architectural-roadmap.md  10
-rw-r--r--  contributors/devel/automation.md  6
-rw-r--r--  contributors/devel/bazel.md  4
-rw-r--r--  contributors/devel/cherry-picks.md  2
-rw-r--r--  contributors/devel/container-runtime-interface.md  8
-rw-r--r--  contributors/devel/contributor-cheatsheet.md  6
-rw-r--r--  contributors/devel/controllers.md  4
-rw-r--r--  contributors/devel/cri-container-stats.md  8
-rw-r--r--  contributors/devel/development.md  12
-rw-r--r--  contributors/devel/e2e-node-tests.md  6
-rw-r--r--  contributors/devel/e2e-tests.md  12
-rw-r--r--  contributors/devel/flexvolume.md  14
-rw-r--r--  contributors/devel/generating-clientset.md  4
-rw-r--r--  contributors/devel/gubernator.md  2
-rw-r--r--  contributors/devel/issues.md  4
-rw-r--r--  contributors/devel/kubectl-conventions.md  2
-rw-r--r--  contributors/devel/node-performance-testing.md  4
-rw-r--r--  contributors/devel/on-call-federation-build-cop.md  18
-rw-r--r--  contributors/devel/owners.md  20
-rw-r--r--  contributors/devel/pull-requests.md  12
-rw-r--r--  contributors/devel/scalability-good-practices.md  2
-rw-r--r--  contributors/devel/strategic-merge-patch.md  4
-rw-r--r--  contributors/devel/testing.md  4
-rw-r--r--  contributors/devel/vagrant.md  2
-rw-r--r--  contributors/devel/writing-good-e2e-tests.md  10
-rw-r--r--  contributors/guide/README.md  29
66 files changed, 219 insertions(+), 220 deletions(-)
diff --git a/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md b/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md
index 866ffb81..5f035f9b 100644
--- a/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md
+++ b/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md
@@ -44,7 +44,7 @@ that does not contain a discriminator.
|---|---|
| non-inlined non-discriminated union | Yes |
| non-inlined discriminated union | Yes |
-| inlined union with [patchMergeKey](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#strategic-merge-patch) only | Yes |
+| inlined union with [patchMergeKey](/contributors/devel/api-conventions.md#strategic-merge-patch) only | Yes |
| other inlined union | No |
For the inlined union with patchMergeKey, we move the tag to the parent struct's instead of
diff --git a/contributors/design-proposals/api-machinery/admission-control-webhooks.md b/contributors/design-proposals/api-machinery/admission-control-webhooks.md
index 6bf891fc..100c27fa 100644
--- a/contributors/design-proposals/api-machinery/admission-control-webhooks.md
+++ b/contributors/design-proposals/api-machinery/admission-control-webhooks.md
@@ -21,7 +21,7 @@ This document proposes a detailed plan for bringing Webhooks to Beta. Highlights
* Versioned rather than Internal data sent on hook
* Ordering behavior within webhooks, and with other admission phases, is better defined
-This plan is compatible with the [original design doc]( https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control_extension.md).
+This plan is compatible with the [original design doc](/contributors/design-proposals/api-machinery/admission_control_extension.md).
# Definitions
@@ -391,12 +391,12 @@ Specific Use cases:
* Kubernetes static Admission Controllers
* Documented [here](https://kubernetes.io/docs/admin/admission-controllers/)
- * Discussed [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control_extension.md)
+ * Discussed [here](/contributors/design-proposals/api-machinery/admission_control_extension.md)
* All are highly reliable. Most are simple. No external deps.
* Many need update checks.
* Can be separated into mutation and validate phases.
* OpenShift static Admission Controllers
- * Discussed [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control_extension.md)
+ * Discussed [here](/contributors/design-proposals/api-machinery/admission_control_extension.md)
* Similar to Kubernetes ones.
* Istio, Case 1: Add Container to all Pods.
* Currently uses Initializer but can use Mutating Webhook.
@@ -411,7 +411,7 @@ Specific Use cases:
* Simple, can be highly reliable and fast. No external deps.
* No current use case for updates.
-Good further discussion of use cases [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control_extension.md)
+Good further discussion of use cases [here](/contributors/design-proposals/api-machinery/admission_control_extension.md)
## Details of Porting Admission Controllers
diff --git a/contributors/design-proposals/api-machinery/apiserver-build-in-admission-plugins.md b/contributors/design-proposals/api-machinery/apiserver-build-in-admission-plugins.md
index 1dbda591..cefaf8fd 100644
--- a/contributors/design-proposals/api-machinery/apiserver-build-in-admission-plugins.md
+++ b/contributors/design-proposals/api-machinery/apiserver-build-in-admission-plugins.md
@@ -4,66 +4,66 @@
| Topic | Link |
| ----- | ---- |
-| Admission Control | https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control.md |
+| Admission Control | https://git.k8s.io/community/contributors/design-proposals/api-machinery/admission_control.md |
## Introduction
An admission controller is a piece of code that intercepts requests to the Kubernetes API - think a middleware.
-The API server lets you have a whole chain of them. Each is run in sequence before a request is accepted
-into the cluster. If any of the plugins in the sequence rejects the request, the entire request is rejected
+The API server lets you have a whole chain of them. Each is run in sequence before a request is accepted
+into the cluster. If any of the plugins in the sequence rejects the request, the entire request is rejected
immediately and an error is returned to the user.
-Many features in Kubernetes require an admission control plugin to be enabled in order to properly support the feature.
-In fact in the [documentation](https://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-plug-ins-to-use) you will find
+Many features in Kubernetes require an admission control plugin to be enabled in order to properly support the feature.
+In fact in the [documentation](https://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-plug-ins-to-use) you will find
a recommended set of them to use.
-At the moment admission controllers are implemented as plugins and they have to be compiled into the
+At the moment admission controllers are implemented as plugins and they have to be compiled into the
final binary in order to be used at a later time. Some even require an access to cache, an authorizer etc.
-This is where an admission plugin initializer kicks in. An admission plugin initializer is used to pass additional
+This is where an admission plugin initializer kicks in. An admission plugin initializer is used to pass additional
configuration and runtime references to a cache, a client and an authorizer.
-To streamline the process of adding new plugins especially for aggregated API servers we would like to build some plugins
-into the generic API server library and provide a plugin initializer. While anyone can author and register one, having a known set of
+To streamline the process of adding new plugins especially for aggregated API servers we would like to build some plugins
+into the generic API server library and provide a plugin initializer. While anyone can author and register one, having a known set of
provided references let's people focus on what they need their admission plugin to do instead of paying attention to wiring.
## Implementation
-The first step would involve creating a "standard" plugin initializer that would be part of the
-generic API server. It would use kubeconfig to populate
-[external clients](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubeapiserver/admission/initializer.go#L29)
-and [external informers](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubeapiserver/admission/initializer.go#L35).
-By default for servers that would be run on the kubernetes cluster in-cluster config would be used.
-The standard initializer would also provide a client config for connecting to the core kube-apiserver.
-Some API servers might be started as static pods, which don't have in-cluster configs.
-In that case the config could be easily populated form the file.
-
-The second step would be to move some plugins from [admission pkg](https://github.com/kubernetes/kubernetes/tree/master/plugin/pkg/admission)
-to the generic API server library. Some admission plugins are used to ensure consistent user expectations.
-These plugins should be moved. One example is the Namespace Lifecycle plugin which prevents users
+The first step would involve creating a "standard" plugin initializer that would be part of the
+generic API server. It would use kubeconfig to populate
+[external clients](https://git.k8s.io/kubernetes/pkg/kubeapiserver/admission/initializer.go#L29)
+and [external informers](https://git.k8s.io/kubernetes/pkg/kubeapiserver/admission/initializer.go#L35).
+By default for servers that would be run on the kubernetes cluster in-cluster config would be used.
+The standard initializer would also provide a client config for connecting to the core kube-apiserver.
+Some API servers might be started as static pods, which don't have in-cluster configs.
+In that case the config could be easily populated form the file.
+
+The second step would be to move some plugins from [admission pkg](https://git.k8s.io/kubernetes/plugin/pkg/admission)
+to the generic API server library. Some admission plugins are used to ensure consistent user expectations.
+These plugins should be moved. One example is the Namespace Lifecycle plugin which prevents users
from creating resources in non-existent namespaces.
*Note*:
-For loading in-cluster configuration [visit](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/examples/in-cluster-client-configuration/main.go)
- For loading the configuration directly from a file [visit](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go)
-
+For loading in-cluster configuration [visit](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/examples/in-cluster-client-configuration/main.go)
+ For loading the configuration directly from a file [visit](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go)
+
## How to add an admission plugin ?
- At this point adding an admission plugin is very simple and boils down to performing the
+ At this point adding an admission plugin is very simple and boils down to performing the
following series of steps:
1. Write an admission plugin
- 2. Register the plugin
+ 2. Register the plugin
3. Reference the plugin in the admission chain
## An example
-The sample apiserver provides an example admission plugin that makes meaningful use of the "standard" plugin initializer.
+The sample apiserver provides an example admission plugin that makes meaningful use of the "standard" plugin initializer.
The admission plugin ensures that a resource name is not on the list of banned names.
-The source code of the plugin can be found [here](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/sample-apiserver/pkg/admission/plugin/banflunder/admission.go).
+The source code of the plugin can be found [here](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/sample-apiserver/pkg/admission/plugin/banflunder/admission.go).
Having the plugin, the next step is the registration. [AdmissionOptions](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/apiserver/pkg/server/options/admission.go)
-provides two important things. Firstly it exposes [a register](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/apiserver/pkg/server/options/admission.go#L43)
-under which all admission plugins are registered. In fact, that's exactly what the [Register](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/sample-apiserver/pkg/admission/plugin/banflunder/admission.go#L33)
+provides two important things. Firstly it exposes [a register](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/apiserver/pkg/server/options/admission.go#L43)
+under which all admission plugins are registered. In fact, that's exactly what the [Register](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/sample-apiserver/pkg/admission/plugin/banflunder/admission.go#L33)
method does from our example admission plugin. It accepts a global registry as a parameter and then simply registers itself in that registry.
Secondly, it adds an admission chain to the server configuration via [ApplyTo](https://github.com/kubernetes/kubernetes/blob/2f00e6d72c9d58fe3edc3488a91948cf4bfcc6d9/staging/src/k8s.io/apiserver/pkg/server/options/admission.go#L66) method.
-The method accepts optional parameters in the form of `pluginInitalizers`. This is useful when admission plugins need custom configuration that is not provided by the generic initializer.
+The method accepts optional parameters in the form of `pluginInitalizers`. This is useful when admission plugins need custom configuration that is not provided by the generic initializer.
The following code has been extracted from the sample server and illustrates how to register and wire an admission plugin:
@@ -74,7 +74,7 @@ The following code has been extracted from the sample server and illustrates how
// create custom plugin initializer
informerFactory := informers.NewSharedInformerFactory(client, serverConfig.LoopbackClientConfig.Timeout)
admissionInitializer, _ := wardleinitializer.New(informerFactory)
-
+
// add admission chain to the server configuration
o.Admission.ApplyTo(serverConfig, admissionInitializer)
```
diff --git a/contributors/design-proposals/api-machinery/apiserver-watch.md b/contributors/design-proposals/api-machinery/apiserver-watch.md
index 7e90d9b6..7d509e4d 100644
--- a/contributors/design-proposals/api-machinery/apiserver-watch.md
+++ b/contributors/design-proposals/api-machinery/apiserver-watch.md
@@ -132,7 +132,7 @@ the same time, we can introduce an additional etcd event type: EtcdResync
Thus, we need to create the EtcdResync event, extend watch.Interface and
its implementations to support it and handle those events appropriately
in places like
- [Reflector](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/cache/reflector.go)
+ [Reflector](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/reflector.go)
However, this might turn out to be unnecessary optimization if apiserver
will always keep up (which is possible in the new design). We will work
diff --git a/contributors/design-proposals/api-machinery/auditing.md b/contributors/design-proposals/api-machinery/auditing.md
index c3f978d9..b4def584 100644
--- a/contributors/design-proposals/api-machinery/auditing.md
+++ b/contributors/design-proposals/api-machinery/auditing.md
@@ -94,7 +94,7 @@ In the following, the second approach is described without a proxy. At which po
1. as one of the REST handlers (as in [#27087](https://github.com/kubernetes/kubernetes/pull/27087)),
2. as an admission controller.
-The former approach (currently implemented) was picked over the other one, due to the need to be able to get information about both the user submitting the request and the impersonated user (and group), which is being overridden inside the [impersonation filter](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go). Additionally admission controller does not have access to the response and runs after authorization which will prevent logging failed authorization. All of that resulted in continuing the solution started in [#27087](https://github.com/kubernetes/kubernetes/pull/27087), which implements auditing as one of the REST handlers
+The former approach (currently implemented) was picked over the other one, due to the need to be able to get information about both the user submitting the request and the impersonated user (and group), which is being overridden inside the [impersonation filter](https://git.k8s.io/kubernetes/staging/src/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go). Additionally admission controller does not have access to the response and runs after authorization which will prevent logging failed authorization. All of that resulted in continuing the solution started in [#27087](https://github.com/kubernetes/kubernetes/pull/27087), which implements auditing as one of the REST handlers
after authentication, but before impersonation and authorization.
## Proposed Design
diff --git a/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md b/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md
index e702a92c..dba0b0fb 100644
--- a/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md
+++ b/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md
@@ -24,7 +24,7 @@ Development would be based on a generated client using OpenAPI and [swagger-code
### Client Capabilities
-* Bronze Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Bronze-blue.svg?style=plastic&colorB=cd7f32&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
+* Bronze Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Bronze-blue.svg?style=plastic&colorB=cd7f32&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
* Support loading config from kube config file
@@ -40,11 +40,11 @@ Development would be based on a generated client using OpenAPI and [swagger-code
* Works from within the cluster environment.
-* Silver Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Silver-blue.svg?style=plastic&colorB=C0C0C0&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
+* Silver Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Silver-blue.svg?style=plastic&colorB=C0C0C0&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
* Support watch calls
-* Gold Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Gold-blue.svg?style=plastic&colorB=FFD700&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
+* Gold Requirements [![Client Capabilities](https://img.shields.io/badge/Kubernetes%20client-Gold-blue.svg?style=plastic&colorB=FFD700&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-capabilities)
* Support exec, attach, port-forward calls (these are not normally supported out of the box from [swagger-codegen](https://github.com/swagger-api/swagger-codegen))
@@ -54,11 +54,11 @@ Development would be based on a generated client using OpenAPI and [swagger-code
### Client Support Level
-* Alpha [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-alpha-green.svg?style=plastic&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
+* Alpha [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-alpha-green.svg?style=plastic&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
* Clients don’t even have to meet bronze requirements
-* Beta [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-beta-green.svg?style=plastic&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
+* Beta [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-beta-green.svg?style=plastic&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
* Client at least meets bronze standards
@@ -68,7 +68,7 @@ Development would be based on a generated client using OpenAPI and [swagger-code
* 2+ individual maintainers/owners of the repository
-* Stable [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-stable-green.svg?style=plastic&colorA=306CE8)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
+* Stable [![Client Support Level](https://img.shields.io/badge/kubernetes%20client-stable-green.svg?style=plastic&colorA=306CE8)](/contributors/design-proposals/api-machinery/csi-new-client-library-procedure.md#client-support-level)
* Support level documented per-platform
@@ -96,5 +96,5 @@ For each client language, we’ll make a client-[lang]-base and client-[lang] re
# Support
-These clients will be supported by the Kubernetes [API Machinery special interest group](https://github.com/kubernetes/community/tree/master/sig-api-machinery); however, individual owner(s) will be needed for each client language for them to be considered stable; the SIG won’t be able to handle the support load otherwise. If the generated clients prove as easy to maintain as we hope, then a few individuals may be able to own multiple clients.
+These clients will be supported by the Kubernetes [API Machinery special interest group](/sig-api-machinery); however, individual owner(s) will be needed for each client language for them to be considered stable; the SIG won’t be able to handle the support load otherwise. If the generated clients prove as easy to maintain as we hope, then a few individuals may be able to own multiple clients.
diff --git a/contributors/design-proposals/api-machinery/event_compression.md b/contributors/design-proposals/api-machinery/event_compression.md
index 9d6acf42..258adbb3 100644
--- a/contributors/design-proposals/api-machinery/event_compression.md
+++ b/contributors/design-proposals/api-machinery/event_compression.md
@@ -53,7 +53,7 @@ Each binary that generates events:
* Maintains a historical record of previously generated events:
* Implemented with
["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go)
-in [`pkg/client/record/events_cache.go`](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/record/events_cache.go).
+in [`pkg/client/record/events_cache.go`](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/record/events_cache.go).
* Implemented behind an `EventCorrelator` that manages two subcomponents:
`EventAggregator` and `EventLogger`.
* The `EventCorrelator` observes all incoming events and lets each
@@ -98,7 +98,7 @@ of time and generates tons of unique events, the previously generated events
cache will not grow unchecked in memory. Instead, after 4096 unique events are
generated, the oldest events are evicted from the cache.
* When an event is generated, the previously generated events cache is checked
-(see [`pkg/client/unversioned/record/event.go`](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/record/event.go)).
+(see [`pkg/client/unversioned/record/event.go`](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/record/event.go)).
* If the key for the new event matches the key for a previously generated
event (meaning all of the above fields match between the new event and some
previously generated event), then the event is considered to be a duplicate and
diff --git a/contributors/design-proposals/architecture/architecture.md b/contributors/design-proposals/architecture/architecture.md
index edd9133e..8387c886 100644
--- a/contributors/design-proposals/architecture/architecture.md
+++ b/contributors/design-proposals/architecture/architecture.md
@@ -237,9 +237,9 @@ Service endpoints are found primarily via [DNS](https://kubernetes.io/docs/conce
### Add-ons and other dependencies
-A number of components, called [*add-ons*](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) typically run on Kubernetes
+A number of components, called [*add-ons*](https://git.k8s.io/kubernetes/cluster/addons) typically run on Kubernetes
itself:
-* [DNS](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns)
+* [DNS](https://git.k8s.io/kubernetes/cluster/addons/dns)
* [Ingress controller](https://github.com/kubernetes/ingress-gce)
* [Heapster](https://github.com/kubernetes/heapster/) (resource monitoring)
* [Dashboard](https://github.com/kubernetes/dashboard/) (GUI)
diff --git a/contributors/design-proposals/cli/multi-fields-merge-key.md b/contributors/design-proposals/cli/multi-fields-merge-key.md
index 229a1021..9db3d549 100644
--- a/contributors/design-proposals/cli/multi-fields-merge-key.md
+++ b/contributors/design-proposals/cli/multi-fields-merge-key.md
@@ -6,13 +6,13 @@ Support multi-fields merge key in Strategic Merge Patch.
## Background
-Strategic Merge Patch is covered in this [doc](https://github.com/kubernetes/community/blob/master/contributors/devel/strategic-merge-patch.md).
+Strategic Merge Patch is covered in this [doc](/contributors/devel/strategic-merge-patch.md).
In Strategic Merge Patch, we use Merge Key to identify the entries in the list of non-primitive types.
It must always be present and unique to perform the merge on the list of non-primitive types,
and will be preserved.
The merge key exists in the struct tag (e.g. in [types.go](https://github.com/kubernetes/kubernetes/blob/5a9759b0b41d5e9bbd90d5a8f3a4e0a6c0b23b47/pkg/api/v1/types.go#L2831))
-and the [OpenAPI spec](https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json).
+and the [OpenAPI spec](https://git.k8s.io/kubernetes/api/openapi-spec/swagger.json).
## Motivation
diff --git a/contributors/design-proposals/cloud-provider/cloudprovider-storage-metrics.md b/contributors/design-proposals/cloud-provider/cloudprovider-storage-metrics.md
index f6825604..838c7e43 100644
--- a/contributors/design-proposals/cloud-provider/cloudprovider-storage-metrics.md
+++ b/contributors/design-proposals/cloud-provider/cloudprovider-storage-metrics.md
@@ -29,7 +29,7 @@ but we only focus on storage API calls here.
### Metric format and collection
Metrics emitted from cloud provider will fall under category of service metrics
-as defined in [Kubernetes Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md).
+as defined in [Kubernetes Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md).
The metrics will be emitted using [Prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/) and available for collection
@@ -40,7 +40,7 @@ metrics on `/metrics` HTTP endpoint. This proposal merely extends available metr
Any collector which can parse Prometheus metric format should be able to collect
metrics from these endpoints.
-A more detailed description of monitoring pipeline can be found in [Monitoring architecture] (https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md#monitoring-pipeline) document.
+A more detailed description of monitoring pipeline can be found in [Monitoring architecture] (/contributors/design-proposals/instrumentation/monitoring_architecture.md#monitoring-pipeline) document.
#### Metric Types
diff --git a/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md b/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md
index e9dad621..f481e02d 100644
--- a/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md
+++ b/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md
@@ -221,7 +221,7 @@ Only one of `--discovery-file` or `--discovery-token` can be set. If more than
Our documentations (and output from `kubeadm`) should stress to users that when the token is configured for authentication and used for TLS bootstrap is a pretty powerful credential due to that any person with access to it can claim to be a node.
The highest risk regarding being able to claim a credential in the `system:nodes` group is that it can read all Secrets in the cluster, which may compromise the cluster.
-The [Node Authorizer](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/kubelet-authorizer.md) locks this down a bit, but an untrusted person could still try to
+The [Node Authorizer](/contributors/design-proposals/node/kubelet-authorizer.md) locks this down a bit, but an untrusted person could still try to
guess a node's name, get such a credential, guess the name of the Secret and be able to get that.
Users should set a TTL on the token to limit the above mentioned risk. `kubeadm` sets a 24h TTL on the node bootstrap token by default in v1.8.
diff --git a/contributors/design-proposals/cluster-lifecycle/draft-20171020-bootstrap-checkpointing.md b/contributors/design-proposals/cluster-lifecycle/draft-20171020-bootstrap-checkpointing.md
index ab22c627..5c133a0a 100644
--- a/contributors/design-proposals/cluster-lifecycle/draft-20171020-bootstrap-checkpointing.md
+++ b/contributors/design-proposals/cluster-lifecycle/draft-20171020-bootstrap-checkpointing.md
@@ -140,7 +140,7 @@ Testing of this feature will occur in three parts.
* None at this time.
-[0]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
+[0]: /contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
[1]: https://github.com/kubernetes/community/pull/825
[2]: https://docs.google.com/document/d/1hhrCa_nv0Sg4O_zJYOnelE8a5ClieyewEsQM6c7-5-o/edit?ts=5988fba8#
[3]: https://docs.google.com/document/d/1qmK0Iq4fqxnd8COBFZHpip27fT-qSPkOgy1x2QqjYaQ/edit?ts=599b797c#
diff --git a/contributors/design-proposals/cluster-lifecycle/dramatically-simplify-cluster-creation.md b/contributors/design-proposals/cluster-lifecycle/dramatically-simplify-cluster-creation.md
index a76e11da..3472115d 100644
--- a/contributors/design-proposals/cluster-lifecycle/dramatically-simplify-cluster-creation.md
+++ b/contributors/design-proposals/cluster-lifecycle/dramatically-simplify-cluster-creation.md
@@ -3,7 +3,7 @@
> ***Please note: this proposal doesn't reflect final implementation, it's here for the purpose of capturing the original ideas.***
> ***You should probably [read `kubeadm` docs](http://kubernetes.io/docs/getting-started-guides/kubeadm/), to understand the end-result of this effor.***
-Luke Marsden & many others in [SIG-cluster-lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle).
+Luke Marsden & many others in [SIG-cluster-lifecycle](/sig-cluster-lifecycle).
17th August 2016
diff --git a/contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md b/contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
index c570b2f0..9152f251 100644
--- a/contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
+++ b/contributors/design-proposals/cluster-lifecycle/self-hosted-kubernetes.md
@@ -100,4 +100,4 @@ Kubernetes self-hosted is working today. Bootkube is an implementation of the "t
- [Health check endpoints for components don't work correctly](https://github.com/kubernetes-incubator/bootkube/issues/64#issuecomment-228144345)
- [kubeadm does do self-hosted, but isn't tested yet](https://github.com/kubernetes/kubernetes/pull/40075)
-- The Kubernetes [versioning policy](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md) allows for version skew of kubelet and control plane but not skew between control plane components themselves. We must add testing and validation to Kubernetes that this skew works. Otherwise the work to make Kubernetes HA is rather pointless if it can't be upgraded in an HA manner as well.
+- The Kubernetes [versioning policy](/contributors/design-proposals/release/versioning.md) allows for version skew of kubelet and control plane but not skew between control plane components themselves. We must add testing and validation to Kubernetes that this skew works. Otherwise the work to make Kubernetes HA is rather pointless if it can't be upgraded in an HA manner as well.
diff --git a/contributors/design-proposals/instrumentation/core-metrics-pipeline.md b/contributors/design-proposals/instrumentation/core-metrics-pipeline.md
index 433c97e8..1c9d9f70 100644
--- a/contributors/design-proposals/instrumentation/core-metrics-pipeline.md
+++ b/contributors/design-proposals/instrumentation/core-metrics-pipeline.md
@@ -28,22 +28,22 @@ This document proposes a design for the set of metrics included in an eventual C
### Definitions
"Kubelet": The daemon that runs on every kubernetes node and controls pod and container lifecycle, among many other things.
["cAdvisor":](https://github.com/google/cadvisor) An open source container monitoring solution which only monitors containers, and has no concept of kubernetes constructs like pods or volumes.
-["Summary API":](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go) A kubelet API which currently exposes node metrics for use by both system components and monitoring systems.
-["CRI":](https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md) The Container Runtime Interface designed to provide an abstraction over runtimes (docker, rkt, etc).
-"Core Metrics": A set of metrics described in the [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md) whose purpose is to provide metrics for first-class resource isolation and utilization features, including [resource feasibility checking](https://github.com/eBay/Kubernetes/blob/master/docs/design/resources.md#the-resource-model) and node resource management.
+["Summary API":](https://git.k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1/types.go) A kubelet API which currently exposes node metrics for use by both system components and monitoring systems.
+["CRI":](/contributors/devel/container-runtime-interface.md) The Container Runtime Interface designed to provide an abstraction over runtimes (docker, rkt, etc).
+"Core Metrics": A set of metrics described in the [Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md) whose purpose is to provide metrics for first-class resource isolation and utilization features, including [resource feasibility checking](https://github.com/eBay/Kubernetes/blob/master/docs/design/resources.md#the-resource-model) and node resource management.
"Resource": A consumable element of a node (e.g. memory, disk space, CPU time, etc).
"First-class Resource": A resource critical for scheduling, whose requests and limits can be (or soon will be) set via the Pod/Container Spec.
"Metric": A measure of consumption of a Resource.
### Background
-The [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md) proposal contains a blueprint for a set of metrics referred to as "Core Metrics". The purpose of this proposal is to specify what those metrics are, to enable work relating to the collection, by the kubelet, of the metrics.
+The [Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md) proposal contains a blueprint for a set of metrics referred to as "Core Metrics". The purpose of this proposal is to specify what those metrics are, to enable work relating to the collection, by the kubelet, of the metrics.
-Kubernetes vendors cAdvisor into its codebase, and the kubelet uses cAdvisor as a library that enables it to collect metrics on containers. The kubelet can then combine container-level metrics from cAdvisor with the kubelet's knowledge of kubernetes constructs (e.g. pods) to produce the kubelet Summary statistics, which provides metrics for use by the kubelet, or by users through the [Summary API](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go). cAdvisor works by collecting metrics at an interval (10 seconds, by default), and the kubelet then simply queries these cached metrics whenever it has a need for them.
+Kubernetes vendors cAdvisor into its codebase, and the kubelet uses cAdvisor as a library that enables it to collect metrics on containers. The kubelet can then combine container-level metrics from cAdvisor with the kubelet's knowledge of kubernetes constructs (e.g. pods) to produce the kubelet Summary statistics, which provides metrics for use by the kubelet, or by users through the [Summary API](https://git.k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1/types.go). cAdvisor works by collecting metrics at an interval (10 seconds, by default), and the kubelet then simply queries these cached metrics whenever it has a need for them.
-Currently, cAdvisor collects a large number of metrics related to system and container performance. However, only some of these metrics are consumed by the kubelet summary API, and many are not used. The kubelet [Summary API](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go) is published to the kubelet summary API endpoint (stats/summary). Some of the metrics provided by the summary API are consumed by kubernetes system components, but many are included for the sole purpose of providing metrics for monitoring.
+Currently, cAdvisor collects a large number of metrics related to system and container performance. However, only some of these metrics are consumed by the kubelet summary API, and many are not used. The kubelet [Summary API](https://git.k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1/types.go) is published to the kubelet summary API endpoint (stats/summary). Some of the metrics provided by the summary API are consumed by kubernetes system components, but many are included for the sole purpose of providing metrics for monitoring.
### Motivations
-The [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md) proposal explains why a separate monitoring pipeline is required.
+The [Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md) proposal explains why a separate monitoring pipeline is required.
By publishing core metrics, the kubelet is relieved of its responsibility to provide metrics for monitoring.
The third party monitoring pipeline also is relieved of any responsibility to provide these metrics to system components.
@@ -56,7 +56,7 @@ This proposal is to use this set of core metrics, collected by the kubelet, and
The target "Users" of this set of metrics are kubernetes components (though not necessarily directly). This set of metrics itself is not designed to be user-facing, but is designed to be general enough to support user-facing components.
### Non Goals
-Everything covered in the [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md) design doc will not be covered in this proposal. This includes the third party metrics pipeline, and the methods by which the metrics found in this proposal are provided to other kubernetes components.
+Everything covered in the [Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md) design doc will not be covered in this proposal. This includes the third party metrics pipeline, and the methods by which the metrics found in this proposal are provided to other kubernetes components.
Integration with CRI will not be covered in this proposal. In future proposals, integrating with CRI may provide a better abstraction of information required by the core metrics pipeline to collect metrics.
@@ -82,7 +82,7 @@ Metrics requirements for "First Class Resource Isolation and Utilization Feature
- Kubelet
- Node-level usage metrics for Filesystems, CPU, and Memory
- Pod-level usage metrics for Filesystems and Memory
- - Metrics Server (outlined in [Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md)), which exposes the [Resource Metrics API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md) to the following system components:
+ - Metrics Server (outlined in [Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md)), which exposes the [Resource Metrics API](/contributors/design-proposals/instrumentation/resource-metrics-api.md) to the following system components:
- Scheduler
- Node-level usage metrics for Filesystems, CPU, and Memory
- Pod-level usage metrics for Filesystems, CPU, and Memory
diff --git a/contributors/design-proposals/instrumentation/metrics-server.md b/contributors/design-proposals/instrumentation/metrics-server.md
index 022f8989..344addf6 100644
--- a/contributors/design-proposals/instrumentation/metrics-server.md
+++ b/contributors/design-proposals/instrumentation/metrics-server.md
@@ -5,7 +5,7 @@ Resource Metrics API is an effort to provide a first-class Kubernetes API
(stable, versioned, discoverable, available through apiserver and with client support)
that serves resource usage metrics for pods and nodes. The use cases were discussed
and the API was proposed a while ago in
-[another proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md).
+[another proposal](/contributors/design-proposals/instrumentation/resource-metrics-api.md).
This document describes the architecture and the design of the second part of this effort:
making the mentioned API available in the same way as the other Kubernetes APIs.
@@ -43,18 +43,18 @@ Previously metrics server was blocked on this dependency.
### Design ###
Metrics server will be implemented in line with
-[Kubernetes monitoring architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md)
+[Kubernetes monitoring architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md)
and inspired by [Heapster](https://github.com/kubernetes/heapster).
It will be a cluster level component which periodically scrapes metrics from all Kubernetes nodes
served by Kubelet through Summary API. Then metrics will be aggregated,
stored in memory (see Scalability limitations) and served in
-[Metrics API](https://github.com/kubernetes/metrics/blob/master/pkg/apis/metrics/v1alpha1/types.go) format.
+[Metrics API](https://git.k8s.io/metrics/pkg/apis/metrics/v1alpha1/types.go) format.
Metrics server will use apiserver library to implement http server functionality.
The library offers common Kubernetes functionality like authorization/authentication,
versioning, support for auto-generated client. To store data in memory we will replace
the default storage layer (etcd) by introducing in-memory store which will implement
-[Storage interface](https://github.com/kubernetes/apiserver/blob/master/pkg/registry/rest/rest.go).
+[Storage interface](https://git.k8s.io/apiserver/pkg/registry/rest/rest.go).
Only the most recent value of each metric will be remembered. If a user needs an access
to historical data they should either use 3rd party monitoring solution or
@@ -71,13 +71,13 @@ due to security reasons (our policy allows only connection in the opposite direc
There will be only one instance of metrics server running in each cluster. In order to handle
high metrics volume, metrics server will be vertically autoscaled by
-[addon-resizer](https://github.com/kubernetes/contrib/tree/master/addon-resizer).
+[addon-resizer](https://git.k8s.io/contrib/addon-resizer).
We will measure its resource usage characteristic. Our experience from profiling Heapster shows
that it scales vertically effectively. If we hit performance limits we will consider scaling it
horizontally, though it’s rather complicated and is out of the scope of this doc.
Metrics server will be Kubernetes addon, create by kube-up script and managed by
-[addon-manager](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager).
+[addon-manager](https://git.k8s.io/kubernetes/cluster/addons/addon-manager).
Since there is a number of dependant components, it will be marked as a critical addon.
In the future when the priority/preemption feature is introduced we will migrate to use this
proper mechanism for marking it as a high-priority, system component.
diff --git a/contributors/design-proposals/instrumentation/performance-related-monitoring.md b/contributors/design-proposals/instrumentation/performance-related-monitoring.md
index 91b799f1..f2b75813 100644
--- a/contributors/design-proposals/instrumentation/performance-related-monitoring.md
+++ b/contributors/design-proposals/instrumentation/performance-related-monitoring.md
@@ -57,7 +57,7 @@ Basic ideas:
### REST call monitoring
We do measure REST call duration in the Density test, but we need an API server monitoring as well, to avoid false failures caused e.g. by the network traffic. We already have
-some metrics in place (https://github.com/kubernetes/kubernetes/blob/master/pkg/apiserver/metrics/metrics.go), but we need to revisit the list and add some more.
+some metrics in place (https://git.k8s.io/kubernetes/pkg/apiserver/metrics/metrics.go), but we need to revisit the list and add some more.
Basic ideas:
- number of calls per verb, client, resource type
diff --git a/contributors/design-proposals/multi-platform.md b/contributors/design-proposals/multi-platform.md
index fcaa0484..923472e6 100644
--- a/contributors/design-proposals/multi-platform.md
+++ b/contributors/design-proposals/multi-platform.md
@@ -73,7 +73,7 @@ This is a fairly long topic. If you're interested how to cross-compile, see [det
The easiest way of running Kubernetes on another architecture at the time of writing is probably by using the docker-multinode deployment. Of course, you may choose whatever deployment you want, the binaries are easily downloadable from the URL above.
-[docker-multinode](https://github.com/kubernetes/kube-deploy/tree/master/docker-multinode) is intended to be a "kick-the-tires" multi-platform solution with Docker as the only real dependency (but it's not production ready)
+[docker-multinode](https://git.k8s.io/kube-deploy/docker-multinode) is intended to be a "kick-the-tires" multi-platform solution with Docker as the only real dependency (but it's not production ready)
But when we (`sig-cluster-lifecycle`) have standardized the deployments to about three and made them production ready; at least one deployment should support **all platforms**.
@@ -377,7 +377,7 @@ In order to dynamically compile a go binary with `cgo`, we need `gcc` installed
The only Kubernetes binary that is using C code is the `kubelet`, or in fact `cAdvisor` on which `kubelet` depends. `hyperkube` is also dynamically linked as long as `kubelet` is. We should aim to make `kubelet` statically linked.
-The normal `x86_64-linux-gnu` can't cross-compile binaries, so we have to install gcc cross-compilers for every platform. We do this in the [`kube-cross`](https://github.com/kubernetes/kubernetes/blob/master/build/build-image/cross/Dockerfile) image,
+The normal `x86_64-linux-gnu` can't cross-compile binaries, so we have to install gcc cross-compilers for every platform. We do this in the [`kube-cross`](https://git.k8s.io/kubernetes/build/build-image/cross/Dockerfile) image,
and depend on the [`emdebian.org` repository](https://wiki.debian.org/CrossToolchains). Depending on `emdebian` isn't ideal, so we should consider using the latest `gcc` cross-compiler packages from the `ubuntu` main repositories in the future.
Here's an example when cross-compiling plain C code:
diff --git a/contributors/design-proposals/multicluster/cluster-registry/api-design.md b/contributors/design-proposals/multicluster/cluster-registry/api-design.md
index c9c8614d..2133f499 100644
--- a/contributors/design-proposals/multicluster/cluster-registry/api-design.md
+++ b/contributors/design-proposals/multicluster/cluster-registry/api-design.md
@@ -63,7 +63,7 @@ use cases.
## API
This document defines the cluster registry API. It is an evolution of the
-[current Federation cluster API](https://github.com/kubernetes/federation/blob/master/apis/federation/types.go#L99),
+[current Federation cluster API](https://git.k8s.io/federation/apis/federation/types.go#L99),
and is designed more specifically for the "cluster registry" use case in
contrast to the Federation `Cluster` object, which was made for the
active-control-plane Federation.
@@ -84,7 +84,7 @@ Optional API operations:
support WATCH for this API. Implementations can choose to support or not
support this operation. An implementation that does not support the
operation should return HTTP error 405, StatusMethodNotAllowed, per the
- [relevant Kubernetes API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#error-codes).
+ [relevant Kubernetes API conventions](/contributors/devel/api-conventions.md#error-codes).
We also intend to support a use case where the server returns a file that can be
stored for later use. We expect this to be doable with the standard API
@@ -92,7 +92,7 @@ machinery; and if the API is implemented not using the Kubernetes API machinery,
that the returned file must be interoperable with the response from a Kubernetes
API server.
-[The API](https://github.com/kubernetes/cluster-registry/blob/master/pkg/apis/clusterregistry/v1alpha1/types.go)
+[The API](https://git.k8s.io/cluster-registry/pkg/apis/clusterregistry/v1alpha1/types.go)
is defined in the cluster registry repo, and is not replicated here in order to
avoid mismatches.
@@ -107,7 +107,7 @@ objects that contain a value for the `ClusterName` field. The `Cluster` object's
of namespace scoped.
The `Cluster` object will have `Spec` and `Status` fields, following the
-[Kubernetes API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status).
+[Kubernetes API conventions](/contributors/devel/api-conventions.md#spec-and-status).
There was argument in favor of a `State` field instead of `Spec` and `Status`
fields, since the `Cluster` in the registry does not necessarily hold a user's
intent about the cluster being represented, but instead may hold descriptive
@@ -141,7 +141,7 @@ extended appropriately.
The cluster registry API will not provide strongly-typed objects for returning
auth info. Instead, it will provide a generic type that clients can use as they
see fit. This is intended to mirror what `kubectl` does with its
-[AuthProviderConfig](https://github.com/kubernetes/client-go/blob/master/tools/clientcmd/api/types.go#L144).
+[AuthProviderConfig](https://git.k8s.io/client-go/tools/clientcmd/api/types.go#L144).
As open standards are developed for cluster auth, the API can be extended to
provide first-class support for these. We want to avoid baking non-open
standards into the API, and so having to support potentially a multiplicity of
diff --git a/contributors/design-proposals/multicluster/federated-placement-policy.md b/contributors/design-proposals/multicluster/federated-placement-policy.md
index d613422d..c30374ea 100644
--- a/contributors/design-proposals/multicluster/federated-placement-policy.md
+++ b/contributors/design-proposals/multicluster/federated-placement-policy.md
@@ -28,7 +28,7 @@ A simple example of a placement policy is
> compliance.
The [Kubernetes Cluster
-Federation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/federation.md#policy-engine-and-migrationreplication-controllers)
+Federation](/contributors/design-proposals/multicluster/federation.md#policy-engine-and-migrationreplication-controllers)
design proposal includes a pluggable policy engine component that decides how
applications/resources are placed across federated clusters.
@@ -283,7 +283,7 @@ When the remediator component (in the sidecar) receives the notification it
sends a PATCH request to the federation-apiserver to update the affected
resource. This way, the actual rebalancing of ReplicaSets is still handled by
the [Rescheduling
-Algorithm](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/federated-replicasets.md)
+Algorithm](/contributors/design-proposals/multicluster/federated-replicasets.md)
in the Federated ReplicaSet controller.
The remediator component must be deployed with a kubeconfig for the
diff --git a/contributors/design-proposals/multicluster/federation-clusterselector.md b/contributors/design-proposals/multicluster/federation-clusterselector.md
index 154412c7..c12e4233 100644
--- a/contributors/design-proposals/multicluster/federation-clusterselector.md
+++ b/contributors/design-proposals/multicluster/federation-clusterselector.md
@@ -34,7 +34,7 @@ Carrying forward the examples from above...
## Design
-The proposed design uses a ClusterSelector annotation that has a value that is parsed into a struct definition that follows the same design as the [NodeSelector type used w/ nodeAffinity](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L1972) and will also use the [Matches function](https://github.com/kubernetes/apimachinery/blob/master/pkg/labels/selector.go#L172) of the apimachinery project to determine if an object should be sent on to federated clusters or not.
+The proposed design uses a ClusterSelector annotation whose value is parsed into a struct that follows the same design as the [NodeSelector type used w/ nodeAffinity](https://git.k8s.io/kubernetes/pkg/api/types.go#L1972). It also uses the [Matches function](https://git.k8s.io/apimachinery/pkg/labels/selector.go#L172) of the apimachinery project to determine whether an object should be sent on to federated clusters or not.
In situations where an object is not to be forwarded to federated clusters, a delete API call will be made instead, using the object definition. If the object does not exist, the delete is ignored.
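
As a rough illustration of the matching step described above, the sketch below uses the apimachinery label-selector `Matches` call against an illustrative set of cluster labels. The real annotation value is a NodeSelector-style struct; it is simplified here to a plain selector string, and the labels are made up.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// Hypothetical ClusterSelector value, simplified to a label-selector string.
	selector, err := labels.Parse("environment in (test),tier!=frontend")
	if err != nil {
		panic(err)
	}

	// Illustrative labels on a federated cluster.
	clusterLabels := labels.Set{"environment": "test", "tier": "backend"}

	if selector.Matches(clusterLabels) {
		fmt.Println("object would be forwarded to this federated cluster")
	} else {
		fmt.Println("object would instead be deleted from this cluster")
	}
}
```
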
diff --git a/contributors/design-proposals/network/coredns.md b/contributors/design-proposals/network/coredns.md
index 78af9b1e..50217366 100644
--- a/contributors/design-proposals/network/coredns.md
+++ b/contributors/design-proposals/network/coredns.md
@@ -10,7 +10,7 @@ Implementation Owner: @johnbelamaric
CoreDNS is another CNCF project and is the successor to SkyDNS, which kube-dns is based on. It is a flexible, extensible
authoritative DNS server and directly integrates with the Kubernetes API. It can serve as cluster DNS,
-complying with the [dns spec](https://github.com/kubernetes/dns/blob/master/docs/specification.md).
+complying with the [dns spec](https://git.k8s.io/dns/docs/specification.md).
CoreDNS has fewer moving parts than kube-dns, since it is a single executable and single process. It is written in Go so
it is memory-safe (kube-dns includes dnsmasq which is not). It supports a number of use cases that kube-dns does not
@@ -80,7 +80,7 @@ of the lines within `{ }` represent individual plugins:
* `cache 30` enables [caching](https://coredns.io/plugins/cache/) of positive and negative responses for 30 seconds
* `health` opens an HTTP port to allow [health checks](https://coredns.io/plugins/health) from Kubernetes
* `prometheus` enables Prometheus [metrics](https://coredns.io/plugins/metrics)
- * `kubernetes 10.0.0.0/8 cluster.local` connects to the Kubernetes API and [serves records](https://coredns.io/plugins/kubernetes/) for the `cluster.local` domain and reverse DNS for 10.0.0.0/8 per the [spec](https://github.com/kubernetes/dns/blob/master/docs/specification.md)
+ * `kubernetes 10.0.0.0/8 cluster.local` connects to the Kubernetes API and [serves records](https://coredns.io/plugins/kubernetes/) for the `cluster.local` domain and reverse DNS for 10.0.0.0/8 per the [spec](https://git.k8s.io/dns/docs/specification.md)
* `proxy . /etc/resolv.conf` [forwards](https://coredns.io/plugins/proxy) any queries not handled by other plugins (the `.` means the root domain) to the nameservers configured in `/etc/resolv.conf`
### Configuring Stub Domains
diff --git a/contributors/design-proposals/network/pod-resolv-conf.md b/contributors/design-proposals/network/pod-resolv-conf.md
index 04a97df4..ed6e090f 100644
--- a/contributors/design-proposals/network/pod-resolv-conf.md
+++ b/contributors/design-proposals/network/pod-resolv-conf.md
@@ -206,5 +206,5 @@ The follow configurations will result in an invalid Pod spec:
# References
-* [Kubernetes DNS name specification](https://github.com/kubernetes/dns/blob/master/docs/specification.md)
+* [Kubernetes DNS name specification](https://git.k8s.io/dns/docs/specification.md)
* [`/etc/resolv.conf manpage`](http://manpages.ubuntu.com/manpages/zesty/man5/resolv.conf.5.html)
diff --git a/contributors/design-proposals/node/accelerator-monitoring.md b/contributors/design-proposals/node/accelerator-monitoring.md
index 122a6ffb..984ce656 100644
--- a/contributors/design-proposals/node/accelerator-monitoring.md
+++ b/contributors/design-proposals/node/accelerator-monitoring.md
@@ -83,7 +83,7 @@ From the summary API, they will flow to heapster and stackdriver.
- Performance/Utilization testing: impact on cAdvisor/kubelet resource usage. Impact on GPU performance when we collect metrics.
## Alternatives Rejected
-Why collect GPU metrics in cAdvisor? Why not collect them in [device plugins](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md)? The path forward if we collected GPU metrics in device plugin is not clear and may take a lot of time to get finalized.
+Why collect GPU metrics in cAdvisor? Why not collect them in [device plugins](/contributors/design-proposals/resource-management/device-plugin.md)? The path forward for collecting GPU metrics in device plugins is not clear and may take a long time to finalize.
Here’s a rough sketch of how things could work:
diff --git a/contributors/design-proposals/node/cpu-manager.md b/contributors/design-proposals/node/cpu-manager.md
index 4d6366d4..2dde3b6f 100644
--- a/contributors/design-proposals/node/cpu-manager.md
+++ b/contributors/design-proposals/node/cpu-manager.md
@@ -418,7 +418,7 @@ func (p *dynamicPolicy) RemoveContainer(s State, containerID string) error {
[cpuset-files]: http://man7.org/linux/man-pages/man7/cpuset.7.html#FILES
[ht]: http://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html
[hwloc]: https://www.open-mpi.org/projects/hwloc
-[node-allocatable]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/node-allocatable.md#phase-2---enforce-allocatable-on-pods
+[node-allocatable]: /contributors/design-proposals/node/node-allocatable.md#phase-2---enforce-allocatable-on-pods
[procfs]: http://man7.org/linux/man-pages/man5/proc.5.html
-[qos]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md
+[qos]: /contributors/design-proposals/node/resource-qos.md
[topo]: http://github.com/intelsdi-x/swan/tree/master/pkg/isolation/topo
diff --git a/contributors/design-proposals/node/kubelet-authorizer.md b/contributors/design-proposals/node/kubelet-authorizer.md
index 065c8aa0..0352ea94 100644
--- a/contributors/design-proposals/node/kubelet-authorizer.md
+++ b/contributors/design-proposals/node/kubelet-authorizer.md
@@ -180,5 +180,5 @@ Future work could further limit a kubelet's API access:
Features that expand or modify the APIs or objects accessed by the kubelet will need to involve the node authorizer.
Known features in the design or development stages that might modify kubelet API access are:
* [Dynamic kubelet configuration](https://github.com/kubernetes/features/issues/281)
-* [Local storage management](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/local-storage-overview.md)
+* [Local storage management](/contributors/design-proposals/storage/local-storage-overview.md)
* [Bulk watch of secrets/configmaps](https://github.com/kubernetes/community/pull/443)
diff --git a/contributors/design-proposals/node/kubelet-eviction.md b/contributors/design-proposals/node/kubelet-eviction.md
index c777e7c7..a96702cc 100644
--- a/contributors/design-proposals/node/kubelet-eviction.md
+++ b/contributors/design-proposals/node/kubelet-eviction.md
@@ -242,7 +242,7 @@ the `kubelet` will select a subsequent pod.
## Eviction Strategy
The `kubelet` will implement an eviction strategy oriented around
-[Priority](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-api.md)
+[Priority](/contributors/design-proposals/scheduling/pod-priority-api.md)
and pod usage relative to requests. It will target pods that are the lowest
Priority, and are the largest consumers of the starved resource relative to
their scheduling request.
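
A minimal sketch of that ranking, using an illustrative `evictionCandidate` type and made-up numbers rather than the kubelet's real data structures:

```go
package main

import (
	"fmt"
	"sort"
)

// evictionCandidate is an illustrative stand-in for what the kubelet knows about
// each pod when ranking eviction victims: its Priority, and how far its usage of
// the starved resource exceeds its request.
type evictionCandidate struct {
	name             string
	priority         int32
	usageOverRequest int64 // usage of the starved resource minus the request (illustrative units)
}

// rankForEviction orders candidates so the lowest-priority pods come first, and
// among equal priorities the largest consumers relative to request come first.
// This sketches the ordering described above, not the kubelet's actual code.
func rankForEviction(pods []evictionCandidate) {
	sort.SliceStable(pods, func(i, j int) bool {
		if pods[i].priority != pods[j].priority {
			return pods[i].priority < pods[j].priority
		}
		return pods[i].usageOverRequest > pods[j].usageOverRequest
	})
}

func main() {
	pods := []evictionCandidate{
		{"web", 1000, 50},
		{"batch-a", 0, 200},
		{"batch-b", 0, 20},
	}
	rankForEviction(pods)
	fmt.Println(pods) // batch-a is considered first, then batch-b, then web
}
```
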
diff --git a/contributors/design-proposals/node/kubelet-systemd.md b/contributors/design-proposals/node/kubelet-systemd.md
index 8c69dc8f..cef68d2a 100644
--- a/contributors/design-proposals/node/kubelet-systemd.md
+++ b/contributors/design-proposals/node/kubelet-systemd.md
@@ -135,7 +135,7 @@ The `kubelet` should associate node bootstrapping semantics to the configured
### Node allocatable
The proposal makes no changes to the definition as presented here:
-https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/node-allocatable.md
+https://git.k8s.io/kubernetes/docs/proposals/node-allocatable.md
The node will report a set of allocatable compute resources defined as follows:
diff --git a/contributors/design-proposals/node/runtime-pod-cache.md b/contributors/design-proposals/node/runtime-pod-cache.md
index 49236ba0..752741f1 100644
--- a/contributors/design-proposals/node/runtime-pod-cache.md
+++ b/contributors/design-proposals/node/runtime-pod-cache.md
@@ -28,7 +28,7 @@ pod cache, we can further improve Kubelet's CPU usage by
need to inspect containers with no state changes.
***Don't we already have a [container runtime cache]
-(https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/container/runtime_cache.go)?***
+(https://git.k8s.io/kubernetes/pkg/kubelet/container/runtime_cache.go)?***
The runtime cache is an optimization that reduces the number of `GetPods()`
calls from the workers. However,
diff --git a/contributors/design-proposals/node/sysctl.md b/contributors/design-proposals/node/sysctl.md
index 5c79a736..8ab61b8c 100644
--- a/contributors/design-proposals/node/sysctl.md
+++ b/contributors/design-proposals/node/sysctl.md
@@ -124,7 +124,7 @@ Some real-world examples for the use of sysctls:
- a containerized IPv6 routing daemon requires e.g. `/proc/sys/net/ipv6/conf/all/forwarding` and
`/proc/sys/net/ipv6/conf/all/accept_redirects` (compare
[docker#4717](https://github.com/docker/docker/issues/4717#issuecomment-98653017))
-- the [nginx ingress controller in kubernetes/contrib](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/sysctl/change-proc-values-rc.yaml#L80)
+- the [nginx ingress controller in kubernetes/contrib](https://git.k8s.io/contrib/ingress/controllers/nginx/examples/sysctl/change-proc-values-rc.yaml#L80)
uses a privileged sidekick container to set `net.core.somaxconn` and `net.ipv4.ip_local_port_range`.
- a huge software-as-a-service provider uses shared memory (`kernel.shm*`) and message queues (`kernel.msg*`) to
communicate between containers of their web-serving pods, configuring up to 20 GB of shared memory.
@@ -251,7 +251,7 @@ Issues:
## Design Alternatives and Considerations
- Each pod has its own network stack that is shared among its containers.
- A privileged side-kick or init container (compare https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/sysctl/change-proc-values-rc.yaml#L80)
+ A privileged side-kick or init container (compare https://git.k8s.io/contrib/ingress/controllers/nginx/examples/sysctl/change-proc-values-rc.yaml#L80)
is able to set `net.*` sysctls.
Clearly, this is completely uncontrolled by the kubelet, but is a usable work-around if privileged
diff --git a/contributors/design-proposals/node/troubleshoot-running-pods.md b/contributors/design-proposals/node/troubleshoot-running-pods.md
index 7399e8ef..b72102f3 100644
--- a/contributors/design-proposals/node/troubleshoot-running-pods.md
+++ b/contributors/design-proposals/node/troubleshoot-running-pods.md
@@ -730,7 +730,7 @@ coupling it with container images.
* [CRI Tracking Issue](https://issues.k8s.io/28789)
* [CRI: expose optional runtime features](https://issues.k8s.io/32803)
* [Resource QoS in
- Kubernetes](https://github.com/kubernetes/kubernetes/blob/master/docs/design/resource-qos.md)
+ Kubernetes](https://git.k8s.io/kubernetes/docs/design/resource-qos.md)
* Related Features
* [#1615](https://issues.k8s.io/1615) - Shared PID Namespace across
containers in a pod
diff --git a/contributors/design-proposals/scheduling/pod-preemption.md b/contributors/design-proposals/scheduling/pod-preemption.md
index 85ad47fb..46843fce 100644
--- a/contributors/design-proposals/scheduling/pod-preemption.md
+++ b/contributors/design-proposals/scheduling/pod-preemption.md
@@ -75,7 +75,7 @@ When scheduling a pending pod, scheduler tries to place the pod on a node that d
#### Important notes
-- When ordering the pods from lowest to highest priority for considering which pod(s) to preempt, among pods with equal priority the pods are ordered by their [QoS class](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md#qos-classes): Best Effort, Burstable, Guaranteed.
+- When ordering pods from lowest to highest priority to decide which pod(s) to preempt, pods with equal priority are ordered by their [QoS class](/contributors/design-proposals/node/resource-qos.md#qos-classes): Best Effort, Burstable, Guaranteed.
- Scheduler respects pods' disruption budget when considering them for preemption.
- Scheduler will try to minimize the number of preempted pods. As a result, it may preempt a pod while leaving lower priority pods running if preemption of those lower priority pods is not enough to schedule the pending pod while preemption of the higher priority pod(s) is enough to schedule the pending pod. For example, if node capacity is 10, the pending pod has priority 10 and requires 5 units of resource, and the running pods are {priority 0 request 3, priority 1 request 1, priority 2 request 5, priority 3 request 1}, the scheduler will preempt only the priority 2 pod and leave the priority 1 and priority 0 pods running (the arithmetic behind this example is sketched after this list).
- Scheduler does not have knowledge of the actual resource usage of pods. It makes scheduling decisions based on the requested resources ("requests") of the pods, and when it considers a pod for preemption, it assumes the "requests" to be freed on the node.
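
The arithmetic behind the example above, written out as a small runnable sketch; this is not the scheduler's algorithm, just the bookkeeping for the numbers in the example:

```go
package main

import "fmt"

func main() {
	type pod struct {
		priority int
		request  int
	}
	// Node capacity 10, fully used; the pending pod has priority 10 and requests 5 units.
	running := []pod{{0, 3}, {1, 1}, {2, 5}, {3, 1}}
	needed := 5

	// Preempting only the pods below priority 2 frees 3+1 = 4 units: not enough.
	freedLow := running[0].request + running[1].request
	fmt.Println("freed by priorities 0 and 1:", freedLow, "enough?", freedLow >= needed)

	// Preempting the single priority-2 pod frees exactly 5 units, so the scheduler
	// preempts it alone and leaves the lower-priority pods running.
	freedP2 := running[2].request
	fmt.Println("freed by the priority-2 pod:", freedP2, "enough?", freedP2 >= needed)
}
```
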
@@ -183,6 +183,6 @@ To solve the problem, the user might try running his web server as Guaranteed, b
# References
-- [Controlled Rescheduling in Kubernetes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/rescheduling.md)
+- [Controlled Rescheduling in Kubernetes](/contributors/design-proposals/scheduling/rescheduling.md)
- [Resource sharing architecture for batch and serving workloads in Kubernetes](https://docs.google.com/document/d/1-H2hnZap7gQivcSU-9j4ZrJ8wE_WwcfOkTeAGjzUyLA)
- [Design proposal for adding priority to Kubernetes API](https://github.com/kubernetes/community/pull/604/files) \ No newline at end of file
diff --git a/contributors/design-proposals/scheduling/pod-priority-api.md b/contributors/design-proposals/scheduling/pod-priority-api.md
index 785a9d62..8b5d7219 100644
--- a/contributors/design-proposals/scheduling/pod-priority-api.md
+++ b/contributors/design-proposals/scheduling/pod-priority-api.md
@@ -233,7 +233,7 @@ absolutely needed. Changing priority classes has the following disadvantages:
### Priority and QoS classes
Kubernetes has [three QoS
-classes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md#qos-classes)
+classes](/contributors/design-proposals/node/resource-qos.md#qos-classes)
which are derived from the request and limit of pods. Priority is introduced as an
independent concept, meaning that any QoS class may have any valid priority.
When a node is out of resources and pods need to be preempted, we give
diff --git a/contributors/design-proposals/scheduling/podaffinity.md b/contributors/design-proposals/scheduling/podaffinity.md
index 666caaea..89752150 100644
--- a/contributors/design-proposals/scheduling/podaffinity.md
+++ b/contributors/design-proposals/scheduling/podaffinity.md
@@ -313,7 +313,7 @@ scheduler to not put more than one pod from S in the same zone, and thus by
definition it will not put more than one pod from S on the same node, assuming
each node is in one zone. This rule is more useful as PreferredDuringScheduling
anti-affinity, e.g. one might expect it to be common in
-[Cluster Federation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/federation.md) clusters.)
+[Cluster Federation](/contributors/design-proposals/multicluster/federation.md) clusters.)
* **Don't co-locate pods of this service with pods from service "evilService"**:
`{LabelSelector: selector that matches evilService's pods, TopologyKey: "node"}`
diff --git a/contributors/design-proposals/scheduling/scheduler_extender.md b/contributors/design-proposals/scheduling/scheduler_extender.md
index 6a1ca16a..de7a6259 100644
--- a/contributors/design-proposals/scheduling/scheduler_extender.md
+++ b/contributors/design-proposals/scheduling/scheduler_extender.md
@@ -2,7 +2,7 @@
There are three ways to add new scheduling rules (predicates and priority
functions) to Kubernetes: (1) by adding these rules to the scheduler and
-recompiling, [described here](https://github.com/kubernetes/community/blob/master/contributors/devel/scheduler.md),
+recompiling, [described here](/contributors/devel/scheduler.md),
(2) implementing your own scheduler process that runs instead of, or alongside
of, the standard Kubernetes scheduler, (3) implementing a "scheduler extender"
process that the standard Kubernetes scheduler calls out to as a final pass when
diff --git a/contributors/design-proposals/storage/container-storage-interface.md b/contributors/design-proposals/storage/container-storage-interface.md
index 133930a4..c72d1528 100644
--- a/contributors/design-proposals/storage/container-storage-interface.md
+++ b/contributors/design-proposals/storage/container-storage-interface.md
@@ -29,7 +29,7 @@ Kubernetes volume plugins are currently “in-tree” meaning they are linked, c
4. Volume plugins get full privileges of kubernetes components (kubelet and kube-controller-manager).
5. Plugin developers are forced to make plugin source code available, and can not choose to release just a binary.
-The existing [Flex Volume](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) plugin attempted to address this by exposing an exec based API for mount/unmount/attach/detach. Although it enables third party storage vendors to write drivers out-of-tree, it requires access to the root filesystem of node and master machines in order to deploy the third party driver files.
+The existing [Flex Volume](/contributors/devel/flexvolume.md) plugin attempted to address this by exposing an exec based API for mount/unmount/attach/detach. Although it enables third party storage vendors to write drivers out-of-tree, it requires access to the root filesystem of node and master machines in order to deploy the third party driver files.
Additionally, it doesn’t address another pain point of in-tree volume plugins: dependencies. Volume plugins tend to have many external requirements: dependencies on mount and filesystem tools, for example. These dependencies are assumed to be available on the underlying host OS, which often is not the case, and installing them requires direct machine access. There are efforts underway, for example https://github.com/kubernetes/community/pull/589, that hope to address this for in-tree volume plugins. But enabling volume plugins to be completely containerized will make dependency management much easier.
@@ -56,7 +56,7 @@ The objective of this document is to document all the requirements for enabling
* Recommend deployment process for Kubernetes compatible, third-party CSI Volume drivers on a Kubernetes cluster.
## Non-Goals
-* Replace [Flex Volume plugin](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md)
+* Replace [Flex Volume plugin](/contributors/devel/flexvolume.md)
* The Flex volume plugin exists as an exec based mechanism to create “out-of-tree” volume plugins.
* Because Flex drivers exist and depend on the Flex interface, it will continue to be supported with a stable API.
* The CSI Volume plugin will co-exist with Flex volume plugin.
@@ -85,9 +85,9 @@ This document recommends a standard mechanism for deploying an arbitrary contain
Kubelet (responsible for mount and unmount) will communicate with an external “CSI volume driver” running on the same host machine (whether containerized or not) via a Unix Domain Socket.
-CSI volume drivers should create a socket at the following path on the node machine: `/var/lib/kubelet/plugins/[SanitizedCSIDriverName]/csi.sock`. For alpha, kubelet will assume this is the location for the Unix Domain Socket to talk to the CSI volume driver. For the beta implementation, we can consider using the [Device Plugin Unix Domain Socket Registration](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md#unix-socket) mechanism to register the Unix Domain Socket with kubelet. This mechanism would need to be extended to support registration of both CSI volume drivers and device plugins independently.
+CSI volume drivers should create a socket at the following path on the node machine: `/var/lib/kubelet/plugins/[SanitizedCSIDriverName]/csi.sock`. For alpha, kubelet will assume this is the location for the Unix Domain Socket to talk to the CSI volume driver. For the beta implementation, we can consider using the [Device Plugin Unix Domain Socket Registration](/contributors/design-proposals/resource-management/device-plugin.md#unix-socket) mechanism to register the Unix Domain Socket with kubelet. This mechanism would need to be extended to support registration of both CSI volume drivers and device plugins independently.
-`Sanitized CSIDriverName` is CSI driver name that does not contain dangerous character and can be used as annotation name. It can follow the same pattern that we use for [volume plugins](https://github.com/kubernetes/kubernetes/blob/master/pkg/util/strings/escape.go#L27). Too long or too ugly driver names can be rejected, i.e. all components described in this document will report an error and won't talk to this CSI driver. Exact sanitization method is implementation detail (SHA in the worst case).
+`Sanitized CSIDriverName` is the CSI driver name with any dangerous characters removed so that it can be used as an annotation name. It can follow the same pattern that we use for [volume plugins](https://git.k8s.io/kubernetes/pkg/util/strings/escape.go#L27). Driver names that are too long or too ugly can be rejected, i.e. all components described in this document will report an error and won't talk to this CSI driver. The exact sanitization method is an implementation detail (SHA in the worst case).
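
For illustration only, one possible sanitization approach is sketched below; the character class, replacement character, and driver name are assumptions, since the proposal leaves the exact method to the implementation.

```go
package main

import (
	"fmt"
	"regexp"
)

// unsafeChars matches anything not obviously safe in an annotation name or a
// directory name. The exact rule is an implementation detail per the proposal;
// this pattern is only an illustration.
var unsafeChars = regexp.MustCompile(`[^a-zA-Z0-9.-]`)

// sanitizeDriverName replaces unsafe characters so a CSI driver name such as
// "com.example/my-csi-driver" can be used in the socket path and node annotation.
func sanitizeDriverName(name string) string {
	return unsafeChars.ReplaceAllString(name, "_")
}

func main() {
	driver := "com.example/my-csi-driver" // hypothetical CSI driver name
	sanitized := sanitizeDriverName(driver)
	fmt.Printf("socket path: /var/lib/kubelet/plugins/%s/csi.sock\n", sanitized)
}
```
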
Upon initialization of the external “CSI volume driver”, some external component must call the CSI method `GetNodeId` to get the mapping from Kubernetes Node names to CSI driver NodeID. It must then add the CSI driver NodeID to the `csi.volume.kubernetes.io/nodeid` annotation on the Kubernetes Node API object. The key of the annotation must be `csi.volume.kubernetes.io/nodeid`. The value of the annotation is a JSON blob, containing key/value pairs for each CSI driver.
@@ -385,7 +385,7 @@ To deploy a containerized third-party CSI volume driver, it is recommended that
* This is the primary means of communication between Kubelet and the “CSI volume driver” container (gRPC over UDS).
* Have cluster admins deploy the above `StatefulSet` and `DaemonSet` to add support for the storage system in their Kubernetes cluster.
-Alternatively, deployment could be simplified by having all components (including external-provisioner and external-attacher) in the same pod (DaemonSet). Doing so, however, would consume more resources, and require a leader election protocol (likely https://github.com/kubernetes/contrib/tree/master/election) in the `external-provisioner` and `external-attacher` components.
+Alternatively, deployment could be simplified by having all components (including external-provisioner and external-attacher) in the same pod (DaemonSet). Doing so, however, would consume more resources, and require a leader election protocol (likely https://git.k8s.io/contrib/election) in the `external-provisioner` and `external-attacher` components.
### Example Walkthrough
@@ -477,7 +477,7 @@ Because the kubelet would be responsible for fetching and passing the mount secr
### Extending PersistentVolume Object
-Instead of creating a new `VolumeAttachment` object, another option we considered was extending the exiting `PersistentVolume` object.
+Instead of creating a new `VolumeAttachment` object, another option we considered was extending the existing `PersistentVolume` object.
`PersistentVolumeSpec` would be extended to include:
* List of nodes to attach the volume to (initially empty).
@@ -485,4 +485,4 @@ Instead of creating a new `VolumeAttachment` object, another option we considere
`PersistentVolumeStatus` would be extended to include:
* List of nodes the volume was successfully attached to.
-We dismissed this approach because having attach/detach triggered by the creation/deletion of an object is much easier to manage (for both external-attacher and Kubernetes) and more robust (fewer corner cases to worry about). \ No newline at end of file
+We dismissed this approach because having attach/detach triggered by the creation/deletion of an object is much easier to manage (for both external-attacher and Kubernetes) and more robust (fewer corner cases to worry about).
diff --git a/contributors/design-proposals/storage/flexvolume-deployment.md b/contributors/design-proposals/storage/flexvolume-deployment.md
index 07f75904..0b40748b 100644
--- a/contributors/design-proposals/storage/flexvolume-deployment.md
+++ b/contributors/design-proposals/storage/flexvolume-deployment.md
@@ -10,7 +10,7 @@ Beginning in version 1.8, the Kubernetes Storage SIG is putting a stop to accept
[CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md) provides a single interface that storage vendors can implement in order for their storage solutions to work across many different container orchestrators, and volume plugins are out-of-tree by design. This is a large effort, the full implementation of CSI is several quarters away, and there is a need for an immediate solution for storage vendors to continue adding volume plugins.
-[Flexvolume](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) is an in-tree plugin that has the ability to run any storage solution by executing volume commands against a user-provided driver on the Kubernetes host, and this currently exists today. However, the process of setting up Flexvolume is very manual, pushing it out of consideration for many users. Problems include having to copy the driver to a specific location in each node, manually restarting kubelet, and user's limited access to machines.
+[Flexvolume](/contributors/devel/flexvolume.md) is an in-tree plugin that can run any storage solution by executing volume commands against a user-provided driver on the Kubernetes host, and it exists today. However, the process of setting up Flexvolume is very manual, which pushes it out of consideration for many users. Problems include having to copy the driver to a specific location on each node, having to manually restart kubelet, and users' limited access to machines.
An automated deployment technique is discussed in [Recommended Driver Deployment Method](#recommended-driver-deployment-method). The crucial change required to enable this method is allowing kubelet and controller manager to dynamically discover plugin changes.
diff --git a/contributors/design-proposals/storage/volume-metrics.md b/contributors/design-proposals/storage/volume-metrics.md
index 59c15fb9..4bea9645 100644
--- a/contributors/design-proposals/storage/volume-metrics.md
+++ b/contributors/design-proposals/storage/volume-metrics.md
@@ -17,7 +17,7 @@ higher than individual volume plugins.
### Metric format and collection
Volume metrics emitted will fall under the category of service metrics
-as defined in [Kubernetes Monitoring Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md).
+as defined in [Kubernetes Monitoring Architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md).
The metrics will be emitted using [Prometheus format](https://prometheus.io/docs/instrumenting/exposition_formats/) and will be available for collection
@@ -27,7 +27,7 @@ from `/metrics` HTTP endpoint of kubelet and controller-manager.
Any collector which can parse Prometheus metric format should be able to collect
metrics from these endpoints.
-A more detailed description of monitoring pipeline can be found in [Monitoring architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md#monitoring-pipeline) document.
+A more detailed description of the monitoring pipeline can be found in the [Monitoring architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md#monitoring-pipeline) document.
### Metric Types
diff --git a/contributors/devel/README.md b/contributors/devel/README.md
index 9678912f..8650ad51 100644
--- a/contributors/devel/README.md
+++ b/contributors/devel/README.md
@@ -70,7 +70,7 @@ Guide](http://kubernetes.io/docs/admin/).
Authorization applies to all HTTP requests on the main apiserver port.
This doc explains the available authorization implementations.
-* **Admission Control Plugins** ([admission_control](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission_control.md))
+* **Admission Control Plugins** ([admission_control](/contributors/design-proposals/api-machinery/admission_control.md))
## Building releases
diff --git a/contributors/devel/api_changes.md b/contributors/devel/api_changes.md
index 293d2fe9..8104946c 100644
--- a/contributors/devel/api_changes.md
+++ b/contributors/devel/api_changes.md
@@ -495,7 +495,7 @@ The generators that create go code have a `--go-header-file` flag
which should be a file that contains the header that should be
included. This header is the copyright that should be present at the
top of the generated file and should be checked with the
-[`repo-infra/verify/verify-boilerplane.sh`](https://github.com/kubernetes/repo-infra/blob/master/verify/verify-boilerplate.sh)
+[`repo-infra/verify/verify-boilerplate.sh`](https://git.k8s.io/repo-infra/verify/verify-boilerplate.sh)
script at a later stage of the build.
To invoke these generators, you can run `make update`, which runs a bunch of
@@ -829,7 +829,7 @@ The preferred approach adds an alpha field to the existing object, and ensures i
1. Add a feature gate to the API server to control enablement of the new field (and associated function):
- In [staging/src/k8s.io/apiserver/pkg/features/kube_features.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/features/kube_features.go):
+ In [staging/src/k8s.io/apiserver/pkg/features/kube_features.go](https://git.k8s.io/kubernetes/staging/src/k8s.io/apiserver/pkg/features/kube_features.go):
```go
// owner: @you
diff --git a/contributors/devel/architectural-roadmap.md b/contributors/devel/architectural-roadmap.md
index 9d8a6f31..afe37b1a 100644
--- a/contributors/devel/architectural-roadmap.md
+++ b/contributors/devel/architectural-roadmap.md
@@ -252,7 +252,7 @@ Kubernetes cannot function without this basic API machinery and semantics, inclu
factor out functionality from existing components in running
clusters. At its core would be a pull-based declarative reconciler,
as provided by the [current add-on
- manager](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager)
+ manager](https://git.k8s.io/kubernetes/cluster/addons/addon-manager)
and as described in the [whitebox app management
doc](https://docs.google.com/document/d/1S3l2F40LCwFKg6WG0srR6056IiZJBwDmDvzHWRffTWk/edit#heading=h.gh6cf96u8mlr). This
would be easier once we have [apply support in the
@@ -528,7 +528,7 @@ routing APIs and functions include:
(NIY)
* Service DNS. DNS, using the [official Kubernetes
- schema](https://github.com/kubernetes/dns/blob/master/docs/specification.md),
+ schema](https://git.k8s.io/dns/docs/specification.md),
is required.
The application layer may depend on:
@@ -601,7 +601,7 @@ Automation APIs and functions:
* NIY: The vertical pod autoscaling API(s)
* [Cluster autoscaling and/or node
- provisioning](https://github.com/kubernetes/contrib/tree/master/cluster-autoscaler)
+ provisioning](https://git.k8s.io/contrib/cluster-autoscaler)
* The PodDisruptionBudget API
@@ -649,7 +649,7 @@ The management layer may depend on:
* Replacement and/or additional horizontal and vertical pod
autoscalers
-* [Cluster autoscaler and/or node provisioner](https://github.com/kubernetes/contrib/tree/master/cluster-autoscaler)
+* [Cluster autoscaler and/or node provisioner](https://git.k8s.io/contrib/cluster-autoscaler)
* Dynamic volume provisioners
@@ -880,7 +880,7 @@ LoadBalancer API is present.
Extensions and their options should be registered via FooClass
resources, similar to
-[StorageClass](https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/storage/v1beta1/types.go#L31),
+[StorageClass](https://git.k8s.io/kubernetes/pkg/apis/storage/v1beta1/types.go#L31),
but with parameter descriptions, types (e.g., integer vs string),
constraints (e.g., range or regexp) for validation, and default
values, with a reference to fooClassName from the extended API. These
diff --git a/contributors/devel/automation.md b/contributors/devel/automation.md
index 1d019f2b..e46ff9a5 100644
--- a/contributors/devel/automation.md
+++ b/contributors/devel/automation.md
@@ -14,9 +14,9 @@ In an effort to
* maintain end-to-end test stability
* load test github's label feature
-We have added an automated [submit-queue](https://github.com/kubernetes/test-infra/tree/master/mungegithub/submit-queue)
+We have added an automated [submit-queue](https://git.k8s.io/test-infra/mungegithub/submit-queue)
to the
-[github "munger"](https://github.com/kubernetes/test-infra/tree/master/mungegithub)
+[github "munger"](https://git.k8s.io/test-infra/mungegithub)
for kubernetes.
The submit-queue does the following:
@@ -48,7 +48,7 @@ If these tests pass a second time, the PR will be merged when this PR finishes r
## Github Munger
-We run [github "mungers"](https://github.com/kubernetes/test-infra/tree/master/mungegithub).
+We run [github "mungers"](https://git.k8s.io/test-infra/mungegithub).
This runs repeatedly over github pulls and issues and runs modular "mungers".
The mungers include the "submit-queue" referenced above along
diff --git a/contributors/devel/bazel.md b/contributors/devel/bazel.md
index 21b57ff6..de80b4b2 100644
--- a/contributors/devel/bazel.md
+++ b/contributors/devel/bazel.md
@@ -3,7 +3,7 @@
Building and testing Kubernetes with Bazel is supported but not yet default.
Go rules are managed by the [`gazelle`](https://github.com/bazelbuild/rules_go/tree/master/go/tools/gazelle)
-tool, with some additional rules managed by the [`kazel`](https://github.com/kubernetes/repo-infra/tree/master/kazel) tool.
+tool, with some additional rules managed by the [`kazel`](https://git.k8s.io/repo-infra/kazel) tool.
These tools are called via the `hack/update-bazel.sh` script.
Instructions for installing Bazel
@@ -26,7 +26,7 @@ $ bazel test //pkg/kubectl/...
## Planter
If you don't want to install Bazel, you can instead try using the unofficial
-[Planter](https://github.com/kubernetes/test-infra/tree/master/planter) tool,
+[Planter](https://git.k8s.io/test-infra/planter) tool,
which runs Bazel inside a Docker container.
For example, you can run
diff --git a/contributors/devel/cherry-picks.md b/contributors/devel/cherry-picks.md
index 4d9e65f0..232b2a0b 100644
--- a/contributors/devel/cherry-picks.md
+++ b/contributors/devel/cherry-picks.md
@@ -15,7 +15,7 @@ depending on the point in the release cycle.
to set the same label to confirm that no release note is needed.
1. `release-note` labeled PRs generate a release note using the PR title by
default OR the release-note block in the PR template if filled in.
- * See the [PR template](https://github.com/kubernetes/kubernetes/blob/master/.github/PULL_REQUEST_TEMPLATE.md) for more details.
+ * See the [PR template](https://git.k8s.io/kubernetes/.github/PULL_REQUEST_TEMPLATE.md) for more details.
* PR titles and body comments are mutable and can be modified at any time
prior to the release to reflect a release note friendly message.
diff --git a/contributors/devel/container-runtime-interface.md b/contributors/devel/container-runtime-interface.md
index 96f5f10d..29e96ed2 100644
--- a/contributors/devel/container-runtime-interface.md
+++ b/contributors/devel/container-runtime-interface.md
@@ -3,9 +3,9 @@
## What is CRI?
CRI (_Container Runtime Interface_) consists of a
-[protobuf API](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto),
+[protobuf API](https://git.k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto),
specifications/requirements (to-be-added),
-and [libraries](https://github.com/kubernetes/kubernetes/tree/master/pkg/kubelet/server/streaming)
+and [libraries](https://git.k8s.io/kubernetes/pkg/kubelet/server/streaming)
for container runtimes to integrate with kubelet on a node. CRI is currently in Alpha.
In the future, we plan to add more developer tools such as the CRI validation
@@ -59,8 +59,8 @@ Below is a mixed list of CRI specifications/requirements, design docs and
proposals. We are working on adding more documentation for the API.
- [Original proposal](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/proposals/container-runtime-interface-v1.md)
- - [Networking](https://github.com/kubernetes/community/blob/master/contributors/devel/kubelet-cri-networking.md)
- - [Container metrics](https://github.com/kubernetes/community/blob/master/contributors/devel/cri-container-stats.md)
+ - [Networking](/contributors/devel/kubelet-cri-networking.md)
+ - [Container metrics](/contributors/devel/cri-container-stats.md)
- [Exec/attach/port-forward streaming requests](https://docs.google.com/document/d/1OE_QoInPlVCK9rMAx9aybRmgFiVjHpJCHI9LrfdNM_s/edit?usp=sharing)
- [Container stdout/stderr logs](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/proposals/kubelet-cri-logging.md)
diff --git a/contributors/devel/contributor-cheatsheet.md b/contributors/devel/contributor-cheatsheet.md
index f8506d16..5ec5a3f3 100644
--- a/contributors/devel/contributor-cheatsheet.md
+++ b/contributors/devel/contributor-cheatsheet.md
@@ -13,14 +13,14 @@ A list of common resources when contributing to Kubernetes.
- [Gubernator Dashboard - k8s.reviews](https://k8s-gubernator.appspot.com/pr)
- [reviewable.kubernetes.io](https://reviewable.kubernetes.io/reviews#-)
- [Submit Queue](https://submit-queue.k8s.io)
-- [Bot commands](https://github.com/kubernetes/test-infra/blob/master/commands.md)
+- [Bot commands](https://git.k8s.io/test-infra/commands.md)
- [Release Buckets](http://gcsweb.k8s.io/gcs/kubernetes-release/)
- Developer Guide
- - [Cherry Picking Guide](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md) - [Queue](http://cherrypick.k8s.io/#/queue)
+ - [Cherry Picking Guide](/contributors/devel/cherry-picks.md) - [Queue](http://cherrypick.k8s.io/#/queue)
## SIGs and Working Groups
-- [Master SIG list](https://github.com/kubernetes/community/blob/master/sig-list.md#master-sig-list)
+- [Master SIG list](/sig-list.md#master-sig-list)
## Community
diff --git a/contributors/devel/controllers.md b/contributors/devel/controllers.md
index 0764f63a..50dada02 100644
--- a/contributors/devel/controllers.md
+++ b/contributors/devel/controllers.md
@@ -32,7 +32,7 @@ When you're writing controllers, there are few guidelines that will help make su
1. Use `SharedInformers`. `SharedInformers` provide hooks to receive notifications of adds, updates, and deletes for a particular resource. They also provide convenience functions for accessing shared caches and determining when a cache is primed.
- Use the factory methods down in https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/informers/factory.go to ensure that you are sharing the same instance of the cache as everyone else.
+ Use the factory methods down in https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/informers/factory.go to ensure that you are sharing the same instance of the cache as everyone else.
This saves us connections against the API server, duplicate serialization costs server-side, duplicate deserialization costs controller-side, and duplicate caching costs controller-side.
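
As a minimal sketch of this guidance, the snippet below builds one shared factory and registers a handler on the shared Pod informer. The kubeconfig path, resync period, and choice of resource are illustrative, not prescriptive.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (path is illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// One shared factory per client; every controller in the process should use
	// the same factory so the underlying caches and watches are shared.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)

	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { fmt.Println("pod added") },
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced)
}
```
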
@@ -62,7 +62,7 @@ When you're writing controllers, there are few guidelines that will help make su
This lets clients know that the controller has processed a resource. Make sure that your controller is the main controller that is responsible for that resource, otherwise if you need to communicate observation via your own controller, you will need to create a different kind of ObservedGeneration in the Status of the resource.
-1. Consider using owner references for resources that result in the creation of other resources (eg. a ReplicaSet results in creating Pods). Thus you ensure that children resources are going to be garbage-collected once a resource managed by your controller is deleted. For more information on owner references, read more [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/controller-ref.md).
+1. Consider using owner references for resources that result in the creation of other resources (e.g. a ReplicaSet results in creating Pods). This ensures that child resources are garbage-collected once a resource managed by your controller is deleted (see the sketch below). For more information on owner references, read [here](/contributors/design-proposals/api-machinery/controller-ref.md).
Pay special attention to the way you are doing adoption. You shouldn't adopt children for a resource when either the parent or the children are marked for deletion. If you are using a cache for your resources, you will likely need to bypass it with a direct API read in case you observe that an owner reference has been updated for one of the children. This ensures your controller is not racing with the garbage collector.
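
A short sketch of the owner-reference pattern described above, using `metav1.NewControllerRef` to mark an illustrative ReplicaSet as the controller of a Pod it created; the names and UID are made up.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// rs is the parent resource our hypothetical controller manages.
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "default", UID: "1234"},
	}

	// Child pod created by the controller. NewControllerRef marks the parent as
	// the managing controller, so the garbage collector cleans the pod up when
	// the ReplicaSet is deleted.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "web-abcde",
			Namespace: "default",
			OwnerReferences: []metav1.OwnerReference{
				*metav1.NewControllerRef(rs, appsv1.SchemeGroupVersion.WithKind("ReplicaSet")),
			},
		},
	}
	fmt.Printf("pod %s owned by %s\n", pod.Name, pod.OwnerReferences[0].Name)
}
```
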
diff --git a/contributors/devel/cri-container-stats.md b/contributors/devel/cri-container-stats.md
index a2352fb5..c1176f05 100644
--- a/contributors/devel/cri-container-stats.md
+++ b/contributors/devel/cri-container-stats.md
@@ -1,7 +1,7 @@
# Container Runtime Interface: Container Metrics
[Container runtime interface
-(CRI)](https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md)
+(CRI)](/contributors/devel/container-runtime-interface.md)
provides an abstraction for container runtimes to integrate with Kubernetes.
CRI expects the runtime to provide resource usage statistics for the
containers.
@@ -12,7 +12,7 @@ Historically Kubelet relied on the [cAdvisor](https://github.com/google/cadvisor
library, an open-source project hosted in a separate repository, to retrieve
container metrics such as CPU and memory usage. These metrics are then aggregated
and exposed through Kubelet's [Summary
-API](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go)
+API](https://git.k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1/types.go)
for the monitoring pipeline (and other components) to consume. Any container
runtime (e.g., Docker and Rkt) integrated with Kubernetes needed to add a
corresponding package in cAdvisor to support tracking container and image file
@@ -23,9 +23,9 @@ progression to augment CRI to serve container metrics to eliminate a separate
integration point.
*See the [core metrics design
-proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/core-metrics-pipeline.md)
+proposal](/contributors/design-proposals/instrumentation/core-metrics-pipeline.md)
for more information on metrics exposed by Kubelet, and [monitoring
-architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md)
+architecture](/contributors/design-proposals/instrumentation/monitoring_architecture.md)
for the evolving monitoring pipeline in Kubernetes.*
# Container Metrics
diff --git a/contributors/devel/development.md b/contributors/devel/development.md
index a3264c05..60ff877c 100644
--- a/contributors/devel/development.md
+++ b/contributors/devel/development.md
@@ -293,13 +293,13 @@ make test
make test WHAT=./pkg/api/helper GOFLAGS=-v
# Run integration tests, requires etcd
-# For more info, visit https://github.com/kubernetes/community/blob/master/contributors/devel/testing.md#integration-tests
+# For more info, visit https://git.k8s.io/community/contributors/devel/testing.md#integration-tests
make test-integration
# Run e2e tests by building test binaries, turn up a test cluster, run all tests, and tear the cluster down
# Equivalent to: go run hack/e2e.go -- -v --build --up --test --down
# Note: running all e2e tests takes a LONG time! To run specific e2e tests, visit:
-# https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-tests.md#building-kubernetes-and-running-the-tests
+# https://git.k8s.io/community/contributors/devel/e2e-tests.md#building-kubernetes-and-running-the-tests
make test-e2e
```
@@ -398,9 +398,9 @@ masse. This makes reviews easier.
[OS X GNU tools]: https://www.topbug.net/blog/2013/04/14/install-and-use-gnu-command-line-tools-in-mac-os-x
-[build/build-image/cross]: https://github.com/kubernetes/kubernetes/blob/master/build/build-image/cross
-[build/common.sh]: https://github.com/kubernetes/kubernetes/blob/master/build/common.sh
-[e2e-image]: https://github.com/kubernetes/test-infra/tree/master/jenkins/e2e-image
+[build/build-image/cross]: https://git.k8s.io/kubernetes/build/build-image/cross
+[build/common.sh]: https://git.k8s.io/kubernetes/build/common.sh
+[e2e-image]: https://git.k8s.io/test-infra/jenkins/e2e-image
[etcd-latest]: https://coreos.com/etcd/docs/latest
[etcd-install]: testing.md#install-etcd-dependency
<!-- https://github.com/coreos/etcd/releases -->
@@ -409,5 +409,5 @@ masse. This makes reviews easier.
[kubectl user guide]: https://kubernetes.io/docs/user-guide/kubectl
[kubernetes.io]: https://kubernetes.io
[mercurial]: http://mercurial.selenic.com/wiki/Download
-[test-image]: https://github.com/kubernetes/test-infra/tree/master/jenkins/test-image
+[test-image]: https://git.k8s.io/test-infra/jenkins/test-image
[Build with Bazel]: bazel.md
diff --git a/contributors/devel/e2e-node-tests.md b/contributors/devel/e2e-node-tests.md
index 0dda84a3..4f3327cb 100644
--- a/contributors/devel/e2e-node-tests.md
+++ b/contributors/devel/e2e-node-tests.md
@@ -137,7 +137,7 @@ make test-e2e-node REMOTE=true IMAGE_PROJECT="<name-of-project-with-images>" IMA
```
Setting up your own host image may require additional steps such as installing etcd or docker. See
-[setup_host.sh](https://github.com/kubernetes/kubernetes/tree/master/test/e2e_node/environment/setup_host.sh) for common steps to setup hosts to run node tests.
+[setup_host.sh](https://git.k8s.io/kubernetes/test/e2e_node/environment/setup_host.sh) for common steps to set up hosts to run node tests.
## Create instances using a different instance name prefix
@@ -223,7 +223,7 @@ the bottom of the comments section. To re-run just the node e2e tests from the
`@k8s-bot node e2e test this issue: #<Flake-Issue-Number or IGNORE>` and **include a link to the test
failure logs if caused by a flake.**
-The PR builder runs tests against the images listed in [jenkins-pull.properties](https://github.com/kubernetes/kubernetes/tree/master/test/e2e_node/jenkins/jenkins-pull.properties)
+The PR builder runs tests against the images listed in [jenkins-pull.properties](https://git.k8s.io/kubernetes/test/e2e_node/jenkins/jenkins-pull.properties)
-The post submit tests run against the images listed in [jenkins-ci.properties](https://github.com/kubernetes/kubernetes/tree/master/test/e2e_node/jenkins/jenkins-ci.properties)
+The post submit tests run against the images listed in [jenkins-ci.properties](https://git.k8s.io/kubernetes/test/e2e_node/jenkins/jenkins-ci.properties)
diff --git a/contributors/devel/e2e-tests.md b/contributors/devel/e2e-tests.md
index 1ee22022..cf4127d0 100644
--- a/contributors/devel/e2e-tests.md
+++ b/contributors/devel/e2e-tests.md
@@ -146,7 +146,7 @@ go run hack/e2e.go -- -v --down
The logic in `e2e.go` moved out of the main kubernetes repo to test-infra.
The remaining code in `hack/e2e.go` installs `kubetest` and sends it flags.
-It now lives in [kubernetes/test-infra/kubetest](https://github.com/kubernetes/test-infra/tree/master/kubetest).
+It now lives in [kubernetes/test-infra/kubetest](https://git.k8s.io/test-infra/kubetest).
By default `hack/e2e.go` updates and installs `kubetest` once per day.
Control the updater behavior with the `--get` and `--old` flags:
The `--` flag separates updater and kubetest flags (kubetest flags on the right).
@@ -446,7 +446,7 @@ similarly enough to older versions. The general strategy is to cover the follow
same version (e.g. a cluster upgraded to v1.3 passes the same v1.3 tests as
a newly-created v1.3 cluster).
-[hack/e2e-runner.sh](https://github.com/kubernetes/test-infra/blob/master/jenkins/e2e-image/e2e-runner.sh) is
+[hack/e2e-runner.sh](https://git.k8s.io/test-infra/jenkins/e2e-image/e2e-runner.sh) is
the authoritative source on how to run version-skewed tests, but below is a
quick-and-dirty tutorial.
@@ -569,7 +569,7 @@ breaking changes, it does *not* block the merge-queue, and thus should run in
some separate test suites owned by the feature owner(s)
(see [Continuous Integration](#continuous-integration) below).
-Every test should be owned by a [SIG](https://github.com/kubernetes/community/blob/master/sig-list.md),
+Every test should be owned by a [SIG](/sig-list.md),
and have a corresponding `[sig-<name>]` label.
### Viper configuration and hierarchical test parameters.
@@ -582,7 +582,7 @@ To use viper, rather than flags, to configure your tests:
- Just add "e2e.json" to the current directory you are in, and define parameters in it... i.e. `"kubeconfig":"/tmp/x"`.
-Note that advanced testing parameters, and hierarchichally defined parameters, are only defined in viper, to see what they are, you can dive into [TestContextType](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/test_context.go).
+Note that advanced testing parameters, and hierarchically defined parameters, are only defined in viper; to see what they are, you can dive into [TestContextType](https://git.k8s.io/kubernetes/test/e2e/framework/test_context.go).
In time, it is our intent to add or autogenerate a sample viper configuration that includes all e2e parameters, to ship with kubernetes.
@@ -656,7 +656,7 @@ A quick overview of how we run e2e CI on Kubernetes.
We run a battery of `e2e` tests against `HEAD` of the master branch on a
continuous basis, and block merges via the [submit
queue](http://submit-queue.k8s.io/) on a subset of those tests if they fail (the
-subset is defined in the [munger config](https://github.com/kubernetes/test-infra/tree/master/mungegithub/mungers/submit-queue.go)
+subset is defined in the [munger config](https://git.k8s.io/test-infra/mungegithub/mungers/submit-queue.go)
via the `jenkins-jobs` flag; note we also block on `kubernetes-build` and
`kubernetes-test-go` jobs for build and unit and integration tests).
@@ -732,7 +732,7 @@ label, and will be incorporated into our core suites. If tests are not expected
to pass by default, (e.g. they require a special environment such as added
quota,) they should remain with the `[Feature:.+]` label, and the suites that
run them should be incorporated into the
-[munger config](https://github.com/kubernetes/test-infra/tree/master/mungegithub/mungers/submit-queue.go)
+[munger config](https://git.k8s.io/test-infra/mungegithub/mungers/submit-queue.go)
via the `jenkins-jobs` flag.
Occasionally, we'll want to add tests to better exercise features that are
diff --git a/contributors/devel/flexvolume.md b/contributors/devel/flexvolume.md
index 5fc518a4..52d42ccf 100644
--- a/contributors/devel/flexvolume.md
+++ b/contributors/devel/flexvolume.md
@@ -14,10 +14,10 @@ The vendor and driver names must match flexVolume.driver in the volume spec, wit
## Dynamic Plugin Discovery
Beginning in v1.8, Flexvolume supports the ability to detect drivers on the fly. Instead of requiring drivers to exist at system initialization time or having to restart kubelet or controller manager, drivers can be installed, upgraded/downgraded, and uninstalled while the system is running.
-For more information, please refer to the [design document](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md).
+For more information, please refer to the [design document](/contributors/design-proposals/storage/flexvolume-deployment.md).
## Automated Plugin Installation/Upgrade
-One possible way to install and upgrade your Flexvolume drivers is by using a DaemonSet. See [Recommended Driver Deployment Method](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md#recommended-driver-deployment-method) for details.
+One possible way to install and upgrade your Flexvolume drivers is by using a DaemonSet. See [Recommended Driver Deployment Method](/contributors/design-proposals/storage/flexvolume-deployment.md#recommended-driver-deployment-method) for details.
## Plugin details
The plugin expects the following call-outs are implemented for the backend drivers. Some call-outs are optional. Call-outs are invoked from the Kubelet & the Controller manager nodes.
@@ -50,7 +50,7 @@ Detach the volume from the Kubelet node. Nodename param is only valid/relevant i
```
#### Wait for attach:
-Wait for the volume to be attached on the remote node. On success, the path to the device is returned. Called from both Kubelet & Controller manager. The timeout should be 10m (based on https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/volumemanager/volume_manager.go#L88 )
+Wait for the volume to be attached on the remote node. On success, the path to the device is returned. Called from both Kubelet & Controller manager. The timeout should be 10m (based on https://git.k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager.go#L88 )
```
<driver executable> waitforattach <mount device> <json options>
@@ -132,7 +132,7 @@ Note: Secrets are passed only to "mount/unmount" call-outs.
See [nginx.yaml] & [nginx-nfs.yaml] for a quick example on how to use Flexvolume in a pod.
-[lvm]: https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/flexvolume/lvm
-[nfs]: https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/flexvolume/nfs
-[nginx.yaml]: https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/flexvolume/nginx.yaml
-[nginx-nfs.yaml]: https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/flexvolume/nginx-nfs.yaml
+[lvm]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/lvm
+[nfs]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/nfs
+[nginx.yaml]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/nginx.yaml
+[nginx-nfs.yaml]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/nginx-nfs.yaml
diff --git a/contributors/devel/generating-clientset.md b/contributors/devel/generating-clientset.md
index 2ef0ddd5..7a47aeb8 100644
--- a/contributors/devel/generating-clientset.md
+++ b/contributors/devel/generating-clientset.md
@@ -33,7 +33,7 @@ In addition, the following optional tags influence the client generation:
$ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release"
```
-**3.** ***Adding expansion methods***: client-gen only generates the common methods, such as CRUD. You can manually add additional methods through the expansion interface. For example, this [file](https://github.com/kubernetes/kubernetes/blob/master/pkg/client/clientset_generated/internalclientset/typed/core/internalversion/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. In most cases, you don't want to remove existing expansion files. So to make life easier, instead of creating a new clientset from scratch, ***you can copy and rename an existing clientset (so that all the expansion files are copied)***, and then run client-gen.
+**3.** ***Adding expansion methods***: client-gen only generates the common methods, such as CRUD. You can manually add additional methods through the expansion interface. For example, this [file](https://git.k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset/typed/core/internalversion/pod_expansion.go) adds additional methods to Pod's client. As a convention, we put the expansion interface and its methods in file ${TYPE}_expansion.go. In most cases, you don't want to remove existing expansion files. So to make life easier, instead of creating a new clientset from scratch, ***you can copy and rename an existing clientset (so that all the expansion files are copied)***, and then run client-gen.
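As an illustrative sketch only (the type, method, and subresource below are made up; the linked pod_expansion.go is the real reference), a ${TYPE}_expansion.go file has roughly this shape:
```go
// widget_expansion.go -- hand-written additions for a hypothetical "Widget" type.
package internalversion

// WidgetExpansion lists methods that client-gen does not generate;
// the generated WidgetInterface embeds this interface.
type WidgetExpansion interface {
	// Activate calls a hypothetical "activate" subresource.
	Activate(name string) error
}

// Activate is implemented on the generated widgets struct, reusing its REST client.
func (c *widgets) Activate(name string) error {
	return c.client.Post().
		Namespace(c.ns).
		Resource("widgets").
		Name(name).
		SubResource("activate").
		Do().
		Error()
}
```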
## Output of client-gen
@@ -43,7 +43,7 @@ $ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release"
## Released clientsets
-If you are contributing code to k8s.io/kubernetes, try to use the generated clientset [here](https://github.com/kubernetes/kubernetes/tree/master/pkg/client/clientset_generated/internalclientset).
+If you are contributing code to k8s.io/kubernetes, try to use the generated clientset [here](https://git.k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset).
If you need a stable Go client to build your own project, please refer to the [client-go repository](https://github.com/kubernetes/client-go).
diff --git a/contributors/devel/gubernator.md b/contributors/devel/gubernator.md
index 9e3855ff..2a25ddd7 100644
--- a/contributors/devel/gubernator.md
+++ b/contributors/devel/gubernator.md
@@ -113,7 +113,7 @@ k8s-gubernator.appspot.com/build/yourusername-g8r-logs/logs/e2e-node/timestamp
Gubernator provides a framework for debugging failures and introduces useful features.
There is still a lot of room for more features and growth to make the debugging process more efficient.
-How to contribute (see https://github.com/kubernetes/test-infra/blob/master/gubernator/README.md)
+How to contribute (see https://git.k8s.io/test-infra/gubernator/README.md)
* Extend GUBERNATOR flag to all local tests
diff --git a/contributors/devel/issues.md b/contributors/devel/issues.md
index 387bd987..575eddff 100644
--- a/contributors/devel/issues.md
+++ b/contributors/devel/issues.md
@@ -33,7 +33,7 @@ for other github repositories related to Kubernetes is TBD.
Most people can leave comments and open issues. They don't have the ability to
set labels, change milestones and close other people's issues. For that we use
a bot to manage labelling and triaging. The bot has a set of
-[commands and permissions](https://github.com/kubernetes/test-infra/blob/master/commands.md)
+[commands and permissions](https://git.k8s.io/test-infra/commands.md)
and this document will cover the basic ones.
## Determine if it’s a support request
@@ -93,7 +93,7 @@ The Kubernetes Team
```
## Find the right SIG(s)
-Components are divided among [Special Interest Groups (SIGs)](https://github.com/kubernetes/community/blob/master/sig-list.md). Find a proper SIG for the ownership of the issue using the bot:
+Components are divided among [Special Interest Groups (SIGs)](/sig-list.md). Find a proper SIG for the ownership of the issue using the bot:
* Typing `/sig network` in a comment should add the sig/network label, for
example.
diff --git a/contributors/devel/kubectl-conventions.md b/contributors/devel/kubectl-conventions.md
index 127546c9..5b009657 100644
--- a/contributors/devel/kubectl-conventions.md
+++ b/contributors/devel/kubectl-conventions.md
@@ -372,7 +372,7 @@ and as noted in [command conventions](#command-conventions), ideally that logic
should exist server-side so any client could take advantage of it. Notice that
this is not a mandatory structure and not every command is implemented this way,
but this is a nice convention so try to be compliant with it. As an example,
-have a look at how [kubectl logs](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/logs.go) is implemented.
+have a look at how [kubectl logs](https://git.k8s.io/kubernetes/pkg/kubectl/cmd/logs.go) is implemented.
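As a hedged sketch of that convention (the command name, options struct, and fields below are made up, not kubectl's actual code), the overall shape is an options struct with Complete, Validate, and Run methods wired into a cobra command:
```go
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/spf13/cobra"
)

// FrobOptions is a hypothetical options struct following the kubectl convention.
type FrobOptions struct {
	Name string
	Out  io.Writer
}

func NewCmdFrob(out io.Writer) *cobra.Command {
	o := &FrobOptions{Out: out}
	return &cobra.Command{
		Use:   "frob NAME",
		Short: "Illustrative command only",
		RunE: func(cmd *cobra.Command, args []string) error {
			if err := o.Complete(cmd, args); err != nil { // gather flags/args into options
				return err
			}
			if err := o.Validate(); err != nil { // fail fast on bad input
				return err
			}
			return o.Run() // keep client-side logic thin
		},
	}
}

func (o *FrobOptions) Complete(cmd *cobra.Command, args []string) error {
	if len(args) > 0 {
		o.Name = args[0]
	}
	return nil
}

func (o *FrobOptions) Validate() error {
	if o.Name == "" {
		return fmt.Errorf("NAME is required")
	}
	return nil
}

func (o *FrobOptions) Run() error {
	fmt.Fprintf(o.Out, "frobbed %s\n", o.Name)
	return nil
}

func main() {
	if err := NewCmdFrob(os.Stdout).Execute(); err != nil {
		os.Exit(1)
	}
}
```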
## Exit code conventions
diff --git a/contributors/devel/node-performance-testing.md b/contributors/devel/node-performance-testing.md
index 4afa8d25..d43737a8 100644
--- a/contributors/devel/node-performance-testing.md
+++ b/contributors/devel/node-performance-testing.md
@@ -26,7 +26,7 @@ Heapster will hide the performance cost of serving those stats in the Kubelet.
Disabling addons is simple. Just ssh into the Kubernetes master and move the
addon from `/etc/kubernetes/addons/` to a backup location. More details
-[here](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/).
+[here](https://git.k8s.io/kubernetes/cluster/addons/).
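For example (the node name and addon directory below are placeholders; the exact ssh command depends on your provider):
```sh
# On the master node: move an addon manifest out of the addons directory so the
# addon manager stops reconciling it, keeping a backup to restore later.
gcloud compute ssh <master-node-name>          # or plain ssh, depending on provider
sudo mkdir -p /etc/kubernetes/addons-disabled
sudo mv /etc/kubernetes/addons/<addon-dir> /etc/kubernetes/addons-disabled/
```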
### Which / how many pods?
@@ -57,7 +57,7 @@ sampling.
## E2E Performance Test
There is an end-to-end test for collecting overall resource usage of node
-components: [kubelet_perf.go](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/node/kubelet_perf.go). To
+components: [kubelet_perf.go](https://git.k8s.io/kubernetes/test/e2e/node/kubelet_perf.go). To
run the test, simply make sure you have an e2e cluster running (`go run
hack/e2e.go -- -up`) and [set up](#cluster-set-up) correctly.
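To run only this test rather than the whole suite, something like the following usually works; the `--ginkgo.focus` regex below is an assumption, so match it to the test's actual description string:
```sh
# Bring up an e2e cluster, then run only the kubelet resource-usage tests.
go run hack/e2e.go -- -up
go run hack/e2e.go -- -test --test_args="--ginkgo.focus=resource\susage\stracking"
```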
diff --git a/contributors/devel/on-call-federation-build-cop.md b/contributors/devel/on-call-federation-build-cop.md
index 69c2d973..708c854a 100644
--- a/contributors/devel/on-call-federation-build-cop.md
+++ b/contributors/devel/on-call-federation-build-cop.md
@@ -24,10 +24,10 @@ Federation CI e2e job names are as below:
Search for the above job names in various configuration files as below:
-* Prow config: https://github.com/kubernetes/test-infra/blob/master/prow/config.yaml
-* Test job/bootstrap config: https://github.com/kubernetes/test-infra/blob/master/jobs/config.json
-* Test grid config: https://github.com/kubernetes/test-infra/blob/master/testgrid/config/config.yaml
-* Job specific config: https://github.com/kubernetes/test-infra/tree/master/jobs/env
+* Prow config: https://git.k8s.io/test-infra/prow/config.yaml
+* Test job/bootstrap config: https://git.k8s.io/test-infra/jobs/config.json
+* Test grid config: https://git.k8s.io/test-infra/testgrid/config/config.yaml
+* Job specific config: https://git.k8s.io/test-infra/jobs/env
### Results
@@ -73,10 +73,10 @@ Federation pre-submit jobs have following names.
Search for the above job names in various configuration files as below:
-* Prow config: https://github.com/kubernetes/test-infra/blob/master/prow/config.yaml
-* Test job/bootstrap config: https://github.com/kubernetes/test-infra/blob/master/jobs/config.json
-* Test grid config: https://github.com/kubernetes/test-infra/blob/master/testgrid/config/config.yaml
-* Job specific config: https://github.com/kubernetes/test-infra/tree/master/jobs/env
+* Prow config: https://git.k8s.io/test-infra/prow/config.yaml
+* Test job/bootstrap config: https://git.k8s.io/test-infra/jobs/config.json
+* Test grid config: https://git.k8s.io/test-infra/testgrid/config/config.yaml
+* Job specific config: https://git.k8s.io/test-infra/jobs/env
### Results
@@ -91,7 +91,7 @@ We track the flakiness metrics of all the pre-submit jobs and
individual tests that run against PRs in
[kubernetes/federation](https://github.com/kubernetes/federation).
-* The metrics that we track are documented in https://github.com/kubernetes/test-infra/blob/master/metrics/README.md#metrics.
+* The metrics that we track are documented in https://git.k8s.io/test-infra/metrics/README.md#metrics.
* Job-level metrics are available in http://storage.googleapis.com/k8s-metrics/job-flakes-latest.json.
### Playbook
diff --git a/contributors/devel/owners.md b/contributors/devel/owners.md
index aea8579e..489cf309 100644
--- a/contributors/devel/owners.md
+++ b/contributors/devel/owners.md
@@ -16,7 +16,7 @@ of OWNERS files
## OWNERS spec
The [mungegithub gitrepos
-feature](https://github.com/kubernetes/test-infra/blob/master/mungegithub/features/repo-updates.go)
+feature](https://git.k8s.io/test-infra/mungegithub/features/repo-updates.go)
is the main consumer of OWNERS files. If this page is out of date, look there.
Each directory that contains a unit of independent code or content may also contain an OWNERS file.
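For orientation, a minimal OWNERS file using the reviewer/approver roles described in this document might look like the sketch below (the usernames are placeholders):
```yaml
# OWNERS in some directory; entries are GitHub usernames or aliases.
reviewers:
  - alice
  - bob
approvers:
  - alice
```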
@@ -72,7 +72,7 @@ GitHub usernames and aliases listed in OWNERS files are case-insensitive.
## Code Review Process
This is a simplified description of our [full PR testing and merge
-workflow](https://github.com/kubernetes/community/blob/master/contributors/devel/pull-requests.md#the-testing-and-merge-workflow)
+workflow](/contributors/devel/pull-requests.md#the-testing-and-merge-workflow)
that conveniently forgets about the existence of tests, to focus solely on the roles driven by
OWNERS files.
@@ -158,13 +158,13 @@ is the state of today.
## Implementation
-### [`mungegithub`](https://github.com/kubernetes/test-infra/tree/master/mungegithub)
+### [`mungegithub`](https://git.k8s.io/test-infra/mungegithub)
Mungegithub polls GitHub, and "munges" things it finds, including issues and pull requests. It is
stateful, in that restarting it means it loses track of which things it has munged at what time.
- [feature:
- gitrepos](https://github.com/kubernetes/test-infra/blob/master/mungegithub/features/repo-updates.go)
+ gitrepos](https://git.k8s.io/test-infra/mungegithub/features/repo-updates.go)
- responsible for parsing OWNERS and OWNERS_ALIAS files
- if its `use-reviewers` flag is set to false, **approvers** will also be **reviewers**
- if its `enable-md-yaml` flag is set, `.md` files will also be parsed to see if they have
@@ -172,14 +172,14 @@ stateful, in that restarting it means it loses track of which things it has mung
[kubernetes.github.io](https://github.com/kubernetes/kubernetes.github.io/))
- used by other mungers to get the set of **reviewers** or **approvers** for a given path
- [munger:
- blunderbuss](https://github.com/kubernetes/test-infra/blob/master/mungegithub/mungers/blunderbuss.go)
+ blunderbuss](https://git.k8s.io/test-infra/mungegithub/mungers/blunderbuss.go)
- responsible for determining **reviewers** and assigning to them
- chooses from people in the deepest/closest OWNERS files to the code being changed
- weights its choice based on the magnitude of lines changed for each file
- randomly chooses to ensure the same people aren't chosen every time
- if its `blunderbuss-number-assignees` flag is unset, it will default to 2 assignees
- [munger:
- approval-handler](https://github.com/kubernetes/test-infra/blob/master/mungegithub/mungers/approval-handler.go)
+ approval-handler](https://git.k8s.io/test-infra/mungegithub/mungers/approval-handler.go)
- responsible for adding the `approved` label once an **approver** for each of the required
OWNERS files has `/approve`'d
- responsible for commenting as required OWNERS files are satisfied
@@ -187,19 +187,19 @@ stateful, in that restarting it means it loses track of which things it has mung
- [full description of the
algorithm](https://github.com/kubernetes/test-infra/blob/6f5df70c29528db89d07106a8156411068518cbc/mungegithub/mungers/approval-handler.go#L99-L111)
- [munger:
- submit-queue](https://github.com/kubernetes/test-infra/blob/master/mungegithub/mungers/submit-queue.go)
+ submit-queue](https://git.k8s.io/test-infra/mungegithub/mungers/submit-queue.go)
- responsible for merging PRs
- responsible for updating a GitHub status check explaining why a PR can't be merged (eg: a
missing `lgtm` or `approved` label)
-### [`prow`](https://github.com/kubernetes/test-infra/tree/master/prow)
+### [`prow`](https://git.k8s.io/test-infra/prow)
Prow receives events from GitHub, and reacts to them. It is effectively stateless.
-- [plugin: lgtm](https://github.com/kubernetes/test-infra/tree/master/prow/plugins/lgtm)
+- [plugin: lgtm](https://git.k8s.io/test-infra/prow/plugins/lgtm)
- responsible for adding the `lgtm` label when a **reviewer** comments `/lgtm` on a PR
- the **PR author** may not `/lgtm` their own PR
-- [plugin: assign](https://github.com/kubernetes/test-infra/tree/master/prow/plugins/assign)
+- [plugin: assign](https://git.k8s.io/test-infra/prow/plugins/assign)
- responsible for assigning GitHub users in response to `/assign` comments on a PR
- responsible for unassigning GitHub users in response to `/unassign` comments on a PR
diff --git a/contributors/devel/pull-requests.md b/contributors/devel/pull-requests.md
index 40ec31c0..50e457a2 100644
--- a/contributors/devel/pull-requests.md
+++ b/contributors/devel/pull-requests.md
@@ -44,7 +44,7 @@ pass or fail of continuous integration.
## Sign the CLA
-You must sign the CLA before your first contribution. [Read more about the CLA.](https://github.com/kubernetes/community/blob/master/CLA.md)
+You must sign the CLA before your first contribution. [Read more about the CLA.](/CLA.md)
If you haven't signed the Contributor License Agreement (CLA) before making a PR,
the `@k8s-ci-robot` will leave a comment with instructions on how to sign the CLA.
@@ -92,7 +92,7 @@ For PRs that don't need to be mentioned at release time, just write "NONE" (case
The `/release-note-none` comment command can still be used as an alternative to writing "NONE" in the release-note block if it is left empty.
-To see how to format your release notes, view the [PR template](https://github.com/kubernetes/kubernetes/blob/master/.github/PULL_REQUEST_TEMPLATE.md) for a brief example. PR titles and body comments can be modified at any time prior to the release to make them friendly for release notes.
+To see how to format your release notes, view the [PR template](https://git.k8s.io/kubernetes/.github/PULL_REQUEST_TEMPLATE.md) for a brief example. PR titles and body comments can be modified at any time prior to the release to make them friendly for release notes.
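For instance, the release-note block in a PR description looks like this (the note text here is only an example):
```release-note
Fixed a bug where widgets were reconciled twice per sync period.
```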
Release notes apply to PRs on the master branch. For cherry-pick PRs, see the [cherry-pick instructions](cherry-picks.md). The only exception to these rules is when a PR is not a cherry-pick and is targeted directly to the non-master branch. In this case, a `release-note-*` label is required for that non-master PR.
@@ -127,7 +127,7 @@ If you are a member, or a member comments `/ok-to-test`, the PR will be consider
Once the tests pass (or all failures are commented as flakes) and the reviewer adds the `lgtm` and `approved` labels, the PR enters the final merge queue. The merge queue is needed to make sure no incompatible changes have been introduced by other PRs since the tests were last run on your PR.
-Either the [on call contributor](on-call-rotations.md) will manage the merge queue manually, or the [GitHub "munger"](https://github.com/kubernetes/test-infra/tree/master/mungegithub) submit-queue plugin will manage the merge queue automatically.
+Either the [on call contributor](on-call-rotations.md) will manage the merge queue manually, or the [GitHub "munger"](https://git.k8s.io/test-infra/mungegithub) submit-queue plugin will manage the merge queue automatically.
1. The PR enters the merge queue ([http://submit-queue.k8s.io](http://submit-queue.k8s.io))
1. The merge queue triggers a test re-run with the comment `/test all [submit-queue is verifying that this PR is safe to merge]`
@@ -151,7 +151,7 @@ The GitHub robots will add and remove the `do-not-merge/hold` label as you use t
## Comment Commands Reference
-[The commands doc](https://github.com/kubernetes/test-infra/blob/master/commands.md) contains a reference for all comment commands.
+[The commands doc](https://git.k8s.io/test-infra/commands.md) contains a reference for all comment commands.
## Automation
@@ -220,8 +220,8 @@ Are you sure Feature-X is something the Kubernetes team wants or will accept? Is
It's better to get confirmation beforehand. There are two ways to do this:
-- Make a proposal doc (in docs/proposals; for example [the QoS proposal](http://prs.k8s.io/11713)), or reach out to the affected special interest group (SIG). Here's a [list of SIGs](https://github.com/kubernetes/community/blob/master/sig-list.md)
-- Coordinate your effort with [SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) ahead of time
+- Make a proposal doc (in docs/proposals; for example [the QoS proposal](http://prs.k8s.io/11713)), or reach out to the affected special interest group (SIG). Here's a [list of SIGs](/sig-list.md)
+- Coordinate your effort with [SIG Docs](/sig-docs) ahead of time
- Make a sketch PR (e.g., just the API or Go interface). Write or code up just enough to express the idea and the design and why you made those choices
Or, do all of the above.
diff --git a/contributors/devel/scalability-good-practices.md b/contributors/devel/scalability-good-practices.md
index 5769d248..2b941a75 100644
--- a/contributors/devel/scalability-good-practices.md
+++ b/contributors/devel/scalability-good-practices.md
@@ -108,7 +108,7 @@ This looks fine-ish if you don't know that LIST are very expensive calls. Object
`Informer` is our library that provides a read interface to the store - it's a read-only cache that gives you a local copy of the store containing only the objects you're interested in (matching a given selector). From it you can GET, LIST, or do whatever read operations you want. `Informer` also allows you to register functions that will be called when an object is created, modified or deleted, which is what most people want.
-The magic behind `Informers` is that they are populated by the WATCH, so they don't stress API server too much. Code for Informer is [here](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/cache/shared_informer.go).
+The magic behind `Informers` is that they are populated by the WATCH, so they don't stress API server too much. Code for Informer is [here](https://git.k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/shared_informer.go).
In general: use `Informers` - if we were able to rewrite most vanilla controllers to use them, you'll be able to do it as well. If you don't, you may dramatically increase the CPU requirements of the API server, which will starve it and make it too slow to meet our SLOs.
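A minimal sketch of the pattern with client-go (the import paths and kubeconfig location are assumptions; older releases vendored these packages under different paths):
```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a local kubeconfig (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// The factory's informers are fed by a single WATCH per resource,
	// so reads hit the local cache instead of the API server.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()

	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { fmt.Println("added:", obj.(*v1.Pod).Name) },
		DeleteFunc: func(obj interface{}) { fmt.Println("deleted a pod") },
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	// Wait for the initial LIST to populate the cache before relying on it.
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced)
	<-stopCh
}
```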
diff --git a/contributors/devel/strategic-merge-patch.md b/contributors/devel/strategic-merge-patch.md
index 82a2fd48..c1d69c1a 100644
--- a/contributors/devel/strategic-merge-patch.md
+++ b/contributors/devel/strategic-merge-patch.md
@@ -216,7 +216,7 @@ item that has duplicates will delete all matching items.
`setElementOrder` directive provides a way to specify the order of a list.
The relative order specified in this directive will be retained.
-Please refer to [proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cli/preserve-order-in-strategic-merge-patch.md) for more information.
+Please refer to [proposal](/contributors/design-proposals/cli/preserve-order-in-strategic-merge-patch.md) for more information.
### Syntax
@@ -295,7 +295,7 @@ containers:
`retainKeys` directive provides a mechanism for union types to clear mutual exclusive fields.
When this directive is present in the patch, all the fields not in this directive will be cleared.
-Please refer to [proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md) for more information.
+Please refer to [proposal](/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md) for more information.
### Syntax
diff --git a/contributors/devel/testing.md b/contributors/devel/testing.md
index d3adf0ed..6a7c7be6 100644
--- a/contributors/devel/testing.md
+++ b/contributors/devel/testing.md
@@ -159,7 +159,7 @@ See `go help test` and `go help testflag` for additional info.
is [table driven testing](https://github.com/golang/go/wiki/TableDrivenTests)
- Example: [TestNamespaceAuthorization](https://git.k8s.io/kubernetes/test/integration/auth/auth_test.go)
* Each test should create its own master, httpserver and config.
- - Example: [TestPodUpdateActiveDeadlineSeconds](https://github.com/kubernetes/kubernetes/blob/master/test/integration/pods/pods_test.go)
+ - Example: [TestPodUpdateActiveDeadlineSeconds](https://git.k8s.io/kubernetes/test/integration/pods/pods_test.go)
* See [coding conventions](coding-conventions.md).
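Boiling the table-driven style mentioned in the list above down to a skeleton (the `canonicalizeName` function is hypothetical, not from the Kubernetes tree):
```go
package example

import (
	"strings"
	"testing"
)

// canonicalizeName is a stand-in function under test.
func canonicalizeName(s string) string {
	return strings.TrimSuffix(strings.ToLower(s), "-")
}

func TestCanonicalizeName(t *testing.T) {
	// Each case is one row of the table; adding coverage means adding a row.
	cases := []struct {
		name  string
		input string
		want  string
	}{
		{name: "already canonical", input: "foo", want: "foo"},
		{name: "uppercase folded", input: "Foo", want: "foo"},
		{name: "trailing dash trimmed", input: "foo-", want: "foo"},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := canonicalizeName(tc.input); got != tc.want {
				t.Errorf("canonicalizeName(%q) = %q, want %q", tc.input, got, tc.want)
			}
		})
	}
}
```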
### Install etcd dependency
@@ -201,7 +201,7 @@ make test-integration # Run all integration tests.
```
This script runs the golang tests in package
-[`test/integration`](https://github.com/kubernetes/kubernetes/tree/master/test/integration).
+[`test/integration`](https://git.k8s.io/kubernetes/test/integration).
### Run a specific integration test
diff --git a/contributors/devel/vagrant.md b/contributors/devel/vagrant.md
index 1ecd8157..98d150ac 100644
--- a/contributors/devel/vagrant.md
+++ b/contributors/devel/vagrant.md
@@ -227,7 +227,7 @@ my-nginx 3 3 3 3 1m
We did not start any Services, hence there are none listed. But we see three
replicas displayed properly. Check the
-[guestbook](https://github.com/kubernetes/examples/tree/master/guestbook)
+[guestbook](https://git.k8s.io/examples/guestbook)
application to learn how to create a Service. You can already play with scaling
the replicas with:
diff --git a/contributors/devel/writing-good-e2e-tests.md b/contributors/devel/writing-good-e2e-tests.md
index dd782d50..0658aad2 100644
--- a/contributors/devel/writing-good-e2e-tests.md
+++ b/contributors/devel/writing-good-e2e-tests.md
@@ -146,7 +146,7 @@ right thing.
Here are a few pointers:
-+ [E2e Framework](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/framework.go):
++ [E2e Framework](https://git.k8s.io/kubernetes/test/e2e/framework/framework.go):
Familiarise yourself with this test framework and how to use it.
Amongst others, it automatically creates uniquely named namespaces
within which your tests can run to avoid name clashes, and reliably
@@ -160,7 +160,7 @@ Here are a few pointers:
should always use this framework. Trying other home-grown
approaches to avoiding name clashes and resource leaks has proven
to be a very bad idea.
-+ [E2e utils library](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/util.go):
++ [E2e utils library](https://git.k8s.io/kubernetes/test/e2e/framework/util.go):
This handy library provides tons of reusable code for a host of
commonly needed test functionality, including waiting for resources
to enter specified states, safely and consistently retrying failed
@@ -178,9 +178,9 @@ Here are a few pointers:
+ **Follow the examples of stable, well-written tests:** Some of our
existing end-to-end tests are better written and more reliable than
others. A few examples of well-written tests include:
- [Replication Controllers](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/apps/rc.go),
- [Services](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/network/service.go),
- [Reboot](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/lifecycle/reboot.go).
+ [Replication Controllers](https://git.k8s.io/kubernetes/test/e2e/apps/rc.go),
+ [Services](https://git.k8s.io/kubernetes/test/e2e/network/service.go),
+ [Reboot](https://git.k8s.io/kubernetes/test/e2e/lifecycle/reboot.go).
+ [Ginkgo Test Framework](https://github.com/onsi/ginkgo): This is the
test library and runner upon which our e2e tests are built. Before
you write or refactor a test, read the docs and make sure that you
diff --git a/contributors/guide/README.md b/contributors/guide/README.md
index fe33b536..0eda6d9c 100644
--- a/contributors/guide/README.md
+++ b/contributors/guide/README.md
@@ -5,7 +5,7 @@ sig-contributor-experience
## Disclaimer
Hello! This is the starting point for our brand new contributor guide, currently underway as per [issue#6102](https://github.com/kubernetes/website/issues/6102) and in need of help. Please be patient, or fix a section below that needs improvement, and submit a pull request!
-Many of the links below should lead to relevant documents scattered across the community repository. Often, the linked instructions need to be updated or cleaned up.
+Many of the links below should lead to relevant documents scattered across the community repository. Often, the linked instructions need to be updated or cleaned up.
* If you do so, please move the relevant file from its previous location to the community/contributors/guide folder, and delete its previous location.
* Our goal is that all contributor guide specific files live in this folder.
@@ -17,9 +17,9 @@ For example:
_Improvements needed_
* kubernetes/kubernetes/contributing.md -> point to this guide
-* kubernetes/community/CONTRIBUTING.md -> Needs a rewrite
+* kubernetes/community/CONTRIBUTING.md -> Needs a rewrite
-* kubernetes/community/README.md -> Needs a rewrite
+* kubernetes/community/README.md -> Needs a rewrite
* Individual SIG contributing documents -> add a link to this guide
@@ -69,7 +69,7 @@ _Improvements needed_
* RyanJ from Red Hat is working on this
-## Community Expectations
+## Community Expectations
Kubernetes is a community project. Consequently, it is wholly dependent on its community to provide a productive, friendly and collaborative environment.
@@ -99,7 +99,7 @@ You get the idea - if you ever see something you think should be fixed, you shou
### Find a good first topic
There are multiple repositories within the Kubernetes community and a full list of repositories can be found [here](https://github.com/kubernetes/).
-Each repository in the Kubernetes organization has beginner-friendly issues that provide a good first issue. For example, [kubernetes/kubernetes](https://git.k8s.io/kubernetes) has [help-wanted issues](https://issues.k8s.io/?q=is%3Aopen+is%3Aissue+label%3A%22help%20wanted%22) that should not need deep knowledge of the system.
+Each repository in the Kubernetes organization has beginner-friendly issues that provide a good first issue. For example, [kubernetes/kubernetes](https://git.k8s.io/kubernetes) has [help wanted issues](https://go.k8s.io/help-wanted) that should not need deep knowledge of the system.
Another good strategy is to find a documentation improvement, such as a missing/broken link, which will give you exposure to the code submission/review process without the added complication of technical depth. Please see [Contributing](#Contributing) below for the workflow.
### Learn about SIGs
@@ -111,7 +111,7 @@ SIGs also have their own CONTRIBUTING.md files, which may contain extra informat
Like everything else in Kubernetes, a SIG is an open, community effort. Anybody is welcome to jump into a SIG and begin fixing issues, critiquing design proposals and reviewing code. SIGs have regular [video meetings](https://kubernetes.io/community/) which everyone is welcome to attend. Each SIG has a Kubernetes Slack channel that you can join as well.
-There is an entire SIG ([sig-contributor-experience](../../sig-contributor-experience/README.md)) devoted to improving your experience as a contributor.
+There is an entire SIG ([sig-contributor-experience](/sig-contributor-experience/README.md)) devoted to improving your experience as a contributor.
Contributing to Kubernetes should be easy. If you find a rough edge, let us know! Better yet, help us fix it by joining the SIG; just
show up to one of the [bi-weekly meetings](https://docs.google.com/document/d/1qf-02B7EOrItQgwXFxgqZ5qjW0mtfu5qkYIF1Hl4ZLI/edit).
@@ -119,23 +119,23 @@ show up to one of the [bi-weekly meetings](https://docs.google.com/document/d/1q
Finding the appropriate SIG for your contribution will help you ask questions in the correct place and give your contribution higher visibility and a faster community response.
-For Pull Requests, the automatically assigned reviewer will add a SIG label if you haven't done so. See [Open A Pull Request](#open-a-pull-request) below.
+For Pull Requests, the automatically assigned reviewer will add a SIG label if you haven't done so. See [Open A Pull Request](#open-a-pull-request) below.
-For Issues we are still working on a more automated workflow. Since SIGs do not directly map onto Kubernetes subrepositories, it may be difficult to find which SIG your contribution belongs in. Here is the [list of SIGs](/sig-list.md). Determine which is most likely related to your contribution.
+For Issues we are still working on a more automated workflow. Since SIGs do not directly map onto Kubernetes subrepositories, it may be difficult to find which SIG your contribution belongs in. Here is the [list of SIGs](/sig-list.md). Determine which is most likely related to your contribution.
-*Example:* if you are filing a cni issue, you should choose SIG-networking.
+*Example:* if you are filing a cni issue, you should choose SIG-networking.
Follow the link in the SIG name column to reach each SIG's README. Most SIGs will have a set of GitHub Teams with tags that can be mentioned in a comment on issues and pull requests for higher visibility. If you are not sure about the correct SIG for an issue, you can try SIG-contributor-experience [here](/sig-contributor-experience#github-teams), or [ask in Slack](http://slack.k8s.io/).
-_Improvements needed_
+_Improvements needed_
-* Open pull requests with all applicable SIGs to not have duplicate information in their CONTRIBUTING.md and instead link here. Keep it light, keep it clean, have only one source of truth.
+* Open pull requests with all applicable SIGs to not have duplicate information in their CONTRIBUTING.md and instead link here. Keep it light, keep it clean, have only one source of truth.
### File an Issue
-Not ready to contribute code, but see something that needs work? While the community encourages everyone to contribute code, it is also appreciated when someone reports an issue (aka problem). Issues should be filed under the appropriate Kubernetes subrepository.
+Not ready to contribute code, but see something that needs work? While the community encourages everyone to contribute code, it is also appreciated when someone reports an issue (aka problem). Issues should be filed under the appropriate Kubernetes subrepository.
-*Example:* a documentation issue should be opened to [kubernetes/website](https://github.com/kubernetes/website/issues).
+*Example:* a documentation issue should be opened to [kubernetes/website](https://github.com/kubernetes/website/issues).
Make sure to adhere to the prompted submission guidelines while opening an issue.
@@ -159,7 +159,7 @@ For questions and troubleshooting, please feel free to use any of the methods of
To check out code to work on, please refer to [this guide](/contributors/devel/development.md#workflow).
-_Improvements needed_
+_Improvements needed_
* move github workflow into its own file in this folder.
@@ -251,4 +251,3 @@ _Improvements needed_
_Improvements needed_
* Link and mini description for Kubernetes Pilots should go here.
-