| author | Antoine Pelisse <apelisse@google.com> | 2017-06-21 11:03:15 -0700 |
|---|---|---|
| committer | Antoine Pelisse <apelisse@google.com> | 2017-06-21 11:03:15 -0700 |
| commit | f1cb8f8f754ec2ffd8b479124123b5a254366ae4 (patch) | |
| tree | b16cea5fd75ca4b3fbde7e2dff9bf41a0b3c20eb /community/developer-summit-2016 | |
| parent | febbf0c0f74faf895c79adbad143dd0a095fa64a (diff) | |
Move developer-summit-2016 to 2016-events
Diffstat (limited to 'community/developer-summit-2016')
| -rw-r--r-- | community/developer-summit-2016/KubDevSummitVoting.md | 33 |
| -rw-r--r-- | community/developer-summit-2016/Kubernetes_Dev_Summit.md | 96 |
| -rw-r--r-- | community/developer-summit-2016/application_service_definition_notes.md | 48 |
| -rw-r--r-- | community/developer-summit-2016/cluster_federation_notes.md | 21 |
| -rw-r--r-- | community/developer-summit-2016/cluster_lifecycle_notes.md | 132 |
| -rw-r--r-- | community/developer-summit-2016/k8sDevSummitSchedule.pdf | bin 54169 -> 0 bytes |
| -rw-r--r-- | community/developer-summit-2016/statefulset_notes.md | 38 |
7 files changed, 0 insertions, 368 deletions
diff --git a/community/developer-summit-2016/KubDevSummitVoting.md b/community/developer-summit-2016/KubDevSummitVoting.md deleted file mode 100644 index 46909c02..00000000 --- a/community/developer-summit-2016/KubDevSummitVoting.md +++ /dev/null @@ -1,33 +0,0 @@ -### Kubernetes Developer's Summit Discussion Topics Voting
-A voting poll for discussion topic proposals has been created, and the link to the poll can be found [here][poll].
-
-The poll will close on 10/07/16 at 23:59:59 PDT.
-
-#### How Does it Work?
-The voting uses the Condorcet method, which relies on relative rankings to pick winners. You can read more about the Condorcet method and the voting service we're using on the [CIVS website][civs].
-
-There are 27 topics to choose from, and you will rank them from 1 (favorite) to 27 (least favorite). You can also mark "no opinion" on topics that you don't wish to include in the ranking.
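-For readers unfamiliar with the method, a rough, hypothetical sketch of Condorcet-style tallying (this is an illustration only, not CIVS's actual implementation): each pair of candidates is compared head-to-head across all ballots, and a candidate that beats every other candidate pairwise is the Condorcet winner.

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate that beats every other candidate head-to-head,
    or None if no Condorcet winner exists.

    Each ballot maps candidate -> rank, where 1 is the voter's favorite.
    """
    def beats(a, b):
        # a beats b if more voters rank a above (i.e. with a lower number than) b
        a_wins = sum(1 for ballot in ballots if ballot[a] < ballot[b])
        b_wins = sum(1 for ballot in ballots if ballot[b] < ballot[a])
        return a_wins > b_wins

    for candidate in candidates:
        if all(beats(candidate, other)
               for other in candidates if other != candidate):
            return candidate
    return None


# Three voters ranking three (hypothetical) topics, 1 = favorite:
ballots = [
    {"helm": 1, "federation": 2, "statefulsets": 3},
    {"helm": 2, "federation": 1, "statefulsets": 3},
    {"helm": 1, "statefulsets": 2, "federation": 3},
]
print(condorcet_winner(ballots, ["helm", "federation", "statefulsets"]))  # helm
```

-CIVS additionally handles "no opinion" rankings and the case where no Condorcet winner exists, which this sketch does not.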
-
-The poll on CIVS has just the topic titles for ease of viewing. For topic descriptions, please see this [spreadsheet][topics]. The topic order on the voting service should mirror the order on the spreadsheet.
-
-You will note the message saying "*Only the 15 favorite choices will win the poll*". CIVS requires that a number of winners be selected, and we have arbitrarily chosen 15. The final schedule may include more or fewer than 15 of the submitted topics.
-
-##### A Small Request
-
-In order to make the poll accessible via URL, it has to be made "public". This means that any unique IP address can vote, which can easily be exploited for multiple votes.
-
-We fully expect the community to behave with sportsmanship and only vote once, and as such we almost didn't bring this concern up to begin with. However, we have chosen to explicitly address it, in order to reiterate the importance of everyone's voice receiving equal weight in a community-driven event.
-
-#### After the Poll
-A schedule will be made from the winning topics with
-some editorial license, and the schedule will be announced to the group
-at least a week before the event.
-
-[//]: # (Reference Links)
- [civs]: <http://civs.cs.cornell.edu/>
- [poll]: <http://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_9ef4ac5e58c4cab1&akey=7cc2652f9715b525>
- [topics]: <https://docs.google.com/spreadsheets/d/1bmp6uLG8H32MVz02-zsieeKKZDZITw-8B6UjOaNTXOA/edit?usp=sharing>
\ No newline at end of file diff --git a/community/developer-summit-2016/Kubernetes_Dev_Summit.md b/community/developer-summit-2016/Kubernetes_Dev_Summit.md deleted file mode 100644 index f66df508..00000000 --- a/community/developer-summit-2016/Kubernetes_Dev_Summit.md +++ /dev/null @@ -1,96 +0,0 @@ -# Kubernetes Dev Summit -# - -## Edit - Event Location -## -The event is on the 4th Floor of the *Union Street Tower* of the Sheraton Seattle Hotel. - -# About the Event -# -The Kubernetes Developers' Summit provides an avenue for Kubernetes -developers to connect face to face and mindshare about future community -development and community governance endeavors. - -In some sense, the summit is a real-life extension of the community -meetings and SIG meetings. - -## Event Format -## -The Dev Summit is a "loosely-structured [unconference][uncf]". Rather -than speakers and presentations, we will have moderators/facilitators -and discussion topics, alongside all-day completely unstructured -hacking. - -The discussion sessions will be in an open fishbowl format — rings of -chairs, with inner rings driving the discussion — where anyone can -contribute. The various sessions will be proposed and voted on by the -community in the weeks leading up to the event. This allows the -community to motivate the events of the day without dedicating precious -day-of time to choosing the sessions and making a schedule. - -There will be 3 rooms dedicated to these sessions running in parallel -all day. Each session should last between 45 minutes and an hour. - -Then, there will be 2 smaller rooms for hacking / unstructured -discussion all day. - -#### Who Should Go? -#### -The target audience is the Kubernetes developer community. The group -will be relatively small (~120-150 attendees), to improve communication -and facilitate easier decision-making. The majority of the attendees -will be selected from key company team and SIG leaders, power-users, and -the most active contributors. 
An additional pool of tickets will be -awarded via lottery. **Any interested party** should enter the lottery via -[this form][lotfrm]. Invitees will receive an invitation on or before October 12th. -RSVP information will be available in the invitation email. Tickets are -not transferrable. - -Please note that this Summit is not the right environment for people to -*start* learning about the Kubernetes project. There are plenty of -[meetups][mtp] organized by global user groups where one can get -involved initially. - -#### Call for Proposals -#### -Proposals for discussion topics can be submitted through [this -form][propfrm] by September 30, 23:59 PT. If you propose a session topic, -please be prepared to attend and facilitate the session if it gets -chosen. Other members will help with moderating, either as volunteer -co-facilitators or as members of the larger discussion group. - -Suggestions for session topic themes: - -* Hashing out technical issues -* Long term component / SIG planning - -In early October, proposal topics will be posted to the kubernetes-dev -mailing list and voted on via [CIVS][civs], the Condorcet Internet -Voting Service. A schedule will be made from the winning topics with -some editorial license, and the schedule will be announced to the group -at least a week before the event. - -## When & Where? -## -The Dev Summit will follow [Kubecon][kbc] on November 10th, 2016. -Fortunately for those who attend Kubecon, the Dev Summit will be at the -same venue, [the Sheraton Seattle Hotel][sher]. As of now, the day's -activities should run from breakfast being served at 8 AM to closing -remarks ending around 3:30 PM, with an external happy hour to follow. 
- -## Desired outcomes -* Generate notes from the sessions to feed the project's documentation -and knowledge base, and also to keep non-attendees plugged in -* Make (and document) recommendations and decisions for the near-term and -mid-term future of the project -* Come up with upcoming action items, as well as leaders for those action items, for the various topics that we discuss - -[//]: # (Reference Links) - [uncf]: <https://en.wikipedia.org/wiki/Unconference> - [mtp]: <http://www.meetup.com/topics/kubernetes/all/> - [lotfrm]: <https://docs.google.com/forms/d/e/1FAIpQLSe8t6pvRjh1OeF6xrXKbXmzHGMhQ4c-MbZ6QUr9APJNjpgAzA/viewform> - [propfrm]: <https://docs.google.com/forms/d/e/1FAIpQLSf30x18OGCv_Und7qah4y5Zs3Z-0YoBCo964ZsmhtbxBjMzxA/viewform> - [civs]: <http://civs.cs.cornell.edu/> - [kbc]: <http://events.linuxfoundation.org/events/kubecon> - [sher]: <http://www.sheratonseattle.com/> diff --git a/community/developer-summit-2016/application_service_definition_notes.md b/community/developer-summit-2016/application_service_definition_notes.md deleted file mode 100644 index e8f4c0c5..00000000 --- a/community/developer-summit-2016/application_service_definition_notes.md +++ /dev/null @@ -1,48 +0,0 @@ -# Service/Application Definition - -We think we need to help developers figure out how to organize their services, how to define them nicely, and how to deploy them on the orchestrator of their choice. Writing the Kube files is a steep learning curve. So can we have something which is a little bit easier? - -Helm solves one purpose for this. - -Helm contrib: one of the things folks ask us for is a workflow that starts from a dockerfile and goes dockerfile-->imagebuild-->registry-->resource def. - -There are different ways to package applications. There's the potential for a lot of fragmentation in multi-pod application definitions. Can we create standards here? - -We want to build and generate manifests with one tool.
We want "fun in five", that is, have it up and running in five minutes or less. - -Another issue is testing mode; currently production-quality Helm charts don't really work on minikube. There are some issues around this which we know about. We need dummy PVCs, LoadBalancer, etc. Also DNS and Ingress. - -We need the 80% case; Fabric8 is a good example of this. We need a good set of boundary conditions so that the new definition doesn't get bigger than the Kube implementation. Affinity/placement is a good example of the "other 20%". - -We also need to look at how to get developer feedback on this so that we're building what they need. Pradeepto did a comparison of Kompose vs. Docker Compose for simplicity/usability. - -One of the things we're discussing is the Kompose API. We want to get rid of this and supply something which people can use directly with kubernetes. A bunch of shops only have developers. Someone asked, though, what's so complicated about Kube definitions. Have we identified what gives people trouble with this? We push too many concepts on developers too quickly. We want some high-level abstract types which represent the 95% use case. Then we could decompose these to the real types. - -What's the gap between compose files and the goal? As an example, say you want to run a webserver pod. You have to deal with ingress, and service, and replication controller, and a bunch of other things. What's the equivalent of "docker run", which is easy to get? The critical thing is how fast you can learn it. - -We also need to have reversibility, so that if you use compose you don't have to edit the kube config after deployment; you can still use the simple concepts. The context of the chart needs to not be lost. - -There was discussion of templating applications. One person argued that it's really a type system. Erin suggested that it's more like a personal template, like a car seat configuration. - -There's a need to let developers work on "their machine" using the same spec.
Looking through docker-compose, it's about what developers want, not what kubernetes wants. This needs to focus on what developers know, not the kube objects. - -Someone argued that if we use deployments it's really not that complex. We probably use too much complexity in our examples. But if we want to do better than docker-compose, what does it look like? Having difficulty imagining what that is. - -Maybe the best approach is to create a list of what we need for "what is my app" and compare it with current deployment files. - -There was a lot of discussion of what this looks like. - -Is this different from what the PAASes already do? It's not that different, we want something to work with core kubernetes, and also PAASes are opinionated in different ways. - -Being able to view an application as a single unifying concept is a major desire. Want to click "my app" and see all of the objects associated with it. It would be an overlay on top of Kubernetes, not something in core. - -One pending feature is that you can't look up different types of controllers in the API, that's going to be fixed. Another one is that we can't trace the dependencies; helm doesn't label all of the components deployed with the app. - -Need to identify things which are missing in core kubernetes, if there are any. - -## Action Items: - -* Reduce the verbosity of injecting configmaps. We want to simplify the main kubernetes API. For example, there should be a way to map all variables to ENV as one statement. -* Document where things are hard to understand with deployments. -* Document where things don't work with minikube and deployments. -* Document what's the path from minecraft.jar to running it on a kubernetes cluster? 
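-A note on the first action item: the wish to "map all variables to ENV as one statement" later landed in core Kubernetes as `envFrom`. A minimal sketch, where the ConfigMap name, pod name, and image are illustrative placeholders:

```yaml
# Hypothetical pod that imports every key of the ConfigMap "app-config"
# as an environment variable in a single statement, instead of listing
# each key individually with valueFrom/configMapKeyRef.
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-demo
spec:
  containers:
  - name: app
    image: nginx          # placeholder image
    envFrom:
    - configMapRef:
        name: app-config  # assumed to exist in the same namespace
```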
diff --git a/community/developer-summit-2016/cluster_federation_notes.md b/community/developer-summit-2016/cluster_federation_notes.md deleted file mode 100644 index 362cbafe..00000000 --- a/community/developer-summit-2016/cluster_federation_notes.md +++ /dev/null @@ -1,21 +0,0 @@ -# Cluster Federation - -There are a whole bunch of reasons why federation is interesting. There's HA, there's geographic locality, there's just managing very large clusters. Use cases: - -* HA -* Hybrid Cloud -* Geo/latency -* Scalability (many large clusters instead of one gigantic one) -* Visibility across multiple clusters - -You don't actually need federation for geo-location now, but it helps. The mental model for this is kind of like Amazon AZs or Google zones. Sometimes we don't care where a resource is, but sometimes we do. Sometimes you want specific policy control, like regulatory constraints about what can run where. - -From the enterprise point of view, central IT wants control and knowledge of where stuff gets deployed. Bob thinks it would be a very bad idea for us to try to solve complex policy ideas and enable them; it's a tar pit. We should just have the primitives of having different regions and be able to say what goes where. - -Currently, you either do node labelling, which ends up being complex and dependent on discipline, or you have different clusters and you don't have common namespaces. There was some discussion of the Intel proposal for cluster metadata. - -Bob's mental model is AWS regions and AZs. For example, if you're building a big Cassandra cluster, you want to make sure that the nodes aren't all in the same zone. - -Quinton went over a WIP implementation for applying policies, with a tool which applies policy before resource requests go to the scheduler. It uses an open-source policy language, and labels on the request. - -Notes interrupted here; hopefully other members will fill in.
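-The Cassandra zone-spreading example above can be expressed with pod anti-affinity. A hedged sketch of the pod-spec fragment — the `app: cassandra` label is an assumption, and the zone label key shown is the beta-era one (later renamed `topology.kubernetes.io/zone`):

```yaml
# Hypothetical fragment: prefer scheduling cassandra pods into zones
# that do not already contain another cassandra pod.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: cassandra                              # assumed pod label
        topologyKey: failure-domain.beta.kubernetes.io/zone
```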
diff --git a/community/developer-summit-2016/cluster_lifecycle_notes.md b/community/developer-summit-2016/cluster_lifecycle_notes.md deleted file mode 100644 index f42df9f6..00000000 --- a/community/developer-summit-2016/cluster_lifecycle_notes.md +++ /dev/null @@ -1,132 +0,0 @@ -# Cluster Lifecycle Deployment & Upgrade Roadmap - -Moderator: Mike Danese - -Note taker: Robert Bailey - -Date: 2016-11-10 - -## Goals - -Discuss HA, upgrades, and config management beyond kubeadm/kops & try to identify things that are currently underserved (upgrade testing, version skew policy, security upgrades) - -## Discussion - -kubeadm - not destined for production? -* Doing resource provisioning (cloud VMs) is out of scope -* Should be a toolbox that does the common parts of cluster lifecycle - * And should be able to break out just the pieces that you want -* Found a bunch of common parts of existing cluster deployment and want to build more of it into the core - -AI (luke): Create an intro guide to cluster lifecycle - -Anyone who wants to work on Upgrades the Hard Way can join Rob this afternoon. - -Wishlist -* HA -* Upgrades -* Config Management -* Toolbox vs. guided flow -* Documentation -* Conformance Testing -* PKI - -### HA - -* Story hasn't changed in a long time: - * People set up clusters and run them in production - * Lack of documentation -* What haven't we been focused on? - * Some day there may be apiservers that move, but there aren't today - * If you've misconfigured it, it's really hard to debug - * E.g. If you just launch 2 apiservers they fight over the endpoint.
It requires insider knowledge to recognize this and know how to fix it - * Forces an IP address on the endpoint, which isn't compatible with AWS - * AI (claytonc): Fix the flag to take a host instead of just an IP address -* There are things that are command line flags that are going to be a pain to synchronize - * Move more configuration into etcd - -### Upgrades - -* Minor and patch upgrades; look at them separately -* Do we have skew requirements that are different for minor vs patch upgrades? - * E.g. Can you upgrade nodes before the master for a patch version? -* AI: Socialize the existing version skew documentation -* AI: Clarify the version skew documentation -* Do we want to support 4-part version numbers? - * Chris Love: please don't do this -* Mike Rubin: Patch releases shouldn't create surprises -* Distribution of Kubernetes? -* Jordan Liggitt: Would like to see upgrade documentation on every install guide - * At least for patch releases - * AI (?): File issues against owners for the current getting started guides to add a section on upgrades -* Luke Marsden: I would like to lead an effort to support self hosting in some of the user flows (in particular a kubeadm flow) in an attempt to make it really easy to deploy a patch upgrade - * Assume single master, external etcd - * Joe Beda: The nice thing about self hosting upgrades is that there isn't anything cloud-platform specific, which allows us to build more general tooling - -### Distribution - -* In 1.5 we've begun experimenting with OS package management tooling -* Should we push further on this? -* The release tarball is getting bigger - * Should be ameliorated in 1.5 by breaking it into arch-specific bundles -* Jordan Liggitt: We can't tell people to script their install against the tarball because the structure of the tarball becomes an API - -### Config Management - -* Config is currently command line flags. Work is being done to convert into structured API types (outstanding PRs from ncdc@ and mtoffen@).
-* How much should we use config maps vs flags vs something else? -* Joe Beda: Definitely an issue for the kubelet - specifically setting the DNS IP flag - * Vish: Unless/until the kubelet has local checkpointing it's dangerous to use the apiserver for checkpointing configuration, since the apiserver may become unavailable -* Need to figure out how we deal with config that points to other files (e.g. certs) -* Rob: We may want to split the discussion about configuring the kubelet vs the control plane -* Mike Rubin: The kubelet will eventually need to be able to run standalone. Need to think about packaging and configuration as distinct. - * The Kubelet has a lot of value if it can work without an apiserver - * The node effort takes Docker + Linux and productizes it -* Mike Danese: The same type of config could also benefit other components -* Jordan Liggitt: Have client cert bootstrap stuff in the Kubelet - -### Toolbox vs guided flow - -* [mostly skipped for time] -* Chris Love: Need to document how we compartmentalize each thing -* Luke: This isn't a "vs", it's an "and" - -### Documentation - -* What is lacking? -* Joe Beda: The fact that Kelsey had to write "Kubernetes the hard way" shows that we don't have documentation -* Chris Love: HA upgrades -* Jordan Liggitt: Docs should look like a tree - * Start at the high level; if you want more detail, then you can drill down into each piece - * If you expand to all of the leaves, then you end up back at k8s the hard way -* Mike Rubin: Questions from support/users are less about setting up and more about tearing down - * What will still be around after destroying a cluster - * E.g.
Deleting a namespace, deleting a cluster from a federation, deleting a node from the cluster -* Need an introduction to the SIG (Luke already volunteered to write one) -* Mike Rubin: Rollbacks and rollback documentation - * When you add a new feature (say in 1.4) and we roll back to an earlier version, what happens to those resources? - * Chris Love: the elephant in the room is what happens when you roll back from etcd3 → etcd2 - -### Conformance Testing - -* What can we do in 2017 to make progress on this? -* Jordan Liggitt: Need to categorize conformance tests into ones that you could run against a production cluster vs those that you shouldn't -* Lucas: Three levels of validation: node, k8s standard base, deep/destructive testing. Want to make these all easy through kubeadm -* Is performance testing out of scope? - * Clayton: Misconfiguration is often caught through performance testing, so we shouldn't remove it from scope - -### PKI - -* Jordan Liggitt: Have client cert bootstrap stuff in the Kubelet -* Chris Love: Need to loop in sig-auth. Need to use TLS certs for etcd clusters. -* Aaron Levy: Plan to add CSR into the etcd operation similar to what is going into the k8s api. -* Joe Beda: Two modes right now: can have the apiserver act as a CA; many serious users will want to use their own CA -* Jordan Liggitt: Things that need certs should be able to take them, or components should be able to generate them (if appropriate) - * Rotation depends on whether we are using the built-in CA or an external CA -* Mike Rubin: Why not do both rotation and revocation? - * Rob: Many applications don't respect revocation, so it's generally considered weaker -* Clayton: If you can rotate then you may not need to revoke -* Jordan Liggitt: Tied to config management -* Joe Beda: Part of the discovery info is the root CA, and many people don't realize that it can be a bundle instead of a single CA — this enables rotation -* Clayton: In a secured cluster, etcd is the core.
Have to think about it as the inner circle of security that the apiserver is outside of. If you are extremely cautious then you should use client certs in the apiserver. You can collapse the rings if you want. - diff --git a/community/developer-summit-2016/k8sDevSummitSchedule.pdf b/community/developer-summit-2016/k8sDevSummitSchedule.pdf Binary files differ deleted file mode 100644 index 6cd8d01b..00000000 --- a/community/developer-summit-2016/k8sDevSummitSchedule.pdf +++ /dev/null diff --git a/community/developer-summit-2016/statefulset_notes.md b/community/developer-summit-2016/statefulset_notes.md deleted file mode 100644 index d019d256..00000000 --- a/community/developer-summit-2016/statefulset_notes.md +++ /dev/null @@ -1,38 +0,0 @@ -# StatefulSets Session - -Topics to talk about: -* local volumes -* requests for the storage sig -* reclaim policies -* Filtering APIs for scheduler -* Data locality -* State of the StatefulSet -* Portable IPs -* Sticky Regions -* Renaming Pods - -## State of the StatefulSet - -1.5 will come out soon, and StatefulSets will go beta in that release. One of the questions is what are the next steps for StatefulSets? One thing is a long beta, so that we know we can trust StatefulSets and they're safe. - -Missed some discussion here about force deletion. - -The pod isn't done until the kubelet says it's done. The issue is what happens when we have a netsplit, because the master doesn't know what's happening with the pods. In the future we'll maybe add some kind of fencer to make sure that they can't rejoin. Fencing is probably a topic for the Bare-Metal SIG. - -Are we going to sacrifice availability for consistency? We won't explicitly take actions which aren't safe automatically. Question: should the kubelet delete automatically if it can't contact the master? No, because it can't contact the master to say it did it. - -When are we going to finish the rename from PetSet to StatefulSet?
The PR is merged for renaming, but the documentation changes aren't. - -Storage provisioning? The assumption is that you will be able to preallocate a lot of storage for dynamic provisioning so that you can stamp out PVCs. If dynamic volumes aren't simple to use, this is a lot more annoying. - -Building initial quorums issue? - -It would be great to have a developer storage class which ties back to a fake NFS, for testing and dev. The idea behind local volumes is that it should be easy to create throwaway storage on local disk, so that you can write things which run on every kube cluster. - -Will there be an API for the application? To communicate members joining and leaving. The answer today is that's what the Kube API is for. - -The hard problem is config change. You can't do config change unless you bootstrap it correctly. If kube is changing things under me I can't maintain quorum (as an app). This happens when expanding the set of nodes. You need to figure out who's in and who's out. - -Where does the glue software which relates the statefulset to the application live? But different applications handle things like consensus and quorum very differently. What about notifying the service that you're available for traffic? There's an example of this with etcd, with readiness vs. a membership service. You can have two states, one where the node is ready, and one where the application is ready. Readiness vs. liveness check could differentiate? - -Is rapid spin-up a real issue? Nobody thinks so.
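-Two of the points above — stamping out a PVC per replica, and separating "process is alive" from "application is ready for traffic" with probes — look roughly like this in a StatefulSet. This is a hedged sketch: the names, image, port, health path, and storage class are all illustrative, not taken from these notes.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd                        # illustrative stateful app
spec:
  serviceName: etcd
  replicas: 3
  selector:
    matchLabels:
      app: etcd
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
      - name: etcd
        image: quay.io/coreos/etcd:v3.1.0   # placeholder image/tag
        livenessProbe:                      # "the node/process is alive"
          tcpSocket:
            port: 2379
        readinessProbe:                     # "the application is ready for traffic"
          httpGet:
            path: /health                   # assumed health endpoint
            port: 2379
        volumeMounts:
        - name: data
          mountPath: /var/lib/etcd
  volumeClaimTemplates:                     # one PVC is stamped out per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard            # assumed dynamic-provisioning class
      resources:
        requests:
          storage: 1Gi
```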