Core Workloads API GA

DaemonSet, Deployment, ReplicaSet, and StatefulSet are GA

Editor's Note: We're happy to announce that the Core Workloads API is GA in Kubernetes 1.9! This blog post from Kenneth Owens reviews how Core Workloads got to GA from its origins, reveals the changes in 1.9, and talks about what you can expect going forward.

In the Beginning …

There were Pods: tightly coupled containers that share resource requirements, networking, storage, and a lifecycle. Pods were useful, but, as it turns out, users wanted to seamlessly, reproducibly, and automatically create many identical replicas of the same Pod, so we created ReplicationController.

Replication was a step forward, but what users really needed was higher level orchestration of their replicated Pods. They wanted rolling updates, roll backs, and roll overs. So the OpenShift team created DeploymentConfig. DeploymentConfigs were also useful, and OpenShift users were happy. In order to allow all OSS Kubernetes users to share in the elation, and to take advantage of set-based label selectors, ReplicaSet and Deployment were added to the extensions/v1beta1 group version, providing rolling updates, roll backs, and roll overs for all Kubernetes users.

That mostly solved the problem of orchestrating containerized 12 factor apps on Kubernetes, so the community turned its attention to a different problem. Replicating a Pod <n> times isn't the right hammer for every nail in your cluster. Sometimes you need to run a Pod on every Node, or on a subset of Nodes (for example, shared side cars like log shippers and metrics collectors, Kubernetes add-ons, and distributed file systems). The state of the art was Pods combined with NodeSelectors, or static Pods, but this is unwieldy. After having grown used to the ease of automation provided by Deployments, users demanded the same features for this category of application, so DaemonSet was added to extensions/v1beta1 as well.

For a time, users were content, until they decided that Kubernetes needed to be able to orchestrate more than just 12 factor apps and cluster infrastructure. Whether your architecture is N-tier, service oriented, or micro-service oriented, your 12 factor apps depend on stateful workloads (for example, RDBMSs, distributed key value stores, and messaging queues) to provide services to end users and other applications. These stateful workloads can have availability and durability requirements that can only be achieved by distributed systems, and users were ready to use Kubernetes to orchestrate the entire stack.

While Deployments are great for stateless workloads, they don't provide the right guarantees for the orchestration of distributed systems. These applications can require stable network identities; ordered, sequential deployment, updates, and deletion; and stable, durable storage. PetSet was added to the apps/v1beta1 group version to address this category of application. Unfortunately, we were less than thoughtful with its naming, and, as we always strive to be an inclusive community, we renamed the kind to StatefulSet.

Finally, we were done… Or were we?

Kubernetes 1.8 and apps/v1beta2

Pod, ReplicationController, ReplicaSet, Deployment, DaemonSet, and StatefulSet came to collectively be known as the core workloads API. We could finally orchestrate all of the things, but the API surface was spread across three groups, had many inconsistencies, and left users wondering about the stability of each of the core workloads kinds.
It was time to stop adding new features and focus on consistency and stability.

Pod and ReplicationController were already at GA stability, and even though you can run a workload in a Pod, it's a nucleus primitive that belongs in core. As Deployments are the recommended way to manage your stateless apps, moving ReplicationController would serve no purpose. In Kubernetes 1.8, we moved all the other core workloads API kinds (Deployment, DaemonSet, ReplicaSet, and StatefulSet) to the apps/v1beta2 group version. This had the benefit of providing a better aggregation across the API surface, and of allowing us to break backward compatibility to fix inconsistencies. Our plan was to promote this new surface to GA, wholesale and as is, when we were satisfied with its completeness. The modifications in this release, which are also implemented in apps/v1, are described below.

Selector Defaulting Deprecated

In prior versions of the apps and extensions groups, label selectors of the core workloads API kinds were, when left unspecified, defaulted to a label selector generated from the kind's template's labels.

This was completely incompatible with strategic merge patch and kubectl apply. Moreover, we've found that defaulting the value of a field from the value of another field of the same object is an anti-pattern in general, and particularly dangerous for the API objects used to orchestrate workloads.

Immutable Selectors

Selector mutation, while allowing for some use cases like promotable Deployment canaries, is not handled gracefully by our workload controllers, and we have always strongly cautioned users against it. To provide a consistent, usable, and stable API, selectors were made immutable for all kinds in the workloads API.

We believe that there are better ways to support features like promotable canaries and orchestrated Pod relabeling, but, if restricted selector mutation is a necessary feature for our users, we can relax immutability in the future without breaking backward compatibility.

The development of features like promotable canaries, orchestrated Pod relabeling, and restricted selector mutability is driven by demand signals from our users. If you are currently modifying the selectors of your core workloads API objects, please tell us about your use case via a GitHub issue, or by participating in SIG Apps.

Default Rolling Updates

Prior to apps/v1beta2, some kinds defaulted their update strategy to something other than RollingUpdate (e.g. apps/v1beta1 StatefulSet uses OnDelete by default). We wanted to be confident that RollingUpdate worked well prior to making it the default update strategy, and we couldn't change the default behavior in released versions without breaking our promise with respect to backward compatibility. In apps/v1beta2 we enabled RollingUpdate for all core workloads kinds by default.

CreatedBy Annotation Deprecated

The "kubernetes.io/created-by" annotation was a legacy holdover from the days before garbage collection. Users should use an object's ControllerRef from its ownerReferences to determine object ownership. We deprecated this feature in 1.8 and removed it in 1.9.

Scale Subresources

A scale subresource was added to all of the applicable kinds in apps/v1beta2 (DaemonSet scales based on its node selector).
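To make the selector changes concrete, here is a minimal client-go sketch (an illustration added here, not part of the original announcement) that creates a Deployment against the GA apps/v1 types. Note that the selector must be specified explicitly, must match the Pod template labels, and cannot be changed after creation:

  package main

  import (
      appsv1 "k8s.io/api/apps/v1"
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func int32Ptr(i int32) *int32 { return &i }

  func main() {
      // Build a clientset from a kubeconfig file (the path is illustrative).
      config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
      if err != nil {
          panic(err)
      }
      clientset := kubernetes.NewForConfigOrDie(config)

      labels := map[string]string{"app": "demo"}
      deployment := &appsv1.Deployment{
          ObjectMeta: metav1.ObjectMeta{Name: "demo"},
          Spec: appsv1.DeploymentSpec{
              Replicas: int32Ptr(2),
              // In apps/v1 the selector is required, is never defaulted from
              // the template labels, and is immutable after creation.
              Selector: &metav1.LabelSelector{MatchLabels: labels},
              Template: corev1.PodTemplateSpec{
                  ObjectMeta: metav1.ObjectMeta{Labels: labels},
                  Spec: corev1.PodSpec{
                      Containers: []corev1.Container{{Name: "web", Image: "nginx:1.13"}},
                  },
              },
          },
      }

      // client-go v6 signature; newer client-go releases also take a context argument.
      if _, err := clientset.AppsV1().Deployments("default").Create(deployment); err != nil {
          panic(err)
      }
  }

The same rules apply to DaemonSet, ReplicaSet, and StatefulSet manifests: spell out spec.selector and keep it in sync with the template labels.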
Kubernetes 1.9 and apps/v1

In Kubernetes 1.9, as planned, we promoted the entire core workloads API surface to GA in the apps/v1 group version. We made a few more changes to make the API consistent, but apps/v1 is mostly identical to apps/v1beta2. The reality is that most users have been treating the beta versions of the core workloads API as GA for some time now. Anyone who is still using ReplicationControllers and shying away from DaemonSets, Deployments, and StatefulSets due to a perceived lack of stability should plan to migrate their workloads (where applicable) to apps/v1. The minor changes that were made during promotion are described below.

Garbage Collection Defaults to Delete

Prior to apps/v1 the default garbage collection policy for Pods in a DaemonSet, Deployment, ReplicaSet, or StatefulSet was to orphan the Pods. That is, if you deleted one of these kinds, the Pods that they owned would not be deleted automatically unless cascading deletion was explicitly specified. If you use kubectl, you probably didn't notice this, as these kinds are scaled to zero prior to deletion. In apps/v1 all core workloads API objects will now, by default, be deleted when their owner is deleted. For most users, this change is transparent.

Status Conditions

Prior to apps/v1 only Deployment and ReplicaSet had Conditions in their Status objects. For consistency's sake, either all of the objects or none of them should have conditions. After some debate, we decided that Conditions are useful, and we added Conditions to StatefulSetStatus and DaemonSetStatus. The StatefulSet and DaemonSet controllers currently don't populate them, but we may choose to communicate conditions to clients via this mechanism in the future.

Scale Subresource Migrated to autoscaling/v1

We originally added a scale subresource to the apps group. This was the wrong direction for integration with autoscaling, and, at some point, we would like to use custom metrics to autoscale StatefulSets. So the apps/v1 group version uses the autoscaling/v1 scale subresource.
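To illustrate the garbage collection change above from a client's perspective, here is a short sketch (again an illustration, assuming client-go v6 and the clientset from the earlier sketch) that opts back into the old orphaning behavior by setting the propagation policy explicitly:

  // Orphan the dependents instead of relying on the pre-apps/v1 default.
  orphan := metav1.DeletePropagationOrphan
  err := clientset.AppsV1().Deployments("default").Delete("demo", &metav1.DeleteOptions{
      PropagationPolicy: &orphan,
  })
  if err != nil {
      panic(err)
  }

With no explicit policy, deleting an apps/v1 object now also removes the objects it owns.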
Migration and Deprecation

The question most of you are probably asking now is, "What's my migration path onto apps/v1 and how soon should I plan on migrating?" All of the group versions prior to apps/v1 are deprecated as of Kubernetes 1.9, and all new code should be developed against apps/v1, but, as discussed above, many of our users treat extensions/v1beta1 as if it were GA. We realize this, and the minimum support timelines in our deprecation policy are just that: minimums.

In future releases, before completely removing any of the group versions, we will disable them by default in the API server. At that point, you will still be able to use the group version, but you will have to explicitly enable it. We will also provide utilities to upgrade the storage version of the API objects to apps/v1. Remember, all of the versions of the core workloads kinds are bidirectionally convertible. If you want to manually update your core workloads API objects now, you can use kubectl convert to convert manifests between group versions.

What's Next?

The core workloads API surface is stable, but it's still software, and software is never complete. We often add features to stable APIs to support new use cases, and we will likely do so for the core workloads API as well. GA stability means that any new features that we do add will be strictly backward compatible with the existing API surface. From this point forward, nothing we do will break our backwards compatibility guarantees.

If you're looking to participate in the evolution of this portion of the API, please feel free to get involved in GitHub or to participate in SIG Apps.

– Kenneth Owens, Software Engineer, Google

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for the latest updates
Quelle: kubernetes

Cisco Now Reselling Docker Enterprise Edition

Today we are excited to announce the expansion of our partnership with the availability of Docker Enterprise Edition (EE), our container management platform on the Cisco Global Price List (GPL) and the release of the latest Cisco Validated Design (CVD):
Cisco UCS Infrastructure with Contiv and Docker Enterprise Edition for Container Management.

Now customers can purchase Docker EE directly from Cisco and their joint resellers to jumpstart their new year's resolution for a more modern application architecture, reduce IT costs and redirect savings to innovation projects. And with our latest CVD for Cisco UCS compute infrastructure with Contiv, a secure container networking fabric, we've provided a roadmap on how to get started so customers and partners can gain a faster, more reliable and predictable implementation of Docker EE.
For enterprises looking to use Docker's container management platform but not sure where to start, we can help you take the first step. The Migrating Traditional Applications (MTA) Program, designed for IT operations teams, helps enterprises modernize existing legacy .NET Windows or Java Linux applications in just five days, without modifying source code or re-architecting the application, with Docker and Cisco Advanced Services. The results have been incredible, with customers saving over 50% on infrastructure costs and using those savings to unlock innovative IT projects. What are you waiting for? Let us help you get started with Docker EE today.
For Cisco resellers looking to deliver Docker EE to your customers, we can help you get started and get trained: go to Cisco Sales Central.
For more information on Docker EE and Cisco Solutions:

Watch the webinar: Docker and Cisco – Integrated Container Solutions for the Enterprise
Contact sales to learn more about getting started with your MTA pilot

 


Quelle: https://blog.docker.com/feed/

Introducing client-go version 6

The Kubernetes API server exposes a REST interface consumable by any client. client-go is the official client library for the Go programming language. It is used both internally by Kubernetes itself (for example, inside kubectl) as well as by numerous external consumers: operators like the etcd-operator or prometheus-operator; higher level frameworks like KubeLess and OpenShift; and many more.

The version 6 update to client-go adds support for Kubernetes 1.9, allowing access to the latest Kubernetes features. While the changelog contains all the gory details, this blog post highlights the most prominent changes and intends to guide you on how to upgrade from version 5.

This blog post is one of a number of efforts to make client-go more accessible to third party consumers. Easier access is a joint effort by a number of people from numerous companies, all meeting in the #client-go-docs channel of the Kubernetes Slack. We are happy to hear feedback and ideas for further improvement, and of course appreciate anybody who wants to contribute.

API group changes

The following API group promotions are part of Kubernetes 1.9:

Workload objects (Deployments, DaemonSets, ReplicaSets, and StatefulSets) have been promoted to the apps/v1 API group in Kubernetes 1.9. client-go follows this transition and allows developers to use the latest version by importing the k8s.io/api/apps/v1 package instead of k8s.io/api/apps/v1beta1 and by using Clientset.AppsV1().

Admission Webhook Registration has been promoted to the admissionregistration.k8s.io/v1beta1 API group in Kubernetes 1.9. The former ExternalAdmissionHookConfiguration type has been replaced by the incompatible ValidatingWebhookConfiguration and MutatingWebhookConfiguration types. Moreover, the webhook admission payload type AdmissionReview in admission.k8s.io has been promoted to v1beta1. Note that versioned objects are now passed to webhooks. Refer to the admission webhook documentation for details.
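To illustrate the first of these promotions, here is a short sketch (added for illustration, not taken from the changelog) that lists Deployments through the GA group:

  import (
      "fmt"

      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
  )

  func listDeployments(clientset *kubernetes.Clientset) error {
      // Clientset.AppsV1() replaces AppsV1beta1(); the types now come from k8s.io/api/apps/v1.
      deployments, err := clientset.AppsV1().Deployments(metav1.NamespaceDefault).List(metav1.ListOptions{})
      if err != nil {
          return err
      }
      for _, d := range deployments.Items {
          fmt.Println(d.Name)
      }
      return nil
  }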
Validation for CustomResources

In Kubernetes 1.8 we introduced CustomResourceDefinitions (CRD) pre-persistence schema validation as an alpha feature. With 1.9, the feature got promoted to beta and will be enabled by default. As a client-go user, you will find the API types at k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1.

The OpenAPI v3 schema can be defined in the CRD spec as:

  apiVersion: apiextensions.k8s.io/v1beta1
  kind: CustomResourceDefinition
  metadata: ...
  spec:
    ...
    validation:
      openAPIV3Schema:
        properties:
          spec:
            properties:
              version:
                type: string
                enum:
                - "v1.0.0"
                - "v1.0.1"
              replicas:
                type: integer
                minimum: 1
                maximum: 10

The schema in the above CRD applies the following validations to the instance:

spec.version must be a string and must be either "v1.0.0" or "v1.0.1".
spec.replicas must be an integer and must have a minimum value of 1 and a maximum value of 10.

A CustomResource with invalid values for spec.version (v1.0.2) and spec.replicas (15) will be rejected:

  apiVersion: mygroup.example.com/v1
  kind: App
  metadata:
    name: example-app
  spec:
    version: "v1.0.2"
    replicas: 15

  $ kubectl create -f app.yaml
  The App "example-app" is invalid: []: Invalid value: map[string]interface {}{"apiVersion":"mygroup.example.com/v1", "kind":"App", "metadata":map[string]interface {}{"creationTimestamp":"2017-08-31T20:52:54Z", "uid":"5c674651-8e8e-11e7-86ad-f0761cb232d1", "selfLink":"", "clusterName":"", "name":"example-app", "namespace":"default", "deletionTimestamp":interface {}(nil), "deletionGracePeriodSeconds":(*int64)(nil)}, "spec":map[string]interface {}{"replicas":15, "version":"v1.0.2"}}:
  validation failure list:
  spec.replicas in body should be less than or equal to 10
  spec.version in body should be one of [v1.0.0 v1.0.1]

Note that with Admission Webhooks, Kubernetes 1.9 provides another beta feature to validate objects before they are created or updated. Starting with 1.9, these webhooks also allow mutation of objects (for example, to set defaults or to inject values). Of course, webhooks work with CRDs as well. Moreover, webhooks can be used to implement validations that are not easily expressible with CRD validation. Note that webhooks are harder to implement than CRD validation, so for many purposes, CRD validation is the right tool.

Creating namespaced informers

Often objects in one namespace or only with certain labels are to be processed in a controller. Informers now allow you to tweak the ListOptions used to query the API server to list and watch objects. Uninitialized objects (for consumption by initializers) can be made visible by setting IncludeUninitialized to true. All this can be done using the new NewFilteredSharedInformerFactory constructor for shared informers:

  import "k8s.io/client-go/informers"
  ...
  sharedInformers := informers.NewFilteredSharedInformerFactory(
      client,
      30*time.Minute,
      "some-namespace",
      func(opt *metav1.ListOptions) {
          opt.LabelSelector = "foo=bar"
      },
  )

Note that the corresponding lister will only know about the objects matching the namespace and the given ListOptions. Note that the same restrictions apply to a List or Watch call on a client.

This production code example from cert-manager demonstrates how namespace informers can be used in real code.
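As a further sketch (not from the original post), a filtered factory like the one above is typically consumed by starting it, waiting for its caches to sync, and then reading from the namespaced lister:

  import (
      "fmt"

      "k8s.io/apimachinery/pkg/labels"
      "k8s.io/client-go/informers"
      "k8s.io/client-go/tools/cache"
  )

  func listFilteredDeployments(sharedInformers informers.SharedInformerFactory) error {
      deploymentInformer := sharedInformers.Apps().V1().Deployments()
      // Instantiate the shared informer before Start so the factory knows to run it.
      informer := deploymentInformer.Informer()

      stop := make(chan struct{})
      defer close(stop)
      sharedInformers.Start(stop)

      // Block until the informer cache has been populated from the API server.
      if !cache.WaitForCacheSync(stop, informer.HasSynced) {
          return fmt.Errorf("failed to sync informer cache")
      }

      // The lister only sees Deployments in "some-namespace" that match foo=bar.
      deployments, err := deploymentInformer.Lister().Deployments("some-namespace").List(labels.Everything())
      if err != nil {
          return err
      }
      for _, d := range deployments {
          fmt.Println(d.Name)
      }
      return nil
  }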
Polymorphic scale client

Historically, only types in the extensions API group would work with autogenerated Scale clients. Furthermore, different API groups use different Scale types for their /scale subresources. To remedy these issues, k8s.io/client-go/scale provides a polymorphic scale client to scale different resources in different API groups in a coherent way:

  import (
      apimeta "k8s.io/apimachinery/pkg/api/meta"
      discocache "k8s.io/client-go/discovery/cached"
      "k8s.io/client-go/discovery"
      "k8s.io/client-go/dynamic"
      "k8s.io/client-go/scale"
  )
  ...
  cachedDiscovery := discocache.NewMemCacheClient(client.Discovery())
  restMapper := discovery.NewDeferredDiscoveryRESTMapper(
      cachedDiscovery,
      apimeta.InterfacesForUnstructured,
  )
  scaleKindResolver := scale.NewDiscoveryScaleKindResolver(
      client.Discovery(),
  )
  scaleClient, err := scale.NewForConfig(
      client, restMapper,
      dynamic.LegacyAPIPathResolverFunc,
      scaleKindResolver,
  )

  scale, err := scaleClient.Scales("default").Get(groupResource, "foo")

The returned scale object is generic and is exposed as the autoscaling/v1.Scale object. It is backed by an internal Scale type, with conversions defined to and from all the special Scale types in the API groups supporting scaling. We plan to extend this to CustomResources in 1.10.

If you're implementing support for the scale subresource, we recommend that you expose the autoscaling/v1.Scale object.

Type-safe DeepCopy

Deeply copying an object formerly required a call to Scheme.Copy(Object), with the notable disadvantage of losing type safety. A typical piece of code from client-go version 5 required type casting:

  newObj, err := runtime.NewScheme().Copy(node)
  if err != nil {
      return fmt.Errorf("failed to copy node %v: %s", node, err)
  }
  newNode, ok := newObj.(*v1.Node)
  if !ok {
      return fmt.Errorf("failed to type-assert node %v", newObj)
  }

Thanks to k8s.io/code-generator, Copy has now been replaced by a type-safe DeepCopy method living on each object, allowing you to simplify code significantly both in terms of volume and API error surface:

  newNode := node.DeepCopy()

No error handling is necessary: this call never fails. If and only if the node is nil does DeepCopy() return nil.

To copy runtime.Objects there is an additional DeepCopyObject() method in the runtime.Object interface.

With the old method gone for good, clients need to update their copy invocations accordingly.

Code generation and CustomResources

Using client-go's dynamic client to access CustomResources is discouraged and superseded by type-safe code using the generators in k8s.io/code-generator. Check out the Deep Dive on the OpenShift blog to learn about using code generation with client-go.

Comment Blocks

You can now place tags in the comment block just above a type or function, or in the second block above. There is no distinction anymore between these two comment blocks. This used to be a source of subtle errors when using the generators:

  // second block above
  // +k8s:some-tag

  // first block above
  // +k8s:another-tag
  type Foo struct {}

Custom Client Methods

You can now use extended tag definitions to create custom verbs. This lets you expand beyond the verbs defined by HTTP. This opens the door to higher levels of customization.

For example, this block leads to the generation of the method UpdateScale(s *autoscaling.Scale) (*autoscaling.Scale, error):

  // genclient:method=UpdateScale,verb=update,subresource=scale,input=k8s.io/kubernetes/pkg/apis/autoscaling.Scale,result=k8s.io/kubernetes/pkg/apis/autoscaling.Scale

Resolving Golang Naming Conflicts

In more complex API groups it's possible for Kinds, the group name, the Go package name, and the Go group alias name to conflict. This was not handled correctly prior to 1.9.
The following tags resolve naming conflicts and make the generated code prettier:

  // +groupName=example2.example.com
  // +groupGoName=SecondExample

These are usually in the doc.go file of an API package. The first is used as the CustomResource group name when RESTfully speaking to the API server using HTTP. The second is used in the generated Golang code (for example, in the clientset) to access the group version:

  clientset.SecondExampleV1()

It's finally possible to have dots in Go package names. In this section's example, you would put the groupName snippet into the pkg/apis/example2.example.com directory of your project.

Example projects

Kubernetes 1.9 includes a number of example projects which can serve as a blueprint for your own projects:

k8s.io/sample-apiserver is a simple user-provided API server that is integrated into a cluster via API aggregation.
k8s.io/sample-controller is a full-featured controller (also called an operator) with shared informers and a workqueue to process created, changed, or deleted objects. It is based on CustomResourceDefinitions and uses k8s.io/code-generator to generate deepcopy functions, typed clientsets, informers, and listers.

Vendoring

In order to update from the previous version 5 to version 6 of client-go, the library itself as well as certain third-party dependencies must be updated. Previously, this process had been tedious due to the fact that a lot of code got refactored or relocated within the existing package layout across releases. Fortunately, far less code had to move in the latest version, which should ease the upgrade procedure for most users.

State of the published repositories

In the past k8s.io/client-go, k8s.io/api, and k8s.io/apimachinery were updated infrequently. Tags (for example, v4.0.0) were created quite some time after the Kubernetes releases. With the 1.9 release we resumed running a nightly bot that updates all the repositories for public consumption, even before manual tagging. This includes the branches:

master
release-1.8 / release-5.0
release-1.9 / release-6.0

Kubernetes tags (for example, v1.9.1-beta1) are also applied automatically to the published repositories, prefixed with kubernetes- (for example, kubernetes-1.9.1-beta1). These tags have limited test coverage, but can be used by early adopters of client-go and the other libraries. Moreover, they help to vendor the correct version of k8s.io/api and k8s.io/apimachinery. Note that we only create a v6.0.3-like semantic versioning tag on k8s.io/client-go. The corresponding tag for k8s.io/api and k8s.io/apimachinery is kubernetes-1.9.3.

Also note that only these tags correspond to tested releases of Kubernetes. If you depend on a release branch, e.g., release-1.9, your client is running on unreleased Kubernetes code.

State of vendoring of client-go

In general, the list of which dependencies to vendor is automatically generated and written to the file Godeps/Godeps.json. Only the revisions listed there are tested. This means especially that we do not and cannot test the code-base against master branches of our dependencies. This puts us in the following situation depending on the vendoring tool used:

godep reads Godeps/Godeps.json by running godep restore from k8s.io/client-go in your GOPATH. Then use godep save to vendor in your project. godep will choose the correct versions from your GOPATH.
glide reads Godeps/Godeps.json automatically from its dependencies, including from k8s.io/client-go, both on init and on update. Hence, glide should be mostly automatic as long as there are no conflicts.
dep does not currently respect Godeps/Godeps.json in a consistent way, especially not on updates. It is crucial to specify client-go dependencies manually as constraints or overrides, also for non-k8s.io/* dependencies. Without those, dep simply chooses the dependency master branches, which can cause problems as they are updated frequently.

The Kubernetes and golang/dep communities are aware of the problems [issue #1124, issue #1236] and are working together on solutions. Until then, special care must be taken.

Please see client-go's INSTALL.md for more details.

Updating dependencies – golang/dep

Even with the deficiencies of golang/dep today, dep is slowly becoming the de-facto standard in the Go ecosystem. With the necessary care and the awareness of the missing features, dep can be (and is!) used successfully. Here's a demonstration of how to update a project from client-go 5 to the latest version 6 using dep.

(If you are still running client-go version 4 and want to play it safe by not skipping a release, now is a good time to check out this excellent blog post describing how to upgrade to version 5, put together by our friends at Heptio.)

Before starting, it is important to understand that client-go depends on two other Kubernetes projects: k8s.io/apimachinery and k8s.io/api. In addition, if you are using CRDs, you probably also depend on k8s.io/apiextensions-apiserver for the CRD client. The first exposes lower-level API mechanics (such as schemes, serialization, and type conversion), the second holds API definitions, and the third provides APIs related to CustomResourceDefinitions. In order for client-go to operate correctly, it needs to have its companion libraries vendored in correspondingly matching versions. Each library repository provides a branch named release-<version>, where <version> refers to a particular Kubernetes version; for client-go version 6, it is imperative to refer to the release-1.9 branch on each repository.

Assuming the latest version 5 patch release of client-go is being vendored through dep, the Gopkg.toml manifest file should look something like this (possibly using branches instead of versions):

  [[constraint]]
    name = "k8s.io/api"
    version = "kubernetes-1.8.1"

  [[constraint]]
    name = "k8s.io/apimachinery"
    version = "kubernetes-1.8.1"

  [[constraint]]
    name = "k8s.io/apiextensions-apiserver"
    version = "kubernetes-1.8.1"

  [[constraint]]
    name = "k8s.io/client-go"
    version = "5.0.1"

Note that some of the libraries could be missing if they are not actually needed by the client.

Upgrading to client-go version 6 means bumping the version and tag identifiers as follows:

  [[constraint]]
    name = "k8s.io/api"
    version = "kubernetes-1.9.0"

  [[constraint]]
    name = "k8s.io/apimachinery"
    version = "kubernetes-1.9.0"

  [[constraint]]
    name = "k8s.io/apiextensions-apiserver"
    version = "kubernetes-1.9.0"

  [[constraint]]
    name = "k8s.io/client-go"
    version = "6.0.0"

The result of the upgrade can be found here.

A note of caution: dep cannot capture the complete set of dependencies in a reliable and reproducible fashion as described above. This means that for a 100% future-proof project you have to add constraints (or even overrides) for many other packages listed in client-go's Godeps/Godeps.json. Be prepared to add them if something breaks.
We are working with the golang/dep community to make this an easier and smoother experience.

Finally, we need to tell dep to upgrade to the specified versions by executing dep ensure. If everything goes well, the output of the command invocation should be empty, with the only indication that it was successful being a number of updated files inside the vendor folder.

If you are using CRDs, you probably also use code-generation. The following block for Gopkg.toml will add the required code-generation packages to your project:

  required = [
    "k8s.io/code-generator/cmd/client-gen",
    "k8s.io/code-generator/cmd/conversion-gen",
    "k8s.io/code-generator/cmd/deepcopy-gen",
    "k8s.io/code-generator/cmd/defaulter-gen",
    "k8s.io/code-generator/cmd/informer-gen",
    "k8s.io/code-generator/cmd/lister-gen",
  ]

  [[constraint]]
    branch = "kubernetes-1.9.0"
    name = "k8s.io/code-generator"

Whether you would also like to prune unneeded packages (such as test files) through dep or commit the changes into the VCS at this point is up to you — but from an upgrade perspective, you should now be ready to harness all the fancy new features that Kubernetes 1.9 brings through client-go.
Quelle: kubernetes

Extensible Admission is Beta

In this post we review a feature, available in the Kubernetes API server, that allows you to implement arbitrary control decisions and which has matured considerably in Kubernetes 1.9.

The admission stage of API server processing is one of the most powerful tools for securing a Kubernetes cluster by restricting the objects that can be created, but it has always been limited to compiled code. In 1.9, we promoted webhooks for admission to beta, allowing you to leverage admission from outside the API server process.

What is Admission?

Admission is the phase of handling an API server request that happens before a resource is persisted, but after authorization. Admission gets access to the same information as authorization (user, URL, etc.) and the complete body of an API request (for most requests).

The admission phase is composed of individual plugins, each of which is narrowly focused and has semantic knowledge of what it is inspecting. Examples include: PodNodeSelector (influences scheduling decisions), PodSecurityPolicy (prevents escalating containers), and ResourceQuota (enforces resource allocation per namespace).

Admission is split into two phases:

Mutation, which allows modification of the body content itself as well as rejection of an API request.
Validation, which allows introspection queries and rejection of an API request.

An admission plugin can be in both phases, but all mutation happens before validation.

Mutation

The mutation phase of admission allows modification of the resource content before it is persisted. Because the same field can be mutated multiple times while in the admission chain, the order of the admission plugins in the mutation phase matters.

One example of a mutating admission plugin is the `PodNodeSelector` plugin, which uses an annotation on a namespace, `namespace.annotations["scheduler.alpha.kubernetes.io/node-selector"]`, to find a label selector and add it to the `pod.spec.nodeSelector` field. This positively restricts which nodes the pods in a particular namespace can land on, as opposed to taints, which provide negative restriction (also with an admission plugin).

Validation

The validation phase of admission allows the enforcement of invariants on particular API resources. The validation phase runs after all mutators finish to ensure that the resource isn't going to change again.

One example of a validation admission plugin is also the `PodNodeSelector` plugin, which ensures that all pods' `spec.nodeSelector` fields are constrained by the node selector restrictions on the namespace. Even if a mutating admission plugin tries to change the `spec.nodeSelector` field after the PodNodeSelector runs in the mutating chain, the PodNodeSelector in the validating chain prevents the API resource from being created because it fails validation.

What are admission webhooks?

Admission webhooks allow a Kubernetes installer or a cluster-admin to add mutating and validating admission plugins to the admission chain of `kube-apiserver`, as well as any extension apiserver based on k8s.io/apiserver 1.9 (like metrics, service-catalog, or kube-projects), without recompiling them. Both kinds of admission webhooks run at the end of their respective chains and have the same powers and limitations as compiled admission plugins.

What are they good for?

Webhook admission plugins allow for mutation and validation of any resource on any API server, so the possible applications are vast. Some common use-cases include:
Mutation of resources like pods. Istio has talked about doing this to inject side-car containers into pods. You could also write a plugin which forcefully resolves image tags into image SHAs.
Name restrictions. On multi-tenant systems, reserving namespaces has emerged as a use-case.
Complex CustomResource validation. Because the entire object is visible, a clever admission plugin can perform complex validation on dependent fields (A requires B) and even external resources (compare to LimitRanges).
Security response. If you forced image tags into image SHAs, you could write an admission plugin that prevents certain SHAs from running.

Registration

Webhook admission plugins of both types are registered in the API, and all API servers (kube-apiserver and all extension API servers) share a common config for them. During the registration process, a webhook admission plugin describes:

How to connect to the webhook admission server
How to verify the webhook admission server (Is it really the server I expect?)
Where to send the data at that server (which URL path)
Which resources and which HTTP verbs it will handle
What an API server should do on connection failures (for example, if the admission webhook server goes down)

 1  apiVersion: admissionregistration.k8s.io/v1beta1
 2  kind: ValidatingWebhookConfiguration
 3  metadata:
 4    name: namespacereservations.admission.online.openshift.io
 5  webhooks:
 6  - name: namespacereservations.admission.online.openshift.io
 7    clientConfig:
 8      service:
 9        namespace: default
10        name: kubernetes
11        path: /apis/admission.online.openshift.io/v1alpha1/namespacereservations
12      caBundle: KUBE_CA_HERE
13    rules:
14    - operations:
15      - CREATE
16      apiGroups:
17      - ""
18      apiVersions:
19      - "*"
20      resources:
21      - namespaces
22    failurePolicy: Fail

Line 6: `name` – the name for the webhook itself. For mutating webhooks, these are sorted to provide ordering.
Line 7: `clientConfig` – provides information about how to connect to, trust, and send data to the webhook admission server.
Line 13: `rules` – describe when an API server should call this admission plugin. In this case, only for creates of namespaces. You can specify any resource here, so specifying creates of `serviceinstances.servicecatalog.k8s.io` is also legal.
Line 22: `failurePolicy` – says what to do if the webhook admission server is unavailable. Choices are "Ignore" (fail open) or "Fail" (fail closed). Failing open makes for unpredictable behavior for all clients.

Authentication and trust

Because webhook admission plugins have a lot of power (remember, they get to see the API resource content of any request sent to them and, for mutating plugins, might modify it), it is important to consider:

How individual API servers verify their connection to the webhook admission server
How the webhook admission server authenticates precisely which API server is contacting it
Whether that particular API server has authorization to make the request

There are three major categories of connection:

From kube-apiserver or extension-apiservers to externally hosted admission webhooks (webhooks not hosted in the cluster)
From kube-apiserver to self-hosted admission webhooks
From extension-apiservers to self-hosted admission webhooks

To support these categories, the webhook admission plugins accept a kubeconfig file which describes how to connect to individual servers.
For interacting with externally hosted admission webhooks, there is really no alternative to configuring that file manually, since the authentication/authorization and access paths are owned by the server you're hooking to.

For the self-hosted category, a cleverly built webhook admission server and topology can take advantage of the safe defaulting built into the admission plugin and have a secure, portable, zero-config topology that works from any API server.

Simple, secure, portable, zero-config topology

If you build your webhook admission server to also be an extension API server, it becomes possible to aggregate it as a normal API server. This has a number of advantages:

Your webhook becomes available like any other API under the default kube-apiserver service `kubernetes.default.svc` (e.g. https://kubernetes.default.svc/apis/admission.example.com/v1/mymutatingadmissionreviews). Among other benefits, you can test using `kubectl`.
Your webhook automatically (without any config) makes use of the in-cluster authentication and authorization provided by kube-apiserver. You can restrict access to your webhook with normal RBAC rules.
Your extension API servers and kube-apiserver automatically (without any config) make use of their in-cluster credentials to communicate with the webhook.
Extension API servers do not leak their service account token to your webhook because they go through kube-apiserver, which is a secure front proxy.

(Diagram source: https://drive.google.com/a/redhat.com/file/d/12nC9S2fWCbeX_P8nrmL6NgOSIha4HDNp)

In short: a secure topology makes use of all security mechanisms of API server aggregation and requires no additional configuration.

Other topologies are possible but require additional manual configuration as well as a lot of effort to create a secure setup, especially when extension API servers like service catalog come into play. The topology above is zero-config and portable to every Kubernetes cluster.

How do I write a webhook admission server?

Writing a full server complete with authentication and authorization can be intimidating. To make it easier, there are projects based on Kubernetes 1.9 that provide a library for building your webhook admission server in 200 lines or less. Take a look at the generic-admission-apiserver and the kubernetes-namespace-reservation projects for the library and an example of how to build your own secure and portable webhook admission server. (A bare-bones handler sketch also appears at the end of this post.)

With the admission webhooks introduced in 1.9 we've made Kubernetes even more adaptable to your needs. We hope this work, driven by both Red Hat and Google, will enable many more workloads and support ecosystem components. (Istio is one example.) Now is a good time to give it a try! If you're interested in giving feedback or contributing to this area, join us in SIG API Machinery.
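To give a feel for the shape of such a server, here is a bare-bones sketch of a validating handler (an illustration only, not taken from the projects above; the "kube-" name check and the TLS file paths are hypothetical, and a real server should handle authentication and authorization as discussed earlier):

  package main

  import (
      "encoding/json"
      "net/http"
      "strings"

      admissionv1beta1 "k8s.io/api/admission/v1beta1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // validate rejects namespaces whose names start with "kube-" (a hypothetical policy).
  func validate(w http.ResponseWriter, r *http.Request) {
      review := admissionv1beta1.AdmissionReview{}
      if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
          http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
          return
      }

      response := &admissionv1beta1.AdmissionResponse{UID: review.Request.UID, Allowed: true}
      if review.Request.Kind.Kind == "Namespace" && strings.HasPrefix(review.Request.Name, "kube-") {
          response.Allowed = false
          response.Result = &metav1.Status{Message: "namespace names starting with kube- are reserved"}
      }

      // Echo the review back with the response filled in.
      review.Response = response
      w.Header().Set("Content-Type", "application/json")
      json.NewEncoder(w).Encode(review)
  }

  func main() {
      http.HandleFunc("/validate", validate)
      // The API server only talks to webhooks over TLS; the certificate paths are placeholders.
      http.ListenAndServeTLS(":8443", "/path/to/tls.crt", "/path/to/tls.key", nil)
  }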
Quelle: kubernetes

Using Your Own Private Registry with Docker Enterprise Edition

One of the things that makes Docker really cool, particularly compared to using virtual machines, is how easy it is to move around Docker images. If you’ve already been using Docker, you’ve almost certainly pulled images from Docker Hub. Docker Hub is Docker’s cloud-based registry service and has tens of thousands of Docker images to choose from. If you’re developing your own software and creating your own Docker images though, you’ll want your own private Docker registry. This is particularly true if you have images with proprietary licenses, or if you have a complex continuous integration (CI) process for your build system.
Docker Enterprise Edition includes Docker Trusted Registry (DTR), a highly available registry with secure image management capabilities which was built to run either inside of your own data center or on your own cloud-based infrastructure. In the next few weeks, we’ll go over how DTR is a critical component of delivering a secure, repeatable and consistent software supply chain.  You can get started with it today through our free hosted demo or by downloading and installing the free 30-day trial. The steps to get started with your own installation are below.
Setting Up Docker Enterprise Edition
Docker Trusted Registry runs on top of Universal Control Plane (UCP), so to begin let’s install a single-node cluster. If you’ve already got your own UCP cluster, you can skip this step.  On your docker host, run the command:
  # Pull and install UCP
  docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock --name ucp docker/ucp:latest install
Once UCP is up and running, there are a few more things you should do before you install DTR. Open up your browser against the UCP instance you just installed. There should be a link to it at the end of your log output. If you have already have a Docker Enterprise Edition license, go ahead and upload it through the UI. If you don’t, visit the Docker Store and pick up a free, 30-day trial.
Once you’ve got licensing squared away, you’re probably going to want to change the port which UCP is running on. Since this is a single node cluster, DTR and UCP are going to want to use the same TCP ports for running their web services. If you’ve got a UCP swarm with more than one node, this probably isn’t a problem because DTR will look for a node which has the required free ports. Inside of UCP, click on Admin Settings -> Cluster Configuration and change the Controller Port to something like 5443.
Installing DTR
We’re going to install a simple, single-node instance of Docker Trusted Registry.  If you were setting up your DTR for production use, you would likely set things up in High Availability (HA) mode which would require a different type of storage such as a cloud-based object store, or NFS. Since this is a single-node instance, we’re going to stick with the default local storage.
First we need to pull the DTR bootstrap image. The bootstrap image is a tiny, self-contained installer which connects to UCP and sets up all of the containers, volumes, and logical networks required to get DTR up and running.
Use the command:
  # Pull and run the DTR bootstrapper
  docker run -it --rm docker/dtr:latest install --ucp-insecure-tls
NOTE: Both UCP and DTR by default come with their own certs which won’t be recognized by your system. If you’ve set up UCP with TLS certs which are trusted by your system, you can omit the `--ucp-insecure-tls` option. Alternatively, you can use the `--ucp-ca` option which will let you specify the UCP CA certificate directly.
The DTR bootstrap image should then ask you for a couple of settings, such as the URL of your UCP installation and your UCP admin username and password.  It should only take a minute or two to pull all of the DTR images and set everything up.
Keeping Everything Secure
Once everything is up and running, you’re ready to push and pull images to and from
the registry.  Before we do that step though, let’s set up our TLS certificates so that we can securely talk to DTR.
On Linux, we can use these commands (just make certain you change DTR_HOSTNAME to reflect the DTR we just set up):
  # Pull the CA certificate from DTR (you can use wget if curl is unavailable)
  DTR_HOSTNAME=<Your DTR hostname>
  curl -k https://${DTR_HOSTNAME}/ca > ${DTR_HOSTNAME}.crt
  sudo mkdir -p /etc/docker/certs.d/${DTR_HOSTNAME}
  sudo cp ${DTR_HOSTNAME}.crt /etc/docker/certs.d/${DTR_HOSTNAME}/
  # Restart the docker daemon (use `sudo service docker restart` on Ubuntu 14.04)
  sudo systemctl restart docker
On Docker for Mac and Windows, we’ll set up our client a little bit differently. Go into Settings -> Daemon and, in the Insecure Registries section, enter your DTR hostname. Click Apply; your docker daemon should restart and you should be good to go.
Pushing and Pulling Images
We now need to set up a repository to hold an image. This is a little bit different than Docker Hub which automatically creates a repository if one doesn’t exist when you do a
docker push. To create the repository, point your browser to https://<Your DTR hostname> and
then sign-in with your admin credentials when prompted. If you added a license to UCP, that
license will automatically have been picked up by DTR. If not, make certain you upload
your license now.
Once you’re in, click on the ‘New Repository` button and create a new repository.
We’ll create a repo to hold Alpine linux, so type `alpine` in the name field, and click
`Save` (it’s labelled `Create` in DTR 2.5 and newer).
Now let’s go back to our shell and type the commands:
  # Pull the latest version of Alpine Linux
  docker pull alpine:latest
  # Sign in to your new DTR instance
  docker login <Your DTR hostname>
  # Tag Alpine to be able to push it to your DTR
  docker tag alpine:latest <Your DTR hostname>/admin/alpine:latest
  # Push the image to DTR
  docker push <Your DTR hostname>/admin/alpine:latest
And that’s it!  We just pulled a copy of the latest Alpine Linux, re-tagged it so that we could store it inside of DTR, and then pushed it to our private registry.  If you want to pull that image to a different Docker engine, set up your DTR certs as shown above, and issue the command:
   # Pull the image from DTR
   docker pull <Your DTR hostname>/admin/alpine:latest
DTR has a lot of great image management features built right in such as image caching, mirroring, scanning, signing, and even automated supply chain policies.  We’ll leave these to future blog posts which we can explore in more detail.


To learn more about Docker Enterprise Edition:

Visit the website and view pricing
Read more about Docker EE customers and the benefits they’re seeing
Don’t have time to install and configure Docker EE? Register for the free hosted trial to test drive Docker EE in just a few minutes

Quelle: https://blog.docker.com/feed/

Introducing Container Storage Interface (CSI) Alpha for Kubernetes

One of the key differentiators for Kubernetes has been a powerful volume plugin system that enables many different types of storage systems to:

Automatically create storage when required.
Make storage available to containers wherever they're scheduled.
Automatically delete the storage when no longer needed.

Adding support for new storage systems to Kubernetes, however, has been challenging. Kubernetes 1.9 introduces an alpha implementation of the Container Storage Interface (CSI) which makes installing new volume plugins as easy as deploying a pod. It also enables third-party storage providers to develop solutions without the need to add to the core Kubernetes codebase.

Because the feature is alpha in 1.9, it must be explicitly enabled. Alpha features are not recommended for production usage, but are a good indication of the direction the project is headed (in this case, towards a more extensible and standards based Kubernetes storage ecosystem).

Why Kubernetes CSI?

Kubernetes volume plugins are currently "in-tree", meaning they're linked, compiled, built, and shipped with the core Kubernetes binaries. Adding support for a new storage system to Kubernetes (a volume plugin) requires checking code into the core Kubernetes repository. But aligning with the Kubernetes release process is painful for many plugin developers.

The existing Flex Volume plugin attempted to address this pain by exposing an exec based API for external volume plugins. Although it enables third party storage vendors to write drivers out-of-tree, deploying the third party driver files requires access to the root filesystem of node and master machines.

In addition to being difficult to deploy, Flex did not address the pain of plugin dependencies: volume plugins tend to have many external requirements (on mount and filesystem tools, for example). These dependencies are assumed to be available on the underlying host OS, which is often not the case (and installing them requires access to the root filesystem of the node machine).

CSI addresses all of these issues by enabling storage plugins to be developed out-of-tree, containerized, deployed via standard Kubernetes primitives, and consumed through the Kubernetes storage primitives users know and love (PersistentVolumeClaims, PersistentVolumes, StorageClasses).

What is CSI?

The goal of CSI is to establish a standardized mechanism for Container Orchestration Systems (COs) to expose arbitrary storage systems to their containerized workloads. The CSI specification emerged from cooperation between community members from various COs, including Kubernetes, Mesos, Docker, and Cloud Foundry. The specification is developed independently of Kubernetes and is maintained at https://github.com/container-storage-interface/spec/blob/master/spec.md.

Kubernetes v1.9 exposes an alpha implementation of the CSI specification enabling CSI compatible volume drivers to be deployed on Kubernetes and consumed by Kubernetes workloads.

How do I deploy a CSI driver on a Kubernetes Cluster?

CSI plugin authors will provide their own instructions for deploying their plugin on Kubernetes.

How do I use a CSI Volume?

Assuming a CSI storage plugin is already deployed on your cluster, you can use it through the familiar Kubernetes storage primitives: PersistentVolumeClaims, PersistentVolumes, and StorageClasses.

CSI is an alpha feature in Kubernetes v1.9.
To enable it, set the following flags:

API server binary:
  --feature-gates=CSIPersistentVolume=true
  --runtime-config=storage.k8s.io/v1alpha1=true
API server binary and kubelet binaries:
  --feature-gates=MountPropagation=true
  --allow-privileged=true

Dynamic Provisioning

You can enable automatic creation/deletion of volumes for CSI storage plugins that support dynamic provisioning by creating a StorageClass pointing to the CSI plugin.

The following StorageClass, for example, enables dynamic creation of "fast-storage" volumes by a CSI volume plugin called "com.example.team/csi-driver".

  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: fast-storage
  provisioner: com.example.team/csi-driver
  parameters:
    type: pd-ssd

To trigger dynamic provisioning, create a PersistentVolumeClaim object. The following PersistentVolumeClaim, for example, triggers dynamic provisioning using the StorageClass above.

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: my-request-for-storage
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
    storageClassName: fast-storage

When volume provisioning is invoked, the parameter "type: pd-ssd" is passed to the CSI plugin "com.example.team/csi-driver" via a "CreateVolume" call. In response, the external volume plugin provisions a new volume and then automatically creates a PersistentVolume object to represent the new volume. Kubernetes then binds the new PersistentVolume object to the PersistentVolumeClaim, making it ready to use.

If the "fast-storage" StorageClass is marked default, there is no need to include the storageClassName in the PersistentVolumeClaim; it will be used by default.

Pre-Provisioned Volumes

You can always expose a pre-existing volume in Kubernetes by manually creating a PersistentVolume object to represent the existing volume. The following PersistentVolume, for example, exposes a volume with the name "existingVolumeName" belonging to a CSI storage plugin called "com.example.team/csi-driver".

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: my-manually-created-pv
  spec:
    capacity:
      storage: 5Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
    csi:
      driver: com.example.team/csi-driver
      volumeHandle: existingVolumeName
      readOnly: false

Attaching and Mounting

You can reference a PersistentVolumeClaim that is bound to a CSI volume in any pod or pod template.

  kind: Pod
  apiVersion: v1
  metadata:
    name: my-pod
  spec:
    containers:
      - name: my-frontend
        image: dockerfile/nginx
        volumeMounts:
        - mountPath: "/var/www/html"
          name: my-csi-volume
    volumes:
      - name: my-csi-volume
        persistentVolumeClaim:
          claimName: my-request-for-storage

When a pod referencing a CSI volume is scheduled, Kubernetes will trigger the appropriate operations against the external CSI plugin (ControllerPublishVolume, NodePublishVolume, etc.) to ensure the specified volume is attached, mounted, and ready to use by the containers in the pod.

For more details please see the CSI implementation design doc and documentation.

How do I create a CSI driver?

Kubernetes is as minimally prescriptive on the packaging and deployment of a CSI Volume Driver as possible. The minimum requirements for deploying a CSI Volume Driver on Kubernetes are documented here.

The minimum requirements document also contains a section outlining the suggested mechanism for deploying an arbitrary containerized CSI driver on Kubernetes.
This mechanism can be used by a Storage Provider to simplify deployment of containerized CSI compatible volume drivers on Kubernetes. As part of this recommended deployment process, the Kubernetes team provides the following sidecar (helper) containers:

external-attacher: Sidecar container that watches Kubernetes VolumeAttachment objects and triggers ControllerPublish and ControllerUnpublish operations against a CSI endpoint.
external-provisioner: Sidecar container that watches Kubernetes PersistentVolumeClaim objects and triggers CreateVolume and DeleteVolume operations against a CSI endpoint.
driver-registrar: Sidecar container that registers the CSI driver with kubelet (in the future) and adds the driver's custom NodeId (retrieved via a GetNodeID call against the CSI endpoint) to an annotation on the Kubernetes Node API object.

Storage vendors can build Kubernetes deployments for their plugins using these components, while leaving their CSI driver completely unaware of Kubernetes.

Where can I find CSI drivers?

CSI drivers are developed and maintained by third parties. You can find example CSI drivers here, but these are provided purely for illustrative purposes and are not intended to be used for production workloads.

What about Flex?

The Flex Volume plugin exists as an exec based mechanism to create "out-of-tree" volume plugins. Although it has some drawbacks (mentioned above), the Flex volume plugin coexists with the new CSI Volume plugin. SIG Storage will continue to maintain the Flex API so that existing third-party Flex drivers (already deployed in production clusters) continue to work. In the future, new volume features will only be added to CSI, not Flex.

What will happen to the in-tree volume plugins?

Once CSI reaches stability, we plan to migrate most of the in-tree volume plugins to CSI. Stay tuned for more details as the Kubernetes CSI implementation approaches stable.

What are the limitations of alpha?

The alpha implementation of CSI has the following limitations:

The credential fields in CreateVolume, NodePublishVolume, and ControllerPublishVolume calls are not supported.
Block volumes are not supported; only file.
Specifying filesystems is not supported, and defaults to ext4.
CSI drivers must be deployed with the provided "external-attacher", even if they don't implement "ControllerPublishVolume".
Kubernetes scheduler topology awareness is not supported for CSI volumes: in short, sharing information about where a volume is provisioned (zone, region, etc.) to allow the k8s scheduler to make smarter scheduling decisions.

What's next?

Depending on feedback and adoption, the Kubernetes team plans to push the CSI implementation to beta in either 1.10 or 1.11.

How Do I Get Involved?

This project, like all of Kubernetes, is the result of hard work by many contributors from diverse backgrounds working together. A huge thank you to Vladimir Vivien (vladimirvivien), Jan Šafránek (jsafrane), Chakravarthy Nelluri (chakri-nelluri), Bradley Childs (childsb), Luis Pabón (lpabon), and Saad Ali (saad-ali) for their tireless efforts in bringing CSI to life in Kubernetes.

If you're interested in getting involved with the design and development of CSI or any part of the Kubernetes Storage system, join the Kubernetes Storage Special Interest Group (SIG). We're rapidly growing and always welcome new contributors.
Source: kubernetes

Docker for Mac with Kubernetes

You heard about it at DockerCon Europe and now it is here: we are proud to announce that Docker for Mac with beta Kubernetes support is now publicly available as part of the Edge release channel. We hope you are as excited as we are!
With this release you can now run a single node Kubernetes cluster right on your Mac and use both kubectl commands and docker commands to control your containers.
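For example, once the cluster is up, both CLIs operate against the same single-node environment:

docker container ls                  # the Docker view of what is running locally
kubectl get nodes                    # the same machine appears as a single Kubernetes node
kubectl get pods --all-namespaces    # the Kubernetes system pods backing the cluster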
First, a few things to keep in mind:

Docker for Mac required
Kubernetes features are only accessible on macOS for now; Docker for Windows and Docker Enterprise Edition betas will follow at a later date. If you need to install a new copy of Docker for Mac you can download it from the Docker Store.
Edge channel required
Kubernetes support is still considered experimental with this release, so to enable the download and use of Kubernetes components you must be on the Edge channel. The Docker for Mac version should be 17.12.0-ce-mac45 or later after updating.
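To double-check which build you are on from a terminal (the About Docker menu item shows the same information):

docker version
# the reported Client and Server versions should be 17.12.0-ce or later on the Edge channel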
Already using other Kubernetes tools?
If you are already running a version of kubectl pointed at another environment, for example minikube, follow the activation instructions to switch the context to docker-for-desktop, as shown below.
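For example:

kubectl config get-contexts                    # list the contexts kubectl knows about
kubectl config use-context docker-for-desktop  # point kubectl at the local Docker for Mac cluster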

What You Can Do
Docker for Mac and Windows are the most popular way to configure a Docker dev environment and are used every day by hundreds of thousands of developers to build, test, and debug containerized apps. Developers building docker-compose and Swarm-based apps, as well as apps destined for deployment on Kubernetes, can now get a simple-to-use development system that takes optimal advantage of their laptop or workstation. All container tasks (build, run, and push) run on the same Docker instance with a shared set of images, volumes, and containers. Docker for Mac is simple to install, so you can have Docker containers running on your Mac in just a few minutes. And Docker for Mac auto-updates, so you keep getting the latest Docker product revisions.
With experimental Kubernetes support in Docker CE for Mac, Docker can provide users with an end-to-end suite of container-management software and services that spans from developer workstations running Docker for Mac or Windows, through test and CI/CD, to production systems on-premises or in the cloud running Docker Enterprise Edition (EE).
The beauty of building with Docker for Mac or Windows is that you can deploy the exact same set of Docker container images on your desktop as you do with Docker Enterprise Edition (EE) on your production systems. Docker for Mac or Windows is a single node system for building, testing and preparing to ship applications; Docker EE provides the security, control, and scale needed to manage your production applications. You eliminate the “it worked on my machine” problem because you have the same Docker containers running on the same Docker engines, and the same Docker Swarm and Kubernetes orchestrators (coming soon to EE).

Things To Try
If you are new to Kubernetes and looking for some introductory exercises to try, here are a few resources:

The Docker for Mac Kubernetes page has instructions for getting an example app up and running
Follow along with Docker Developer Advocate Elton Stoneman's short video, which demonstrates activating Kubernetes and deploying an application using both a Docker Compose file and a Kubernetes manifest.
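If you would rather start from a bare manifest while following along, a minimal Deployment like the sketch below can be applied to the local cluster; the name and image are arbitrary choices for illustration, not taken from the resources above.

apiVersion: apps/v1          # use apps/v1beta2 if the bundled Kubernetes version predates 1.9
kind: Deployment
metadata:
  name: hello-nginx          # arbitrary example name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-nginx
  template:
    metadata:
      labels:
        app: hello-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80

Save it as hello-nginx.yaml, apply it with kubectl apply -f hello-nginx.yaml, and watch it come up with kubectl get pods.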

Send Us Your Feedback
Send us your feedback, ideas for improvement, bugs, complaints, and more so we can make Docker Desktop better. You can use the Docker community forums for general discussions, and you can also file technical issues directly on GitHub.

Call to Action

Get Docker for Mac
Activate experimental Kubernetes support in Docker for Mac
Watch our DockerCon Europe 2017 Kubernetes Announcement

Source: https://blog.docker.com/feed/

Kubernetes v1.9 releases beta support for Windows Server Containers

With the release of Kubernetes v1.9, our mission of ensuring Kubernetes works well everywhere and for everyone takes a great step forward. We've advanced support for Windows Server to beta, along with continued feature and functional advancements on both the Kubernetes and Windows platforms. SIG-Windows has been working since March of 2016 to open the door for many Windows-specific applications and workloads to run on Kubernetes, significantly expanding the implementation scenarios and the enterprise reach of Kubernetes.

Enterprises of all sizes have made significant investments in .NET and Windows-based applications. Many enterprise portfolios today contain .NET and Windows, with Gartner claiming that 80% of enterprise apps run on Windows. According to StackOverflow Insights, 40% of professional developers use the .NET programming languages (including .NET Core).

But why is all this information important? It means that enterprises have both legacy and new born-in-the-cloud (microservice) applications that use a wide array of programming frameworks. There is a big push in the industry to modernize existing/legacy applications into containers, using an approach similar to "lift and shift". Modernizing existing applications into containers also provides added flexibility for introducing new functionality in additional Windows or Linux containers. Containers are becoming the de facto standard for packaging, deploying, and managing both existing and microservice applications. IT organizations are looking for an easier and more homogeneous way to orchestrate and manage containers across their Linux and Windows environments. Kubernetes v1.9 now offers beta support for Windows Server containers, making it the clear choice for orchestrating containers of any kind.

Features
Alpha support for Windows Server containers in Kubernetes was great for proof-of-concept projects and for visualizing the roadmap for Windows support in Kubernetes. The alpha release had significant drawbacks, however, and lacked many features, especially in networking. SIG-Windows, Microsoft, Cloudbase Solutions, Apprenda, and other community members banded together to create a comprehensive beta release, enabling Kubernetes users to start evaluating and using Windows.

Some key feature improvements for Windows Server containers on Kubernetes include:

Improved support for pods! Multiple Windows Server containers in a pod can now share the network namespace using network compartments in Windows Server. This feature brings the concept of a pod to parity with Linux-based containers.
Reduced network complexity by using a single network endpoint per pod.
Kernel-based load balancing using the Virtual Filtering Platform (VFP) Hyper-V Switch Extension (analogous to Linux iptables).
Container Runtime Interface (CRI) pod- and node-level statistics. Windows Server containers can now be profiled for Horizontal Pod Autoscaling using performance metrics gathered from the pod and the node.
Support for kubeadm commands to add Windows Server nodes to a Kubernetes environment. Kubeadm simplifies the provisioning of a Kubernetes cluster, and with support for Windows Server, you can use a single tool to deploy Kubernetes in your infrastructure.
Support for ConfigMaps, Secrets, and Volumes. These are key features that allow you to separate, and in some cases secure, the configuration of the containers from the implementation.

The crown jewels of Kubernetes 1.9 Windows support, however, are the networking enhancements.
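To make the beta support concrete before turning to networking: once a Windows Server node has joined the cluster, pods are steered onto it by selecting on the node's operating-system label. The sketch below is illustrative only; the label value follows common 1.9-era conventions, and the image tag is an assumption, not taken from this post.

apiVersion: v1
kind: Pod
metadata:
  name: iis-example                  # illustrative name
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows   # schedule the pod onto a Windows Server node
  containers:
    - name: iis
      image: microsoft/iis:windowsservercore-1709   # assumed tag; substitute the Windows image you actually use
      ports:
        - containerPort: 80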
Turning to those networking enhancements: with the release of Windows Server 1709, Microsoft has enabled key networking capabilities in the operating system and the Windows Host Networking Service (HNS) that paved the way for a number of CNI plugins that work with Windows Server containers in Kubernetes. The Layer-3 routed and network overlay plugins supported with Kubernetes 1.9 are listed below:

Upstream L3 Routing: IP routes configured in the upstream ToR
Host-Gateway: IP routes configured on each host
Open vSwitch (OVS) and Open Virtual Network (OVN) with Overlay: supports STT and Geneve tunneling types

You can read more about each plugin's configuration, setup, and runtime capabilities to make an informed selection for your networking stack in Kubernetes.
Even though you have to continue running the Kubernetes control plane and master components on Linux, you are now able to introduce Windows Server as a node in Kubernetes. As a community, this is a huge milestone and achievement. We will now start seeing .NET, .NET Core, ASP.NET, IIS, Windows Services, Windows executables, and many more Windows-based applications in Kubernetes.

What's coming next
A lot of work went into this beta release, but the community realizes there are more areas of investment needed before we can release Windows support as GA (General Availability) for production workloads. Some key areas of focus for the first two quarters of 2018 include:

Continue to make progress in the area of networking. Additional CNI plugins are under development and nearing completion: win-overlay (VXLAN or IP-in-IP encapsulation using Flannel), win-l2bridge (host-gateway), and OVN using cloud networking without overlays.
Support for Kubernetes network policies in ovn-kubernetes.
Support for Hyper-V isolation.
Support for StatefulSet functionality for stateful applications.
Produce installation artifacts and documentation that work on any infrastructure and across many public cloud providers like Microsoft Azure, Google Cloud, and Amazon AWS.
Continuous Integration/Continuous Delivery (CI/CD) infrastructure for SIG-Windows.
Scalability and performance testing.

Even though we have not committed to a timeline for GA, SIG-Windows estimates a GA release in the first half of 2018.

Get Involved
As we continue to make progress towards General Availability of this feature in Kubernetes, we welcome you to get involved, contribute code, provide feedback, deploy Windows Server containers to your Kubernetes cluster, or simply join our community.
If you want to get started deploying Windows Server containers in Kubernetes, read our getting started guide at https://kubernetes.io/docs/getting-started-guides/windows/
We meet every other Tuesday at 12:30 Eastern Standard Time (EST) at https://zoom.us/my/sigwindows. All our meetings are recorded on YouTube and referenced at https://www.youtube.com/playlist?list=PL69nYSiGNLP2OH9InCcNkWNu2bl-gmIU4
Chat with us on Slack at https://kubernetes.slack.com/messages/sig-windows
Find us on GitHub at https://github.com/kubernetes/community/tree/master/sig-windows

Thank you,
Michael Michael (@michmike77)
SIG-Windows Lead
Senior Director of Product Management, Apprenda
Source: kubernetes

PaddlePaddle Fluid: Elastic Deep Learning on Kubernetes

Editor's note: Today's post is a joint post from the deep learning team at Baidu and the etcd team at CoreOS.

Two open source communities, PaddlePaddle (the deep learning framework that originated at Baidu) and Kubernetes® (the most widely known container orchestrator), are announcing the Elastic Deep Learning (EDL) feature in PaddlePaddle's new release, codenamed Fluid.

Fluid EDL includes a Kubernetes controller, the PaddlePaddle auto-scaler, which changes the number of processes of distributed jobs according to the idle hardware resources in the cluster, and a new fault-tolerant architecture as described in the PaddlePaddle design doc.

Industrial deep learning requires significant computation power. Research labs and companies often build GPU clusters managed by SLURM, MPI, or SGE. These clusters either run a submitted job if it requires less than the idle resources, or queue the job for an unpredictably long time. This approach has its drawbacks: in an example with 99 available nodes and a submitted job that requires 100, the job has to wait without using any of the available nodes. Fluid works with Kubernetes to power elastic deep learning jobs, which often lack optimal resources, by helping to expose potential algorithmic problems as early as possible.

Another challenge is that industrial users tend to run deep learning jobs as one stage of a complete data pipeline that also includes components such as the web server and the log collector. Such general-purpose clusters require priority-based elastic scheduling. This makes it possible to run more processes in the web server job and fewer in deep learning during periods of high web traffic, and then prioritize deep learning when web traffic is low. Fluid talks to the Kubernetes API server to understand the global picture and orchestrate the number of processes affiliated with the various jobs.

In both scenarios, PaddlePaddle jobs are tolerant of process spikes and decreases. We achieved this by implementing the new design, which introduces a master process in addition to the old PaddlePaddle architecture described in a previous blog post. In the new design, as long as there are three processes left in a job, it continues. In extreme cases where all processes are killed, the job can be restored and resumed.

We tested Fluid EDL for two use cases: 1) the Kubernetes cluster runs only PaddlePaddle jobs; and 2) the cluster runs both PaddlePaddle and Nginx jobs.

In the first test, we started up to 20 PaddlePaddle jobs one by one with a 10-second interval. Each job has 60 trainers and 10 parameter server processes, and lasts for hours. We repeated the experiment 20 times: 10 with Fluid EDL turned off and 10 with Fluid EDL turned on. In Figure 1, solid lines correspond to the first 10 experiments and dotted lines to the rest. In the upper part of the figure, we see that the number of pending jobs increases monotonically without EDL. However, when EDL is turned on, resources are evenly distributed to all jobs. Fluid EDL kills some existing processes to make room for new jobs and for jobs coming in at a later point in time. In both cases, the cluster is equally utilized (see the lower part of the figure).

Figure 1. Fluid EDL evenly distributes resources among jobs.

In the second test, each experiment ran 400 Nginx pods, which have higher priority than the six PaddlePaddle jobs. Initially, each PaddlePaddle job had 15 trainers and 10 parameter servers.
We killed 100 Nginx pods every 90 seconds until only 100 were left, and then we started to increase the number of Nginx pods by 100 every 90 seconds. The upper part of Figure 2 shows this process. The middle of the diagram shows that Fluid EDL automatically started some PaddlePaddle processes as the number of Nginx pods decreased, and killed PaddlePaddle processes as the number of Nginx pods increased later on. As a result, the cluster maintained around 90% utilization, as shown in the bottom of the figure. When Fluid EDL was turned off, there was no automatic increase of PaddlePaddle processes, and utilization fluctuated with the varying number of Nginx pods.

Figure 2. Fluid changes PaddlePaddle processes with the change of Nginx processes.

We continue to work on Fluid EDL and welcome comments and contributions. Visit the PaddlePaddle repo, where you can find the design doc, a simple tutorial, and experiment details.

Xu Yan (Baidu Research)
Helin Wang (Baidu Research)
Yi Wu (Baidu Research)
Xi Chen (Baidu Research)
Weibao Gong (Baidu Research)
Xiang Li (CoreOS)
Yi Wang (Baidu Research)
Source: kubernetes

Five Days of Kubernetes 1.9

Kubernetes 1.9 is live, made possible by hundreds of contributors pushing thousands of commits in this latest release.

The community has tallied around 32,300 commits in the main repo and continues rapid growth outside of the main repo, which signals growing maturity and stability for the project. The community has logged more than 90,700 commits across all repos, and 7,800 commits across all repos for v1.8.0 to v1.9.0 alone.

With the help of our growing community of 1,400-plus contributors, we issued more than 4,490 PRs and pushed more than 7,800 commits to deliver Kubernetes 1.9 with many notable updates, including enhancements for the workloads and stateful application support areas. This all points to an increasingly extensible and standards-based Kubernetes ecosystem.

While many improvements have been contributed, we highlight key features in the series of in-depth posts listed below. Follow along and see what's new and improved with workloads, Windows support, and more.

Day 1: 5 Days of Kubernetes 1.9
Day 2: Windows and Docker support for Kubernetes (beta)
Day 3: Storage, CSI framework (alpha)
Day 4: Web Hook and Mission Critical, Dynamic Admission Control
Day 5: Introducing client-go version 6
Day 6: Workloads API

Connect
Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Follow us on Twitter @Kubernetesio for the latest updates
Connect with the community on Slack
Get involved with the Kubernetes project on GitHub
Source: kubernetes