Announcing the Docker Customer Innovation Awards

We are excited to announce the first annual Docker Customer Innovation Award winners at DockerCon Barcelona today! We launched the awards this year to recognize customers who stand out in their adoption of the Docker Enterprise platform to drive transformation within IT and their business.
Thirty-eight companies were nominated, all of whom have spoken publicly about their containerization initiatives recently or plan to soon. Looking at so many excellent nominees, we realized there were really two different stories, so we created two award categories. In each category, we have a winner and three finalists.
 
 
Business Transformation
Customers in this category have developed company-wide initiatives aimed at transforming IT and their business in a significant way, with Docker Enterprise as a key part of it. They typically started their journey two or more years ago and have containerized multiple applications across the organization.

WINNER:

Societe Generale transformed how the bank develops its software by building a container platform for migrating thousands of its applications to the cloud.

 

FINALISTS:

Bosch built a global platform that enables developers to build and deliver new software solutions and updates at digital speed.

Liberty Mutual consolidated infrastructure and VMs significantly, paving the way for innovation and a multi-cloud future.

MetLife modernized hundreds of traditional applications, driving 66 percent cost savings and creating a self-funding model to fuel change and innovation, and cut new product time to market by two-thirds.

 
Rising Stars
Customers in this category are early in their containerization journey and have already leveraged their first project with Docker Enterprise as a catalyst to innovate their business — often creating new applications or services.

WINNER:

Desigual built a brand-new in-store app in less than five months to connect customers and associates, creating an outstanding brand and shopping experience.

FINALISTS:

BCG leverages Docker Enterprise to develop breakthrough analytics and machine-learning solutions for clients with BCG’s Source.ai offering.

Citizens Bank (Franklin American Mortgage) created a dedicated innovation team that sparked cultural change at a traditional mortgage company, allowing it to bring new products to market in weeks or months.

The Dutch Ministry of Justice evaluated Docker Enterprise as a way to accelerate application development, which helped spark an effort to modernize juvenile custodian services from whiteboards and sticky notes to a mobile app.

We want to give a big thanks to the winners and finalists, and to all of our remarkable customers who have started innovation journeys with Docker.
We’ve opened the nomination process for 2019, since we will be announcing winners at DockerCon 2019 on April 29-May 2. If you’re interested in submitting or want to nominate someone else, you can learn how here.


Source: https://blog.docker.com/feed/

Simplifying Kubernetes with Docker Compose and Friends

Today we’re happy to announce we’re open sourcing our support for using Docker Compose on Kubernetes. We’ve had this capability in Docker Enterprise for a little while, but as of today you can use it on any Kubernetes cluster you choose.

Why do I need Compose if I already have Kubernetes?
The Kubernetes API is really quite large. There are more than 50 first-class objects in the latest release, from Pods and Deployments to ValidatingWebhookConfiguration and ResourceQuota. This can lead to verbose configuration, which then needs to be managed by you, the developer. Let’s look at a concrete example of that.
The Sock Shop is the canonical example of a microservices application. It consists of multiple services using different technologies and backends, all packaged up as Docker images. It also provides example configurations using different tools, including both Compose and raw Kubernetes configuration. Let’s have a look at the relative sizes of those configurations:
$ git clone https://github.com/microservices-demo/microservices-demo.git
$ cd microservices-demo/deploy/kubernetes/manifests
$ (Get-ChildItem -Recurse -File | Get-Content | Measure-Object -line).Lines
908
$ cd ../../docker-compose
$ (Get-Content docker-compose.yml | Measure-Object -line).Lines
174
Describing the exact same multi-service application using just the raw Kubernetes objects takes more than 5 times the amount of configuration than with Compose. That’s not just an upfront cost to author – it’s also an ongoing cost to maintain. The Kubernetes API is amazingly general purpose – it exposes low-level primitives for building the full range of distributed systems. Compose meanwhile isn’t an API but a high-level tool focused on developer productivity. That’s why combining them together makes sense. For the common case of a set of interconnected web services, Compose provides an abstraction that simplifies Kubernetes configuration. For everything else you can still drop down to the raw Kubernetes API primitives. Let’s see all that in action.
First we need to install the Compose on Kubernetes controller into your Kubernetes cluster. This controller uses the standard Kubernetes extension points to introduce the `Stack` to the Kubernetes API. You can use any Kubernetes cluster you like, but if you don’t already have one available then remember that Docker Desktop comes with Kubernetes and the Compose controller built-in, and enabling it is as simple as ticking a box in the settings.
To install the controller manually on any Kubernetes cluster, see the full documentation for the current installation instructions.
Next let’s write a simple Compose file:
version: "3.7"
services:
  web:
    image: dockerdemos/lab-web
    ports:
     - "33000:80"
  words:
    image: dockerdemos/lab-words
    deploy:
      replicas: 3
      endpoint_mode: dnsrr
  db:
    image: dockerdemos/lab-db
We’ll then use the docker client to deploy this to a Kubernetes cluster running the controller:
$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml words
Waiting for the stack to be stable and running…
db: Ready       [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
web: Ready      [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
words: Ready    [pod status: 1/3 ready, 2/3 pending, 0/3 failed]
Stack words is stable and running
We can then interact with those objects via the Kubernetes API. Here you can see we’ve created the lower-level objects like Services, Pods, Deployments and ReplicaSets automatically:
$ kubectl get all
NAME                       READY     STATUS    RESTARTS   AGE
pod/db-85849797f6-bhpm8    1/1       Running   0          57s
pod/web-7974f485b7-j7nvt   1/1       Running   0          57s
pod/words-8fd6c974-44r4s   1/1       Running   0          57s
pod/words-8fd6c974-7c59p   1/1       Running   0          57s
pod/words-8fd6c974-zclh5   1/1       Running   0          57s

NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/db              ClusterIP      None            <none>        55555/TCP      57s
service/kubernetes      ClusterIP      10.96.0.1       <none>        443/TCP        4d
service/web             ClusterIP      None            <none>        55555/TCP      57s
service/web-published   LoadBalancer   10.102.236.49   localhost     33000:31910/TCP   57s
service/words           ClusterIP      None            <none>        55555/TCP      57s

NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/db      1         1         1            1           57s
deployment.apps/web     1         1         1            1           57s
deployment.apps/words   3         3         3            3           57s

NAME                             DESIRED   CURRENT   READY     AGE
replicaset.apps/db-85849797f6    1         1         1         57s
replicaset.apps/web-7974f485b7   1         1         1         57s
replicaset.apps/words-8fd6c974   3         3         3         57s
It’s important to note that this isn’t a one-time conversion. The Compose on Kubernetes API Server introduces the Stack resource to the Kubernetes API. So we can query and manage everything at the same level of abstraction as we’re building the application. That makes delving into the details above useful for understanding how things work, or debugging issues, but not required most of the time:
$ kubectl get stack
NAME      STATUS      PUBLISHED PORTS   PODS     AGE      
words     Running     33000             5/5      4m
Integration with other Kubernetes tools
Because Stack is now a native Kubernetes object, you can work with it using other Kubernetes tools. As an example, save the following as `stack.yaml`:
kind: Stack
apiVersion: compose.docker.com/v1beta2
metadata:
 name: hello
spec:
 services:
 - name: hello
   image: garethr/skaffold-example
   ports:
    - mode: ingress
     target: 5678
     published: 5678
     protocol: tcp
You can use a tool like Skaffold to have the image automatically rebuild and the Stack automatically redeployed whenever you change any of the details of your application. This makes for a great local inner-loop development experience. The following `skaffold.yaml` configuration file is all you need.
apiVersion: skaffold/v1alpha5
kind: Config
build:
 tagPolicy:
   sha256: {}
 artifacts:
 - image: garethr/skaffold-example
 local:
   useBuildkit: true
deploy:
 kubectl:
   manifests:
      - stack.yaml
The future
We already have some thoughts about a Helm plugin to make describing your application with Compose and deploying with Helm as easy as possible. We have lots of other ideas for helping to simplify the developer experience of working with Kubernetes too, without losing any of the power of the platform. We also want to work with the wider Cloud Native community, so if you have ideas and suggestions please let us know.
Kubernetes is designed to be extended, and we hope you like what we’ve been able to release today. If you’re one of the millions of Compose users you can now more easily move to and manage your applications on Kubernetes. If you’re a Kubernetes user struggling with too much low-level configuration then give Compose a try. Let us know in the comments what you think, and head over to GitHub to try things out and even open your first PR:

Compose on Kubernetes controller


Source: https://blog.docker.com/feed/

Introducing Docker Desktop Enterprise

Nearly 1.4 million developers use Docker Desktop every single day because it is the simplest and easiest way to do container-based development. Docker Desktop provides the Docker Engine with Swarm and Kubernetes orchestrators right on the desktop, all from a single install. While this is great for an individual user, in enterprise environments administrators often want to automate the Docker Desktop installation and ensure everyone on the development team has the same configuration, following enterprise requirements and creating applications based on architectural standards.
 

 
Docker Desktop Enterprise is a new desktop offering that is the easiest, fastest and most secure way to create and deliver production-ready containerized applications. Developers can work with frameworks and languages of their choice, while IT can securely configure, deploy and manage development environments that align to corporate standards and practices. This enables organizations to rapidly deliver containerized applications from development to production.
Enterprise Manageability That Helps Accelerate Time-to-Production
Docker Desktop Enterprise provides a secure way to configure, deploy and manage developer environments while enforcing safe development standards that align to corporate policies and practices. IT teams and application architects can present developers with application templates designed specifically for their team, to bootstrap and standardize the development process and provide a consistent environment all the way to production.
Key new features for IT:

Packaged as standard MSI (Win) and PKG (Mac) distribution files that work with existing endpoint management tools and support lockable settings via policy files
Present developers with customized and approved application templates, ready for coding

Enterprise Deployment & Configuration Packaging
Docker Desktop Enterprise enables IT desktop admins to deploy and manage Docker Desktop Enterprise across distributed developer teams with their preferred endpoint management tools using standard MSI and PKG files. No manual intervention or extra configuration from developers is required and desktop administrators can enable or disable particular settings within Docker Desktop Enterprise to meet corporate standards and provide the best developer experience.

Application Templates

Application architects can provide developers with trusted, customized application templates through the Application Designer interface in Docker Desktop Enterprise, helping to improve reliability and security by ensuring developers start from approved designs. Together, application teams and IT can implement consistent security and development practices across the entire software supply chain, from the developers’ desktops all the way to production.
Increase Developer Productivity and Ship Production-ready Containerized Applications
For developers, Docker Desktop Enterprise is the easiest and fastest way to build production-ready containerized applications working with frameworks and languages of choice and targeting every platform. Developers can rapidly innovate by leveraging company-provided application templates that instantly replicate production-approved application configurations on the local desktop.
Key new features for developers:

Configurable version packs instantly replicate production environment configurations on the local desktop
Application Designer interface allows for template-based workflows for creating containerized applications – no Docker CLI commands are required to get started

Configurable Version Packs

Docker Desktop has Docker Engine and Kubernetes built in, and with the addition of swappable version packs you can now synchronize your desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. You get the assurance that your application will not break due to incompatible API calls, and if you have multiple downstream environments running different versions of the APIs, you can quickly change your desktop configuration with the click of a button.

Application Designer       

The Application Designer is a new workflow that provides production-ready application and service templates that let you get coding quickly, with the reassurance that your application meets architectural standards. And even if you’ve never launched a container before, the Application Designer interface provides the foundational container artifacts and your organization’s skeleton code, getting you started with containers in minutes. Plus, Docker Desktop Enterprise integrates with your choice of development tools, whether you prefer an IDE or a text editor and command line interfaces.
 
The Docker Desktop Products
Docker Desktop Enterprise is a new addition to our desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. Docker Desktop is the simplest way to get started with container-based development on both Windows 10 and macOS with a set of features now available for the enterprise:

 
To learn more about Docker Desktop Enterprise:

Sign up to learn more about Docker Desktop Enterprise as we approach general availability
Watch the livestreams of the DockerCon EU keynotes, Tuesday from 9:00-11:00 CET and Wednesday from 9:30-11:00 CET (replays will also be available)
Download Docker Desktop Community and build your first containerized application in minutes [ Windows | macOS ]


Source: https://blog.docker.com/feed/

Docker App and CNAB

Docker App is a new tool we spoke briefly about back at DockerCon US 2018. We’ve been working on `docker-app` to make container applications simpler to share and easier to manage across different teams and between different environments, and we open sourced it so you can already download Docker App from GitHub at https://github.com/docker/app.
In talking to others about problems they’ve experienced sharing and collaborating on the broad area we call “applications”, we came to a realisation: it’s a more general problem that others have been working on too. That’s why we’re happy to collaborate with Microsoft on the new Cloud Native Application Bundle (CNAB) specification.

Today’s cloud native applications typically use different technologies, each with their own toolchain. Maybe you’re using ARM templates and Helm charts, or CloudFormation and Compose, or Terraform and Ansible. There is no single solution in the market for defining and packaging these multi-service, multi-format distributed applications.
CNAB is an open source, cloud-agnostic specification for packaging and running distributed applications that aims to solve some of these problems. CNAB unifies the management of multi-service, distributed applications across different toolchains into a single all-in-one packaging format.
The draft specification is available at cnab.io and we’re actively looking both for folks interested in contributing to the spec itself, and to people interested in building tools around the specification. The latest release of Docker App is one such tool that implements the current CNAB spec. That means it can be used to both build CNAB bundles for Compose (which can then be used with any other CNAB client), and also to install, upgrade and uninstall any other CNAB bundle.
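To make the idea of a single packaging format a little more concrete, the sketch below shows roughly the kind of metadata a CNAB bundle descriptor carries. This is only an illustrative approximation, not the exact schema: the real artifact is a JSON file (bundle.json), the draft specification at cnab.io is authoritative, and the image name and namespace here are placeholders.
name: monitoring
version: 0.1.0
description: A basic prometheus stack
invocationImages:
  # the invocation image holds the logic to install, upgrade and uninstall the bundle
  - imageType: docker
    image: <your-namespace>/monitoring-invoc:0.1.0
# parameters and credentials the bundle accepts at install time (left empty in this sketch)
parameters: {}
credentials: {}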
 
Sharing CNAB bundles on Docker Hub
One of the limitations of standalone Compose files is that they cannot be shared on Docker Hub or Docker Trusted Registry. Docker App solves this issue too. Here’s a simple Docker application which launches a very simple Prometheus stack:
version: 0.1.0
name: monitoring
description: A basic prometheus stack
maintainers:
 - name: Gareth Rushgrove
   email: garethr@docker.com

---

version: '3.7'

services:
 prometheus:
   image: prom/prometheus:${versions.prometheus}
   ports:
     - ${ports.prometheus}:9090

 alertmanager:
   image: prom/alertmanager:${versions.alertmanager}
   ports:
     - ${ports.alertmanager}:9093

---

ports:
   prometheus: 9090
   alertmanager: 9093
versions:
   prometheus: latest
   alertmanager: latest
With that saved as `monitoring.dockerapp` we can now build a CNAB and share that on Docker Hub.
$ docker-app push --namespace <your-namespace>
Now on another machine we can still interact with the shared application. For instance let’s use the `inspect` command to get information about our application:
$ docker-app inspect <your-namespace>/monitoring:0.1.0
monitoring 0.1.0

Maintained by: Gareth Rushgrove <garethr@docker.com>

A basic prometheus stack

Services (2) Replicas Ports Image
------------ -------- ----- -----
prometheus   1        9090  prom/prometheus:latest
alertmanager 1        9093  prom/alertmanager:latest

Parameters (4)        Value
--------------        -----
ports.alertmanager    9093
ports.prometheus      9090
versions.alertmanager latest
versions.prometheus   latest
All the information from the Compose file is stored with the CNAB on Docker Hub, and if you notice, it’s also parameterized, so values can be substituted at runtime to fit the deployment requirements. We can install the application directly from Docker Hub as well:
$ docker-app install <your-namespace>/monitoring:0.1.0 --set ports.alertmanager=9095

Installing a Helm chart using Docker App
One question that has come up in the conversations we’ve had so far is how `docker-app` and now CNAB relates to Helm charts. The good news is that they all work great together! Here is an example using `docker-app` to install a CNAB bundle that packages a Helm chart. The following example uses the `hellohelm` example from the CNAB example bundles.
$ docker-app install -c local bundle.json
Do install for hellohelm
helm install --namespace hellohelm -n hellohelm /cnab/app/charts/alpine
NAME:   hellohelm
LAST DEPLOYED: Wed Nov 28 13:58:22 2018
NAMESPACE: hellohelm
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod
NAME              AGE
hellohelm-alpine  0s

Next steps
If you’re interested in the technical details of the CNAB specification, either to see how it works under the hood or to maybe get involved in the specification work or building tools against it, you can find the spec at cnab.io.
If you’d like to get started building applications with Docker App you can download the latest release from github.com/docker/app and check out some of the examples provided in the repository.


Source: https://blog.docker.com/feed/

Announcing Cloud Native Application Bundle (CNAB)

As more organizations pursue cloud-native applications and infrastructures for creating modern software environments, it has become clear that there is no single solution in the market for defining and packaging these multi-service, multi-format distributed applications. Real-world applications can now span on-premises infrastructure and cloud-based services, requiring multiple tools like Terraform for the infrastructure, Helm charts and Docker Compose files for the applications, and CloudFormation or ARM templates for the cloud services. Each of these needs to be managed separately.
 

 
To address this problem, Microsoft, in collaboration with Docker, is announcing the Cloud Native Application Bundle (CNAB) – an open source, cloud-agnostic specification for packaging and running distributed applications. CNAB unifies the management of multi-service, distributed applications across different toolchains into a single all-in-one packaging format. The CNAB specification lets you define resources that can be deployed to any combination of runtime environments and tooling including Docker Engine, Kubernetes, Helm, automation tools and cloud services.
Docker is the first to implement CNAB for containerized applications and will be expanding it across the Docker platform to support new application development, deployment and lifecycle management. Initially CNAB support will be released as part of our docker-app experimental tool for building, packaging and managing cloud-native applications. Docker lets you package CNAB bundles as Docker images, so you can distribute and share through Docker registry tools including Docker Hub and Docker Trusted Registry. Additionally, Docker will enable organizations to deploy and manage CNAB-based applications in Docker Enterprise in the upcoming months.
The draft specification is available at cnab.io and we’re actively looking for contributors to the spec itself and people interested in building tools around the specification. Docker will be contributing to the CNAB specification.
Source: https://blog.docker.com/feed/

Core Workloads API GA

DaemonSet, Deployment, ReplicaSet, and StatefulSet are GA

Editor’s Note: We’re happy to announce that the Core Workloads API is GA in Kubernetes 1.9! This blog post from Kenneth Owens reviews how Core Workloads got to GA from its origins, reveals changes in 1.9, and talks about what you can expect going forward.

In the Beginning …
There were Pods, tightly coupled containers that share resource requirements, networking, storage, and a lifecycle. Pods were useful, but, as it turns out, users wanted to seamlessly, reproducibly, and automatically create many identical replicas of the same Pod, so we created ReplicationController.
Replication was a step forward, but what users really needed was higher level orchestration of their replicated Pods. They wanted rolling updates, roll backs, and roll overs. So the OpenShift team created DeploymentConfig. DeploymentConfigs were also useful, and OpenShift users were happy. In order to allow all OSS Kubernetes users to share in the elation, and to take advantage of set-based label selectors, ReplicaSet and Deployment were added to the extensions/v1beta1 group version providing rolling updates, roll backs, and roll overs for all Kubernetes users.
That mostly solved the problem of orchestrating containerized 12 factor apps on Kubernetes, so the community turned its attention to a different problem. Replicating a Pod <n> times isn’t the right hammer for every nail in your cluster. Sometimes, you need to run a Pod on every Node, or on a subset of Nodes (for example, shared side cars like log shippers and metrics collectors, Kubernetes add-ons, and Distributed File Systems). The state of the art was Pods combined with NodeSelectors, or static Pods, but this is unwieldy. After having grown used to the ease of automation provided by Deployments, users demanded the same features for this category of application, so DaemonSet was added to extensions/v1beta1 as well.
For a time, users were content, until they decided that Kubernetes needed to be able to orchestrate more than just 12 factor apps and cluster infrastructure. Whether your architecture is N-tier, service oriented, or micro-service oriented, your 12 factor apps depend on stateful workloads (for example, RDBMSs, distributed key value stores, and messaging queues) to provide services to end users and other applications. These stateful workloads can have availability and durability requirements that can only be achieved by distributed systems, and users were ready to use Kubernetes to orchestrate the entire stack.
While Deployments are great for stateless workloads, they don’t provide the right guarantees for the orchestration of distributed systems. These applications can require stable network identities, ordered, sequential deployment, updates, and deletion, and stable, durable storage. PetSet was added to the apps/v1beta1 group version to address this category of application. Unfortunately, we were less than thoughtful with its naming, and, as we always strive to be an inclusive community, we renamed the kind to StatefulSet.
Finally, we were done… Or were we?

Kubernetes 1.8 and apps/v1beta2
Pod, ReplicationController, ReplicaSet, Deployment, DaemonSet, and StatefulSet came to collectively be known as the core workloads API. We could finally orchestrate all of the things, but the API surface was spread across three groups, had many inconsistencies, and left users wondering about the stability of each of the core workloads kinds.
It was time to stop adding new features and focus on consistency and stability. Pod and ReplicationController were at GA stability, and even though you can run a workload in a Pod, it’s a nucleus primitive that belongs in core. As Deployments are the recommended way to manage your stateless apps, moving ReplicationController would serve no purpose. In Kubernetes 1.8, we moved all the other core workloads API kinds (Deployment, DaemonSet, ReplicaSet, and StatefulSet) to the apps/v1beta2 group version. This had the benefit of providing a better aggregation across the API surface, and allowing us to break backward compatibility to fix inconsistencies. Our plan was to promote this new surface to GA, wholesale and as is, when we were satisfied with its completeness. The modifications in this release, which are also implemented in apps/v1, are described below.

Selector Defaulting Deprecated
In prior versions of the apps and extensions groups, label selectors of the core workloads API kinds were, when left unspecified, defaulted to a label selector generated from the kind’s template’s labels. This was completely incompatible with strategic merge patch and kubectl apply. Moreover, we’ve found that defaulting the value of a field from the value of another field of the same object is an anti-pattern, in general, and particularly dangerous for the API objects used to orchestrate workloads.

Immutable Selectors
Selector mutation, while allowing for some use cases like promotable Deployment canaries, is not handled gracefully by our workload controllers, and we have always strongly cautioned users against it. To provide a consistent, usable, and stable API, selectors were made immutable for all kinds in the workloads API.
We believe that there are better ways to support features like promotable canaries and orchestrated Pod relabeling, but, if restricted selector mutation is a necessary feature for our users, we can relax immutability in the future without breaking backward compatibility.
The development of features like promotable canaries, orchestrated Pod relabeling, and restricted selector mutability is driven by demand signals from our users. If you are currently modifying the selectors of your core workload API objects, please tell us about your use case via a GitHub issue, or by participating in SIG apps.

Default Rolling Updates
Prior to apps/v1beta2, some kinds defaulted their update strategy to something other than RollingUpdate (e.g. apps/v1beta1/StatefulSet uses OnDelete by default). We wanted to be confident that RollingUpdate worked well prior to making it the default update strategy, and we couldn’t change the default behavior in released versions without breaking our promise with respect to backward compatibility. In apps/v1beta2 we enabled RollingUpdate for all core workloads kinds by default.

CreatedBy Annotation Deprecated
The “kubernetes.io/created-by” annotation was a legacy holdover from the days before garbage collection. Users should use an object’s ControllerRef from its ownerReferences to determine object ownership. We deprecated this feature in 1.8 and removed it in 1.9.

Scale Subresources
A scale subresource was added to all of the applicable kinds in apps/v1beta2 (DaemonSet scales based on its node selector).

Kubernetes 1.9 and apps/v1
In Kubernetes 1.9, as planned, we promoted the entire core workloads API surface to GA in the apps/v1 group version. We made a few more changes to make the API consistent, but apps/v1 is mostly identical to apps/v1beta2.
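As an illustration of what the promoted API expects (the names and image below are placeholders, not taken from the original post), here is a minimal Deployment written against apps/v1: the label selector must now be spelled out explicitly, since selector defaulting is deprecated and the selector is immutable, while RollingUpdate no longer needs to be specified because it is the default update strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
spec:
  replicas: 3
  selector:
    matchLabels:          # required in apps/v1 and immutable; no longer defaulted from the template labels
      app: example-web
  template:
    metadata:
      labels:
        app: example-web  # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx:1.13 # placeholder image
        ports:
        - containerPort: 80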
The reality is that most users have been treating the beta versions of the core workloads API as GA for some time now. Anyone who is still using ReplicationControllers and shying away from DaemonSets, Deployments, and StatefulSets, due to a perceived lack of stability, should plan to migrate their workloads (where applicable) to apps/v1. The minor changes that were made during promotion are described below.

Garbage Collection Defaults to Delete
Prior to apps/v1 the default garbage collection policy for Pods in a DaemonSet, Deployment, ReplicaSet, or StatefulSet, was to orphan the Pods. That is, if you deleted one of these kinds, the Pods that they owned would not be deleted automatically unless cascading deletion was explicitly specified. If you use kubectl, you probably didn’t notice this, as these kinds are scaled to zero prior to deletion. In apps/v1 all core workloads API objects will now, by default, be deleted when their owner is deleted. For most users, this change is transparent.

Status Conditions
Prior to apps/v1 only Deployment and ReplicaSet had Conditions in their Status objects. For consistency’s sake, either all of the objects or none of them should have conditions. After some debate, we decided that Conditions are useful, and we added Conditions to StatefulSetStatus and DaemonSetStatus. The StatefulSet and DaemonSet controllers currently don’t populate them, but we may choose to communicate conditions to clients, via this mechanism, in the future.

Scale Subresource Migrated to autoscaling/v1
We originally added a scale subresource to the apps group. This was the wrong direction for integration with autoscaling, and, at some point, we would like to use custom metrics to autoscale StatefulSets. So the apps/v1 group version uses the autoscaling/v1 scale subresource.

Migration and Deprecation
The question most of you are probably asking now is, “What’s my migration path onto apps/v1 and how soon should I plan on migrating?” All of the group versions prior to apps/v1 are deprecated as of Kubernetes 1.9, and all new code should be developed against apps/v1, but, as discussed above, many of our users treat extensions/v1beta1 as if it were GA. We realize this, and the minimum support timelines in our deprecation policy are just that, minimums.
In future releases, before completely removing any of the group versions, we will disable them by default in the API Server. At this point, you will still be able to use the group version, but you will have to explicitly enable it. We will also provide utilities to upgrade the storage version of the API objects to apps/v1. Remember, all of the versions of the core workloads kinds are bidirectionally convertible. If you want to manually update your core workloads API objects now, you can use kubectl convert to convert manifests between group versions.

What’s Next?
The core workloads API surface is stable, but it’s still software, and software is never complete. We often add features to stable APIs to support new use cases, and we will likely do so for the core workloads API as well. GA stability means that any new features that we do add will be strictly backward compatible with the existing API surface. From this point forward, nothing we do will break our backwards compatibility guarantees.
If you’re looking to participate in the evolution of this portion of the API, please feel free to get involved in GitHub or to participate in SIG Apps.

– Kenneth Owens, Software Engineer, Google

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
Source: kubernetes

Cisco Now Reselling Docker Enterprise Edition

Today we are excited to announce the expansion of our partnership with Cisco: the availability of Docker Enterprise Edition (EE), our container management platform, on the Cisco Global Price List (GPL) and the release of the latest Cisco Validated Design (CVD):
Cisco UCS Infrastructure with Contiv and Docker Enterprise Edition for Container Management.

Now customers can purchase Docker EE directly from Cisco and their joint resellers to jumpstart their New Year’s resolution for a more modern application architecture, reduce IT costs, and redirect savings to innovation projects. And with our latest CVD for Cisco UCS compute infrastructure with Contiv, a secure container networking fabric, we’ve provided a roadmap on how to get started so customers and partners can gain a faster, more reliable and predictable implementation of Docker EE.
For enterprises looking to use Docker’s container management platform but not sure where to start, we can help you take the first step. The Migrating Traditional Applications (MTA) Program, designed for IT operations teams, helps enterprises modernize existing legacy .NET Windows or Java Linux applications in just five days with Docker and Cisco Advanced Services, without modifying source code or re-architecting the application. The results have been incredible, with customers saving over 50% on infrastructure costs and using those savings to unlock innovative IT projects. What are you waiting for? Let us help you get started with Docker EE today.
For Cisco resellers looking to deliver Docker EE to your customers, we can help you get started and get trained: go to Cisco Sales Central.
For more information on Docker EE and Cisco Solutions:

Watch the webinar: Docker and Cisco – Integrated Container Solutions for the Enterprise
Contact sales to learn more about getting started with your MTA pilot

 


Source: https://blog.docker.com/feed/

Introducing client-go version 6

The Kubernetes API server exposes a REST interface consumable by any client. client-go is the official client library for the Go programming language. It is used both internally by Kubernetes itself (for example, inside kubectl) as well as by numerous external consumers: operators like the etcd-operator or prometheus-operator; higher level frameworks like KubeLess and OpenShift; and many more.
The version 6 update to client-go adds support for Kubernetes 1.9, allowing access to the latest Kubernetes features. While the changelog contains all the gory details, this blog post highlights the most prominent changes and intends to guide you on how to upgrade from version 5.
This blog post is one of a number of efforts to make client-go more accessible to third party consumers. Easier access is a joint effort by a number of people from numerous companies, all meeting in the #client-go-docs channel of the Kubernetes Slack. We are happy to hear feedback and ideas for further improvement, and of course appreciate anybody who wants to contribute.

API group changes
The following API group promotions are part of Kubernetes 1.9:
Workload objects (Deployments, DaemonSets, ReplicaSets, and StatefulSets) have been promoted to the apps/v1 API group in Kubernetes 1.9. client-go follows this transition and allows developers to use the latest version by importing the k8s.io/api/apps/v1 package instead of k8s.io/api/apps/v1beta1 and by using Clientset.AppsV1().
Admission Webhook Registration has been promoted to the admissionregistration.k8s.io/v1beta1 API group in Kubernetes 1.9. The former ExternalAdmissionHookConfiguration type has been replaced by the incompatible ValidatingWebhookConfiguration and MutatingWebhookConfiguration types. Moreover, the webhook admission payload type AdmissionReview in admission.k8s.io has been promoted to v1beta1. Note that versioned objects are now passed to webhooks. Refer to the admission webhook documentation for details.

Validation for CustomResources
In Kubernetes 1.8 we introduced CustomResourceDefinitions (CRD) pre-persistence schema validation as an alpha feature. With 1.9, the feature got promoted to beta and will be enabled by default.
As a client-go user, you will find the API types at k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1.
The OpenAPI v3 schema can be defined in the CRD spec as:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata: ...
spec:
  ...
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            version:
              type: string
              enum:
              - "v1.0.0"
              - "v1.0.1"
            replicas:
              type: integer
              minimum: 1
              maximum: 10

The schema in the above CRD applies the following validations for the instance:
spec.version must be a string and must be either "v1.0.0" or "v1.0.1".
spec.replicas must be an integer and must have a minimum value of 1 and a maximum value of 10.
A CustomResource with invalid values for spec.version (v1.0.2) and spec.replicas (15) will be rejected:

apiVersion: mygroup.example.com/v1
kind: App
metadata:
  name: example-app
spec:
  version: "v1.0.2"
  replicas: 15

$ kubectl create -f app.yaml
The App "example-app" is invalid: []: Invalid value: map[string]interface {}{"apiVersion":"mygroup.example.com/v1", "kind":"App", "metadata":map[string]interface {}{"creationTimestamp":"2017-08-31T20:52:54Z", "uid":"5c674651-8e8e-11e7-86ad-f0761cb232d1", "selfLink":"", "clusterName":"", "name":"example-app", "namespace":"default", "deletionTimestamp":interface {}(nil), "deletionGracePeriodSeconds":(*int64)(nil)}, "spec":map[string]interface {}{"replicas":15, "version":"v1.0.2"}}:
validation failure list:
spec.replicas in body should be less than or equal to 10
spec.version in body should be one of [v1.0.0 v1.0.1]

Note that with Admission Webhooks, Kubernetes 1.9 provides another beta feature to validate objects before they are created or updated. Starting with 1.9, these webhooks also allow mutation of objects (for example, to set defaults or to inject values). Of course, webhooks work with CRDs as well. Moreover, webhooks can be used to implement validations that are not easily expressible with CRD validation. Note that webhooks are harder to implement than CRD validation, so for many purposes, CRD validation is the right tool.

Creating namespaced informers
Often objects in one namespace or only with certain labels are to be processed in a controller. Informers now allow you to tweak the ListOptions used to query the API server to list and watch objects. Uninitialized objects (for consumption by initializers) can be made visible by setting IncludeUninitialized to true. All this can be done using the new NewFilteredSharedInformerFactory constructor for shared informers:

import "k8s.io/client-go/informers"
...
sharedInformers := informers.NewFilteredSharedInformerFactory(
    client,
    30*time.Minute,
    "some-namespace",
    func(opt *metav1.ListOptions) {
        opt.LabelSelector = "foo=bar"
    },
)

Note that the corresponding lister will only know about the objects matching the namespace and the given ListOptions. Note that the same restrictions apply for a List or Watch call on a client.
This production code example of a cert-manager demonstrates how namespace informers can be used in real code.

Polymorphic scale client
Historically, only types in the extensions API group would work with autogenerated Scale clients. Furthermore, different API groups use different Scale types for their /scale subresources.
To remedy these issues, k8s.io/client-go/scale provides a polymorphic scale client to scale different resources in different API groups in a coherent way:

import (
    apimeta "k8s.io/apimachinery/pkg/api/meta"
    discocache "k8s.io/client-go/discovery/cached"
    "k8s.io/client-go/discovery"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/scale"
)
...
cachedDiscovery := discocache.NewMemCacheClient(client.Discovery())
restMapper := discovery.NewDeferredDiscoveryRESTMapper(
    cachedDiscovery,
    apimeta.InterfacesForUnstructured,
)
scaleKindResolver := scale.NewDiscoveryScaleKindResolver(
    client.Discovery(),
)
scaleClient, err := scale.NewForConfig(
    client, restMapper,
    dynamic.LegacyAPIPathResolverFunc,
    scaleKindResolver,
)
scale, err := scaleClient.Scales("default").Get(groupResource, "foo")

The returned scale object is generic and is exposed as the autoscaling/v1.Scale object. It is backed by an internal Scale type, with conversions defined to and from all the special Scale types in the API groups supporting scaling. We plan to extend this to CustomResources in 1.10.
If you’re implementing support for the scale subresource, we recommend that you expose the autoscaling/v1.Scale object.

Type-safe DeepCopy
Deeply copying an object formerly required a call to Scheme.Copy(Object) with the notable disadvantage of losing type safety. A typical piece of code from client-go version 5 required type casting:

newObj, err := runtime.NewScheme().Copy(node)
if err != nil {
    return fmt.Errorf("failed to copy node %v: %s", node, err)
}
newNode, ok := newObj.(*v1.Node)
if !ok {
    return fmt.Errorf("failed to type-assert node %v", newObj)
}

Thanks to k8s.io/code-generator, Copy has now been replaced by a type-safe DeepCopy method living on each object, allowing you to simplify code significantly both in terms of volume and API error surface:

newNode := node.DeepCopy()

No error handling is necessary: this call never fails. If and only if the node is nil does DeepCopy() return nil.
To copy runtime.Objects there is an additional DeepCopyObject() method in the runtime.Object interface.
With the old method gone for good, clients need to update their copy invocations accordingly.

Code generation and CustomResources
Using client-go’s dynamic client to access CustomResources is discouraged and superseded by type-safe code using the generators in k8s.io/code-generator. Check out the Deep Dive on the OpenShift blog to learn about using code generation with client-go.

Comment Blocks
You can now place tags in the comment block just above a type or function, or in the second block above. There is no distinction anymore between these two comment blocks. This used to be a source of subtle errors when using the generators:

// second block above
// +k8s:some-tag

// first block above
// +k8s:another-tag
type Foo struct {}

Custom Client Methods
You can now use extended tag definitions to create custom verbs. This lets you expand beyond the verbs defined by HTTP. This opens the door to higher levels of customization.
For example, this block leads to the generation of the method UpdateScale(s *autoscaling.Scale) (*autoscaling.Scale, error):

// genclient:method=UpdateScale,verb=update,subresource=scale,input=k8s.io/kubernetes/pkg/apis/autoscaling.Scale,result=k8s.io/kubernetes/pkg/apis/autoscaling.Scale

Resolving Golang Naming Conflicts
In more complex API groups it’s possible for Kinds, the group name, the Go package name, and the Go group alias name to conflict. This was not handled correctly prior to 1.9.
The following tags resolve naming conflicts and make the generated code prettier:

// +groupName=example2.example.com
// +groupGoName=SecondExample

These are usually in the doc.go file of an API package. The first is used as the CustomResource group name when RESTfully speaking to the API server using HTTP. The second is used in the generated Golang code (for example, in the clientset) to access the group version:

clientset.SecondExampleV1()

It’s finally possible to have dots in Go package names. In this section’s example, you would put the groupName snippet into the pkg/apis/example2.example.com directory of your project.

Example projects
Kubernetes 1.9 includes a number of example projects which can serve as a blueprint for your own projects:
k8s.io/sample-apiserver is a simple user-provided API server that is integrated into a cluster via API aggregation.
k8s.io/sample-controller is a full-featured controller (also called an operator) with shared informers and a workqueue to process created, changed or deleted objects. It is based on CustomResourceDefinitions and uses k8s.io/code-generator to generate deepcopy functions, typed clientsets, informers, and listers.

Vendoring
In order to update from the previous version 5 to version 6 of client-go, the library itself as well as certain third-party dependencies must be updated. Previously, this process had been tedious due to the fact that a lot of code got refactored or relocated within the existing package layout across releases. Fortunately, far less code had to move in the latest version, which should ease the upgrade procedure for most users.

State of the published repositories
In the past k8s.io/client-go, k8s.io/api, and k8s.io/apimachinery were updated infrequently. Tags (for example, v4.0.0) were created quite some time after the Kubernetes releases. With the 1.9 release we resumed running a nightly bot that updates all the repositories for public consumption, even before manual tagging. This includes the branches:
master
release-1.8 / release-5.0
release-1.9 / release-6.0
Kubernetes tags (for example, v1.9.1-beta1) are also applied automatically to the published repositories, prefixed with kubernetes- (for example, kubernetes-1.9.1-beta1). These tags have limited test coverage, but can be used by early adopters of client-go and the other libraries. Moreover, they help to vendor the correct version of k8s.io/api and k8s.io/apimachinery. Note that we only create a v6.0.3-like semantic versioning tag on k8s.io/client-go. The corresponding tag for k8s.io/api and k8s.io/apimachinery is kubernetes-1.9.3.
Also note that only these tags correspond to tested releases of Kubernetes. If you depend on the release branch, e.g., release-1.9, your client is running on unreleased Kubernetes code.

State of vendoring of client-go
In general, the list of which dependencies to vendor is automatically generated and written to the file Godeps/Godeps.json. Only the revisions listed there are tested. This means especially that we do not and cannot test the code-base against master branches of our dependencies. This puts us in the following situation depending on the used vendoring tool:
godep reads Godeps/Godeps.json by running godep restore from k8s.io/client-go in your GOPATH. Then use godep save to vendor in your project. godep will choose the correct versions from your GOPATH.
glide reads Godeps/Godeps.json automatically from its dependencies including from k8s.io/client-go, both on init and on update.
Hence, glide should be mostly automatic as long as there are no conflicts.
dep does not currently respect Godeps/Godeps.json in a consistent way, especially not on updates. It is crucial to specify client-go dependencies manually as constraints or overrides, also for non k8s.io/* dependencies. Without those, dep simply chooses the dependency master branches, which can cause problems as they are updated frequently.
The Kubernetes and golang/dep community are aware of the problems [issue #1124, issue #1236] and are working together on solutions. Until then special care must be taken.
Please see client-go’s INSTALL.md for more details.

Updating dependencies – golang/dep
Even with the deficiencies of golang/dep today, dep is slowly becoming the de-facto standard in the Go ecosystem. With the necessary care and the awareness of the missing features, dep can be (and is!) used successfully. Here’s a demonstration of how to update a project with client-go 5 to the latest version 6 using dep:
(If you are still running client-go version 4 and want to play it safe by not skipping a release, now is a good time to check out this excellent blog post describing how to upgrade to version 5, put together by our friends at Heptio.)
Before starting, it is important to understand that client-go depends on two other Kubernetes projects: k8s.io/apimachinery and k8s.io/api. In addition, if you are using CRDs, you probably also depend on k8s.io/apiextensions-apiserver for the CRD client. The first exposes lower-level API mechanics (such as schemes, serialization, and type conversion), the second holds API definitions, and the third provides APIs related to CustomResourceDefinitions. In order for client-go to operate correctly, it needs to have its companion libraries vendored in correspondingly matching versions. Each library repository provides a branch named release-<version> where <version> refers to a particular Kubernetes version; for client-go version 6, it is imperative to refer to the release-1.9 branch on each repository.
Assuming the latest version 5 patch release of client-go is vendored through dep, the Gopkg.toml manifest file should look something like this (possibly using branches instead of versions):

[[constraint]]
  name = "k8s.io/api"
  version = "kubernetes-1.8.1"

[[constraint]]
  name = "k8s.io/apimachinery"
  version = "kubernetes-1.8.1"

[[constraint]]
  name = "k8s.io/apiextensions-apiserver"
  version = "kubernetes-1.8.1"

[[constraint]]
  name = "k8s.io/client-go"
  version = "5.0.1"

Note that some of the libraries could be missing if they are not actually needed by the client.
Upgrading to client-go version 6 means bumping the version and tag identifiers as follows:

[[constraint]]
  name = "k8s.io/api"
  version = "kubernetes-1.9.0"

[[constraint]]
  name = "k8s.io/apimachinery"
  version = "kubernetes-1.9.0"

[[constraint]]
  name = "k8s.io/apiextensions-apiserver"
  version = "kubernetes-1.9.0"

[[constraint]]
  name = "k8s.io/client-go"
  version = "6.0.0"

The result of the upgrade can be found here.
A note of caution: dep cannot capture the complete set of dependencies in a reliable and reproducible fashion as described above. This means that for a 100% future-proof project you have to add constraints (or even overrides) to many other packages listed in client-go’s Godeps/Godeps.json. Be prepared to add them if something breaks.
We are working with the golang/dep community to make this an easier and smoother experience.
Finally, we need to tell dep to upgrade to the specified versions by executing dep ensure. If everything goes well, the output of the command invocation should be empty, with the only indication that it was successful being a number of updated files inside the vendor folder.
If you are using CRDs, you probably also use code-generation. The following block for Gopkg.toml will add the required code-generation packages to your project:

required = [
  "k8s.io/code-generator/cmd/client-gen",
  "k8s.io/code-generator/cmd/conversion-gen",
  "k8s.io/code-generator/cmd/deepcopy-gen",
  "k8s.io/code-generator/cmd/defaulter-gen",
  "k8s.io/code-generator/cmd/informer-gen",
  "k8s.io/code-generator/cmd/lister-gen",
]

[[constraint]]
  branch = "kubernetes-1.9.0"
  name = "k8s.io/code-generator"

Whether you would also like to prune unneeded packages (such as test files) through dep or commit the changes into the VCS at this point is up to you; from an upgrade perspective, you should now be ready to harness all the fancy new features that Kubernetes 1.9 brings through client-go.
Source: kubernetes

Extensible Admission is Beta

In this post we review a feature, available in the Kubernetes API server, that allows you to implement arbitrary control decisions and which has matured considerably in Kubernetes 1.9.The admission stage of API server processing is one of the most powerful tools for securing a Kubernetes cluster by restricting the objects that can be created, but it has always been limited to compiled code. In 1.9, we promoted webhooks for admission to beta, allowing you to leverage admission from outside the API server process.What is Admission?Admission is the phase of handling an API server request that happens before a resource is persisted, but after authorization. Admission gets access to the same information as authorization (user, URL, etc) and the complete body of an API request (for most requests).The admission phase is composed of individual plugins, each of which are narrowly focused and have semantic knowledge of what they are inspecting. Examples include: PodNodeSelector (influences scheduling decisions), PodSecurityPolicy (prevents escalating containers), and ResourceQuota (enforces resource allocation per namespace).Admission is split into two phases:Mutation, which allows modification of the body content itself as well as rejection of an API request.Validation, which allows introspection queries and rejection of an API request.An admission plugin can be in both phases, but all mutation happens before validation.MutationThe mutation phase of admission allows modification of the resource content before it is persisted. Because the same field can be mutated multiple times while in the admission chain, the order of the admission plugins in the mutation matters.One example of a mutating admission plugin is the `PodNodeSelector` plugin, which uses an annotation on a namespace `namespace.annotations[“scheduler.alpha.kubernetes.io/node-selector”]` to find a label selector and add it to the `pod.spec.nodeselector` field. This positively restricts which nodes the pods in a particular namespace can land on, as opposed to taints, which provide negative restriction (also with an admission plugin).ValidationThe validation phase of admission allows the enforcement of invariants on particular API resources. The validation phase runs after all mutators finish to ensure that the resource isn’t going to change again.One example of a validation admission plugin is also the `PodNodeSelector` plugin, which ensures that all pods’ `spec.nodeSelector` fields are constrained by the node selector restrictions on the namespace. Even if a mutating admission plugin tries to change the `spec.nodeSelector` field after the PodNodeSelector runs in the mutating chain, the PodNodeSelector in the validating chain prevents the API resource from being created because it fails validation.What are admission webhooks?Admission webhooks allow a Kubernetes installer or a cluster-admin to add mutating and validating admission plugins to the admission chain of `kube-apiserver` as well as any extensions apiserver based on k8s.io/apiserver 1.9, like metrics, service-catalog, or kube-projects, without recompiling them. Both kinds of admission webhooks run at the end of their respective chains and have the same powers and limitations as compiled admission plugins.What are they good for?Webhook admission plugins allow for mutation and validation of any resource on any API server, so the possible applications are vast. Some common use-cases include:Mutation of resources like pods. 
What are they good for?
Webhook admission plugins allow for mutation and validation of any resource on any API server, so the possible applications are vast. Some common use-cases include:
Mutation of resources like pods. Istio has talked about doing this to inject side-car containers into pods. You could also write a plugin which forcefully resolves image tags into image SHAs.
Name restrictions. On multi-tenant systems, reserving namespaces has emerged as a use-case.
Complex CustomResource validation. Because the entire object is visible, a clever admission plugin can perform complex validation on dependent fields (A requires B) and even external resources (compare to LimitRanges).
Security response. If you forced image tags into image SHAs, you could write an admission plugin that prevents certain SHAs from running.
Registration
Webhook admission plugins of both types are registered in the API, and all API servers (kube-apiserver and all extension API servers) share a common config for them. During the registration process, a webhook admission plugin describes:
How to connect to the webhook admission server
How to verify the webhook admission server (is it really the server I expect?)
Where to send the data at that server (which URL path)
Which resources and which HTTP verbs it will handle
What an API server should do on connection failures (for example, if the admission webhook server goes down)
 1  apiVersion: admissionregistration.k8s.io/v1beta1
 2  kind: ValidatingWebhookConfiguration
 3  metadata:
 4    name: namespacereservations.admission.online.openshift.io
 5  webhooks:
 6  - name: namespacereservations.admission.online.openshift.io
 7    clientConfig:
 8      service:
 9        namespace: default
10        name: kubernetes
11        path: /apis/admission.online.openshift.io/v1alpha1/namespacereservations
12      caBundle: KUBE_CA_HERE
13    rules:
14    - operations:
15      - CREATE
16      apiGroups:
17      - ""
18      apiVersions:
19      - "*"
20      resources:
21      - namespaces
22    failurePolicy: Fail
Line 6: `name` – the name for the webhook itself. For mutating webhooks, these are sorted to provide ordering.
Line 7: `clientConfig` – provides information about how to connect to, trust, and send data to the webhook admission server.
Line 13: `rules` – describe when an API server should call this admission plugin. In this case, only for creates of namespaces. You can specify any resource here, so specifying creates of `serviceinstances.servicecatalog.k8s.io` is also legal.
Line 22: `failurePolicy` – says what to do if the webhook admission server is unavailable. Choices are "Ignore" (fail open) or "Fail" (fail closed). Failing open makes for unpredictable behavior for all clients.
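For illustration only (not part of the original post), registering this configuration is an ordinary kubectl operation against the admissionregistration API; the file name below is a hypothetical placeholder for wherever you save the manifest (without the line numbers):
  # Illustrative sketch: register the webhook configuration and inspect it
  # ("namespace-reservation-webhook.yaml" is a hypothetical file name)
  kubectl apply -f namespace-reservation-webhook.yaml
  kubectl get validatingwebhookconfigurations
  kubectl describe validatingwebhookconfiguration namespacereservations.admission.online.openshift.io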
Authentication and trust
Because webhook admission plugins have a lot of power (remember, they get to see the API resource content of any request sent to them and, for mutating plugins, can modify it), it is important to consider:
How individual API servers verify their connection to the webhook admission server
How the webhook admission server authenticates precisely which API server is contacting it
Whether that particular API server has authorization to make the request
There are three major categories of connection:
From kube-apiserver or extension-apiservers to externally hosted admission webhooks (webhooks not hosted in the cluster)
From kube-apiserver to self-hosted admission webhooks
From extension-apiservers to self-hosted admission webhooks
To support these categories, the webhook admission plugins accept a kubeconfig file which describes how to connect to individual servers.
For interacting with externally hosted admission webhooks, there is really no alternative to configuring that file manually, since the authentication/authorization and access paths are owned by the server you're hooking to. For the self-hosted category, a cleverly built webhook admission server and topology can take advantage of the safe defaulting built into the admission plugin and have a secure, portable, zero-config topology that works from any API server.
Simple, secure, portable, zero-config topology
If you build your webhook admission server to also be an extension API server, it becomes possible to aggregate it as a normal API server. This has a number of advantages:
Your webhook becomes available like any other API under the default kube-apiserver service `kubernetes.default.svc` (e.g. https://kubernetes.default.svc/apis/admission.example.com/v1/mymutatingadmissionreviews). Among other benefits, you can test using `kubectl`.
Your webhook automatically (without any config) makes use of the in-cluster authentication and authorization provided by kube-apiserver. You can restrict access to your webhook with normal RBAC rules.
Your extension API servers and kube-apiserver automatically (without any config) make use of their in-cluster credentials to communicate with the webhook.
Extension API servers do not leak their service account token to your webhook because they go through kube-apiserver, which is a secure front proxy.
(Diagram of the aggregated, zero-config topology: https://drive.google.com/a/redhat.com/file/d/12nC9S2fWCbeX_P8nrmL6NgOSIha4HDNp)
In short: a secure topology makes use of all security mechanisms of API server aggregation and requires no additional configuration.
Other topologies are possible but require additional manual configuration as well as a lot of effort to create a secure setup, especially when extension API servers like service catalog come into play. The topology above is zero-config and portable to every Kubernetes cluster.
How do I write a webhook admission server?
Writing a full server complete with authentication and authorization can be intimidating. To make it easier, there are projects based on Kubernetes 1.9 that provide a library for building your webhook admission server in 200 lines or less. Take a look at the generic-admission-apiserver and the kubernetes-namespace-reservation projects for the library and an example of how to build your own secure and portable webhook admission server.
With the admission webhooks introduced in 1.9 we've made Kubernetes even more adaptable to your needs. We hope this work, driven by both Red Hat and Google, will enable many more workloads and support ecosystem components. (Istio is one example.) Now is a good time to give it a try! If you're interested in giving feedback or contributing to this area, join us in SIG API Machinery.
Quelle: kubernetes

Using Your Own Private Registry with Docker Enterprise Edition

One of the things that makes Docker really cool, particularly compared to using virtual machines, is how easy it is to move around Docker images. If you’ve already been using Docker, you’ve almost certainly pulled images from Docker Hub. Docker Hub is Docker’s cloud-based registry service and has tens of thousands of Docker images to choose from. If you’re developing your own software and creating your own Docker images though, you’ll want your own private Docker registry. This is particularly true if you have images with proprietary licenses, or if you have a complex continuous integration (CI) process for your build system.
Docker Enterprise Edition includes Docker Trusted Registry (DTR), a highly available registry with secure image management capabilities which was built to run either inside of your own data center or on your own cloud-based infrastructure. In the next few weeks, we’ll go over how DTR is a critical component of delivering a secure, repeatable and consistent software supply chain.  You can get started with it today through our free hosted demo or by downloading and installing the free 30-day trial. The steps to get started with your own installation are below.
Setting Up Docker Enterprise Edition
Docker Trusted Registry runs on top of Universal Control Plane (UCP), so to begin let’s install a single-node cluster. If you’ve already got your own UCP cluster, you can skip this step.  On your docker host, run the command:
  # Pull and install UCP
  docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock --name ucp docker/ucp:latest install
Once UCP is up and running, there are a few more things you should do before you install DTR. Open your browser and point it at the UCP instance you just installed; there should be a link to it at the end of your log output. If you already have a Docker Enterprise Edition license, go ahead and upload it through the UI. If you don't, visit the Docker Store and pick up a free, 30-day trial.
Once you've got licensing squared away, you're probably going to want to change the port UCP is running on. Since this is a single-node cluster, DTR and UCP will want to use the same TCP ports for their web services. If you've got a UCP swarm with more than one node, this probably isn't a problem, because DTR will look for a node which has the required free ports. Inside of UCP, click on Admin Settings -> Cluster Configuration and change the Controller Port to something like 5443.
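Before moving on, it can be worth double-checking that nothing is still listening on the ports DTR will need. This is only an illustrative check, assuming a Linux host with `ss` available:
  # Illustrative sketch: confirm ports 80 and 443 are free for DTR
  # after moving the UCP controller port to 5443 (no output means they are free)
  sudo ss -tlnp | grep -E ':(80|443)\b'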
Installing DTR
We’re going to install a simple, single-node instance of Docker Trusted Registry.  If you were setting up your DTR for production use, you would likely set things up in High Availability (HA) mode which would require a different type of storage such as a cloud-based object store, or NFS. Since this is a single-node instance, we’re going to stick with the default local storage.
First we need to pull the DTR bootstrap image. The bootstrap image is a tiny, self-contained installer which connects to UCP and sets up all of the containers, volumes, and logical networks required to get DTR up and running.
Use the command:
  # Pull and run the DTR bootstrapper
  docker run -it --rm docker/dtr:latest install --ucp-insecure-tls
NOTE:  Both UCP and DTR by default come with their own certs which won't be recognized by your system.  If you've set up UCP with TLS certs which are trusted by your system, you can omit the `--ucp-insecure-tls` option. Alternatively, you can use the `--ucp-ca` option which will let you specify the UCP CA certificate directly.
The DTR bootstrap image should then ask you for a couple of settings, such as the URL of your UCP installation and your UCP admin username and password.  It should only take a minute or two to pull all of the DTR images and set everything up.
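If you'd rather not answer the prompts interactively (for example, in a CI pipeline), the bootstrapper also accepts these settings as command-line options. The following is only an illustrative sketch; option names can vary between DTR releases, so confirm them with the bootstrapper's built-in help first:
  # Illustrative sketch: list the install options supported by your DTR version
  docker run --rm docker/dtr:latest install --help
  # ...then pass the UCP settings directly (values below are placeholders)
  docker run -it --rm docker/dtr:latest install \
    --ucp-url https://<Your UCP hostname>:5443 \
    --ucp-username admin \
    --ucp-password <Your UCP admin password> \
    --ucp-insecure-tls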
Keeping Everything Secure
Once everything is up and running, you’re ready to push and pull images to and from
the registry.  Before we do that step though, let’s set up our TLS certificates so that we can securely talk to DTR.
On Linux, we can use these commands (just make certain you change DTR_HOSTNAME to reflect the DTR we just set up):
  # Pull the CA certificate from DTR (you can use wget if curl is unavailable)
  DTR_HOSTNAME=<Your DTR hostname>
  curl -k https://${DTR_HOSTNAME}/ca > ${DTR_HOSTNAME}.crt
  sudo mkdir -p /etc/docker/certs.d/${DTR_HOSTNAME}
  sudo cp ${DTR_HOSTNAME}.crt /etc/docker/certs.d/${DTR_HOSTNAME}/ca.crt
  # Restart the docker daemon (use `sudo service docker restart` on Ubuntu 14.04)
  sudo systemctl restart docker
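As a quick, purely illustrative check that the trust setup works, you can hit the standard registry API root with the CA certificate you just downloaded; an HTTP 401 challenge (rather than a TLS error) means the certificate is accepted and the registry simply wants credentials:
  # Illustrative sketch: verify TLS against the Docker Registry v2 API root
  curl --cacert ${DTR_HOSTNAME}.crt -i https://${DTR_HOSTNAME}/v2/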
On Docker for Mac and Windows, we'll set up our client a little bit differently. Go into Settings -> Daemon and, in the Insecure Registries section, enter your DTR hostname. Click Apply; the Docker daemon will restart and you should be good to go.
Pushing and Pulling Images
We now need to set up a repository to hold an image. This is a little different from Docker Hub, which automatically creates a repository on push if one doesn't already exist. To create the repository, point your browser to https://<Your DTR hostname> and sign in with your admin credentials when prompted. If you added a license to UCP, DTR will have picked it up automatically. If not, make certain you upload your license now.
Once you're in, click on the `New Repository` button and create a new repository. We'll create a repo to hold Alpine Linux, so type `alpine` in the name field and click `Save` (it's labelled `Create` in DTR 2.5 and newer).
Now let’s go back to our shell and type the commands:
  # Pull the latest version of Alpine Linux
  docker pull alpine:latest
  # Sign in to your new DTR instance
  docker login <Your DTR hostname>
  # Tag Alpine to be able to push it to your DTR
  docker tag alpine:latest <Your DTR hostname>/admin/alpine:latest
  # Push the image to DTR
  docker push <Your DTR hostname>/admin/alpine:latest
And that’s it!  We just pulled a copy of the latest Alpine Linux, re-tagged it so that we could store it inside of DTR, and then pushed it to our private registry.  If you want to pull that image to a different Docker engine, set up your DTR certs as shown above, and issue the command:
   # Pull the image from DTR
   docker pull <Your DTR hostname>/admin/alpine:latest
DTR has a lot of great image management features built right in, such as image caching, mirroring, scanning, signing, and even automated supply chain policies.  We'll leave these for future blog posts where we can explore them in more detail.


To learn more about Docker Enterprise Edition:

Visit the website and view pricing
Read more about Docker EE customers and the benefits they’re seeing
Don’t have time to install and configure Docker EE? Register for the free hosted trial to test drive Docker EE in just a few minutes

The post Using Your Own Private Registry with Docker Enterprise Edition appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/