Docker Certified Containers From IBM

The Docker Certified Technology Program is designed for ecosystem partners and customers to recognize containers and plugins that excel in quality, collaborative support and compliance. Docker Certification gives enterprises an easy way to run trusted software and components in containers on Docker Enterprise with support from both Docker and the publisher.  
As cloud computing continues to transform every business and industry, developers at global enterprises and emerging startups alike are increasingly leveraging container technologies to accelerate how they build modern web, mobile and IoT applications.  
IBM has achieved certification of its flagship Db2 database, WebSphere Liberty application server and Security Access Manager products, now available on Docker Hub. These Certified Containers enable developers to accelerate building cloud-native applications for the Docker Enterprise platform. Developers can deploy these solutions from IBM to any on-premises infrastructure or public cloud, and they are designed to assist in modernizing traditional applications as they move from on-premises monoliths to hybrid cloud microservices.
These solutions are validated by both Docker and IBM and are integrated into a seamless support pipeline that provides customers the world-class support they have become accustomed to when working with Docker and IBM.
Check out the latest certified technology available from IBM on Docker Hub:

IBM Security Access Manager
IBM WebSphere Application Server Liberty
IBM Db2 Developer-C Edition  
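As a sketch of how one of these certified images might be run locally, the following Compose file starts a Db2 developer container. The image name, tag and environment variables here are illustrative assumptions, not the certified image's actual coordinates; check the image's Docker Hub listing for the exact name and required settings:

```yaml
# Hypothetical example: running an IBM Db2 developer image with Compose.
# The image name and environment variables below are assumptions; consult
# the certified image's Docker Hub page for its actual name and settings.
version: "3.7"
services:
  db2:
    image: ibmcom/db2            # placeholder for the certified Db2 image
    environment:
      LICENSE: accept              # accept the license terms
      DB2INST1_PASSWORD: changeme  # password for the db2inst1 instance user
    ports:
      - "50000:50000"              # default Db2 listener port
    privileged: true               # Db2 images typically need extra privileges
```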

Learn More:

Learn more about Docker Enterprise and get a free trial today
Check out a Docker event near you
Contact us with any questions
Apply as a Partner

 

The post Docker Certified Containers From IBM appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Speak at DockerCon San Francisco 2019 – Call for Papers is Open

 
Whether you missed DockerCon EU in Barcelona, or you already miss the fun, connections and learning you experienced there, you won’t have to wait long for the next one. DockerCon returns to San Francisco from April 29 through May 2, 2019, and the Call for Papers is now open. We are accepting talk submissions through January 18 at 11:59 p.m. PST.
Submit a Talk

Attending DockerCon is an awesome experience, but so is speaking at it – it’s a great way to get to know the community, share ideas and collaborate. Don’t be nervous about proposing your idea – no topic is too small or too big, and for some speakers, DockerCon is their first time speaking publicly. Don’t be intimidated: DockerCon attendees are all looking to level up their skills, connect with fellow container fans and go home inspired to implement new containerization initiatives. Here are some suggested topics from the conference committee:

“How To” type sessions for developers or IT teams
Case Studies
Technical deep dives into container and distributed systems related components
Cool New Apps built with Docker containers
The craziest thing you have containerized
Wild Card – anything and everything!
The impact of change – both for organizations and for ourselves as individuals and communities
Inspirational stories

Note that our attendees expect practical guidance, so vendor sales pitches will not be accepted.
Accepted speakers receive a complimentary conference pass, a speaker gift and an invitation to a networking reception. They also receive help preparing their session, access to an online recording of their talk and the opportunity to share their experience with the broader Docker community.
 


Desigual Transforms the In-Store Customer Experience with Docker Enterprise


 
At DockerCon Barcelona, we awarded Desigual with the first ever Rising Star Docker Customer Innovation Award. The Desigual team earned the award by building a brand new in-store shopping assistant application in just 5 months thanks to Docker Enterprise. The digital shopping assistant is already deployed at over 100 stores, and is being rolled out to all of Desigual’s 500-plus clothing stores worldwide in the coming months.
In this 2 minute video, Desigual gives the highlights of their story:

The Desigual team analyzed existing sales data and found that of lost in-store sales, 60 percent were because a particular size was out of stock, and 40 percent were because a product wasn’t available in the catalog.
They wanted to create a customer-first shopping experience that would stand out among retail clothing brands and help store associates recommend alternatives to customers. To do that, they needed to tie multiple elements together: Store point-of-sale (POS), the online catalog, mobile capability, and personal attention through the shopper profile.
Mathias Kriegel, IT Ops Lead and Cloud Architect, and Joan Anton Sances, Software Architect, discussed the project and why they selected Docker Enterprise in their presentation at DockerCon Barcelona 2018.
They selected Docker Enterprise because it gives them enterprise-grade support and has let them create a secure and reliable software pipeline. It also met their need for Swarm and Kubernetes support, along with multi-platform support since the application has both .NET and Java components.
Desigual has shifted software development to a DevOps mentality and can now roll out updates or new software much faster and more reliably, even with a sophisticated technology stack that includes TIBCO enterprise software, Java, .NET and Android mobile components.
While all of Docker’s customers are starting to change how their software development and deployment process works, Desigual’s innovative thinking and customer focus stands out. Congratulations to the Desigual team for winning our 2018 Rising Star Customer Innovation Award!
Check this out:

Watch a 2 minute video on how Desigual uses Docker Enterprise
Learn more about Docker Enterprise


KubeCon NA 2018 Wrap Up: Docker and the Kubernetes Community

 
 
Right on the heels of DockerCon Europe, the Docker team was excited to be part of KubeCon in Seattle last week for great conversations and collaboration with the Kubernetes community. In addition to our commitment to delivering a simple, integrated Kubernetes experience in our Docker Desktop and Docker Enterprise products, we’re also excited by our work with the community at the very foundation of Kubernetes – on projects like containerd and Notary/TUF – and by the chance to talk container standards with members of the Open Container Initiative (OCI). KubeCon is an opportunity for project maintainers to explain the status and roadmap of their projects, but also to meet face to face and collaborate with contributors to determine what is next for cloud native applications.
Giving Back to the Kubernetes Community
The Docker and Kubernetes communities have been working together closely since Kubernetes was announced at DockerCon 2014. In line with our commitment to make containerization technology like Kubernetes easier to use, a few weeks ago we open sourced Docker Compose on Kubernetes, a project that provides a simple way to define cloud native applications with a higher-level abstraction: the Docker Compose file. Docker Compose, a tool for defining and running multi-container Docker applications, is already used by millions of Docker users.
Docker is also working on solutions that address challenges that the broader Kubernetes community is facing as more applications embrace cloud-native design and more applications are distributed applications. As announced during DockerCon Europe, Docker and Microsoft are teaming up to deliver the Cloud Native Application Bundle (CNAB) specification – an open source, cloud-agnostic specification for packaging and running distributed applications. With CNAB, organizations can package Helm charts, Kubernetes YAML files and Docker Compose files in a single format that is easily shareable and distributed in Docker Hub and Docker Trusted Registry. The format also extends to things like Terraform and Ansible scripts and any number of other configuration and automation formats. Here is a video of CNAB in action.
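To make the format concrete, here is a sketch of what a minimal CNAB `bundle.json` might look like. The field names follow the draft specification at the time of writing and the names and image references are invented for illustration; see cnab.io for the authoritative schema:

```json
{
  "name": "helloworld",
  "version": "0.1.0",
  "description": "An illustrative CNAB bundle (all values here are invented)",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/helloworld-cnab:0.1.0"
    }
  ]
}
```

The invocation image is the piece that carries the installation logic (for example, applying Helm charts or Compose files), which is what lets a single bundle wrap several toolchains.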
In one of these very productive gatherings of maintainers at KubeCon, CNAB team members from Microsoft and Docker joined the OCI maintainer meeting to discuss CNAB joining OCI.

#KubeCon is not only about talks, but also a lot about face to face meetings between engineers: here @OCI_ORG team discussing @cnab_spec
— chanezon (@chanezon) December 13, 2018
At the Heart of Kubernetes: containerd and Notary
Containerd
It’s been about two years since containerd was donated to CNCF, and the project has had tremendous momentum in that short time. Today, containerd 1.2 is the industry-standard container runtime. It can be used with different technologies including gVisor, Kata Containers, Firecracker and Balena, and is extensible to various platforms through its plugin model. Containerd is being widely adopted by the major cloud platforms, including Alibaba, AWS, Azure, GKE and IBM Cloud. We’re excited by the project’s maturity and direction, and we’re looking forward to containerd’s graduation within CNCF.
You can watch the Intro and Deep Dive sessions on containerd here:

Intro to containerd
Deep Dive on containerd

One thing I love about KubeCon is the impromptu maintainer meetings after talks: after the Intro to containerd talk, we all went to the AWS office with containerd and Firecracker maintainers from Docker, AWS, Microsoft, Google Cloud, Alibaba Cloud and IBM to talk about Firecracker-containerd integration. The AWS team demoed their firecracker-containerd proof of concept to check that their architecture was sound.

Notary with TUF
Docker has implemented The Update Framework (TUF) in Notary as a way to sign and protect your container images and secure your environment with Docker Hub or Docker Trusted Registry. Notary is also a CNCF project with maintainers from many different companies, and the framework can be extended to other use cases.

Yesterday in @CloudNativeFdn board meeting one topic that came up was that security should be an important focus for cncf: good example today #KubeCon @justincormack & Pr Cappos from @nyutandon present TUF and Notary.
— chanezon (@chanezon) December 11, 2018
You can watch the session on Notary here:

Intro to TUF / Notary

Docker and Kubernetes: The Road Ahead
Docker is driving Kubernetes forward from two ends. We continue to invest and collaborate from the foundational side through projects like containerd and Notary and standards like OCI and CNAB. And we continue to drive adoption in the enterprise through easy-to-use tools like Docker Desktop and Docker Enterprise which package conformant Kubernetes distributions with Docker tooling. Docker will continue to collaborate with the Kubernetes community with a focus on making Kubernetes easier to use and accessible to a larger set of users.
Here are other sessions from KubeCon Seattle 2018 from the Docker team:

Building Container Images on Your Kubernetes Cluster with Knative Build with Gareth Rushgrove
How to Choose a Kubernetes Runtime with Justin Cormack
How Standards, Specifications and Runtimes Make for Better Containers with Patrick Chanezon (Docker), Chris Aniszczyk (The Linux Foundation/CNCF), Jeffrey Borek (IBM) and Rithu Leena John (CoreOS/Red Hat)

Had a great time presenting with these fine gentlemen @jeffborek @chanezon @cra at #kubecon2018!
— Rithu Leena John (@rithu_john) December 14, 2018

Securing Application Telemetry & Tracing with SPIFFE and Envoy with Sabree Blackmon

To learn more:

Docker Desktop is the easiest way to run Kubernetes on your laptop
Docker Enterprise is the easiest way to run Kubernetes in the Enterprise.


Introducing the New Docker Hub

Today, we’re excited to announce that Docker Store and Docker Cloud are now part of Docker Hub, providing a single experience for finding, storing and sharing container images. This means that:

Docker Certified and Verified Publisher Images are now available for discovery and download on Docker Hub
Docker Hub has a new user experience

 
Millions of individual users and more than a hundred thousand organizations use Docker Hub, Store and Cloud for their container content needs. We’ve designed this Docker Hub update to bring together the features that users of each product know and love the most, while addressing known Docker Hub requests around ease of use and repository and team management.
Here’s what’s new:
Repositories

View recently pushed tags and automated builds on your repository page
Pagination added to repository tags
Improved repository filtering when logged in on the Docker Hub home page

Organizations and Teams

As an organization Owner, see team permissions across all of your repositories at a glance.
Add existing Docker Hub users to a team via their email address (if you don’t remember their Docker ID)

New Automated Builds

Speed up builds using Build Caching
Add environment variables and run tests in your builds
Add automated builds to existing repositories

Note: For organizations, GitHub and Bitbucket account credentials will need to be re-linked to your organization to leverage the new automated builds. Existing automated builds will be migrated to the new system over the next few months. Learn more
 
Improved Container Image Search

Filter by Official, Publisher and Certified images, guaranteeing a level of quality in the Docker images listed by your search query
Filter by categories to quickly drill down to the type of image you’re looking for

 
Existing URLs will continue to work, and you’ll automatically be redirected where appropriate. No need to update any bookmarks.
Verified Publisher Images and Plugins
Verified Publisher Images are now available on Docker Hub. Similar to Official Images, these images have been vetted by Docker. While Docker maintains the Official Images library, Verified Publisher Images are provided by our third-party software vendors.
Verified Publisher Images and Plugins:

Are tested and supported on Docker Enterprise platform by verified publishers
Adhere to Docker’s container best practices
Pass a functional API test suite
Complete a vulnerability scanning assessment
Are provided by partners with a collaborative support relationship

Let us know what you think
We’ll be rolling out the new Docker Hub to users over time at https://hub.docker.com.
Have feedback on these updates? We’d love to hear from you. Let us know in this short survey.

Announcing the Docker Customer Innovation Awards

We are excited to announce the first annual Docker Customer Innovation Award winners at DockerCon Barcelona today! We launched the awards this year to recognize customers who stand out in their adoption of Docker Enterprise platform to drive transformation within IT and their business.
Thirty-eight companies were nominated, all of whom have spoken publicly about their containerization initiatives recently or plan to soon. Looking at so many excellent nominees, we realized there were really two different stories, so we created two award categories. In each category, we have a winner and three finalists.
 
 
Business Transformation
Customers in this category have developed company-wide initiatives aimed at transforming IT and their business in a significant way, with Docker Enterprise as a key part of it. They typically started their journey two or more years ago and have containerized multiple applications across the organization.

WINNER:

Societe Generale transformed how the bank develops its software by building a container platform for migrating thousands of its applications to the cloud.

 

FINALISTS:

Bosch built a global platform that enables developers to build and deliver new software solutions and updates at digital speed.

Liberty Mutual consolidated infrastructure and VMs significantly, paving the way for innovation and a multi-cloud future.

MetLife modernized hundreds of traditional applications, driving 66 percent cost savings, creating a self-funding model to fuel change and innovation, and cutting new-product time to market by two-thirds.

 
Rising Stars
Customers in this category are early in their containerization journey and have already leveraged their first project with Docker Enterprise as a catalyst to innovate their business — often creating new applications or services.

WINNER:

Desigual built a brand new in-store shopping experience app in less than 5 months to connect customers and associates, creating an outstanding brand and shopping experience.

FINALISTS:

BCG leverages Docker Enterprise to develop breakthrough analytics and machine-learning solutions for clients with BCG’s Source.ai offering.

Citizens Bank (Franklin American Mortgage) created a dedicated innovation team that sparked cultural change at a traditional mortgage company, allowing it to bring new products to market in weeks or months.
The Dutch Ministry of Justice evaluated Docker Enterprise as a way to accelerate application development, which helped spark an effort to modernize juvenile custodian services from whiteboards and sticky notes to a mobile app.

We want to give a big thanks to the winners and finalists, and to all of our remarkable customers who have started innovation journeys with Docker.
We’ve opened the nomination process for 2019 since we will be announcing winners at DockerCon 2019 on April 29-May 2. If you’re interested in submitting or want to nominate someone else, you can learn how here.


Simplifying Kubernetes with Docker Compose and Friends

Today we’re happy to announce we’re open sourcing our support for using Docker Compose on Kubernetes. We’ve had this capability in Docker Enterprise for a little while but as of today you will be able to use this on any Kubernetes cluster you choose.

Why do I need Compose if I already have Kubernetes?
The Kubernetes API is really quite large. There are more than 50 first-class objects in the latest release, from Pods and Deployments to ValidatingWebhookConfiguration and ResourceQuota. This can lead to verbose configuration, which then needs to be managed by you, the developer. Let’s look at a concrete example of that.
The Sock Shop is the canonical example of a microservices application. It consists of multiple services using different technologies and backends, all packaged up as Docker images. It also provides example configurations using different tools, including both Compose and raw Kubernetes configuration. Let’s have a look at the relative sizes of those configurations:
$ git clone https://github.com/microservices-demo/microservices-demo.git
$ cd microservices-demo/deployment/kubernetes/manifests
$ (Get-ChildItem -Recurse -File | Get-Content | Measure-Object -line).Lines
908
$ cd ../../docker-compose
$ (Get-Content docker-compose.yml | Measure-Object -line).Lines
174
Describing the exact same multi-service application using just the raw Kubernetes objects takes more than five times as much configuration as with Compose. That’s not just an upfront cost to author – it’s also an ongoing cost to maintain. The Kubernetes API is amazingly general purpose – it exposes low-level primitives for building the full range of distributed systems. Compose meanwhile isn’t an API but a high-level tool focused on developer productivity. That’s why combining them together makes sense. For the common case of a set of interconnected web services, Compose provides an abstraction that simplifies Kubernetes configuration. For everything else you can still drop down to the raw Kubernetes API primitives. Let’s see all that in action.
First we need to install the Compose on Kubernetes controller into your Kubernetes cluster. This controller uses the standard Kubernetes extension points to introduce the `Stack` object to the Kubernetes API. You can use any Kubernetes cluster you like, but if you don’t already have one available then remember that Docker Desktop comes with Kubernetes and the Compose controller built-in, and enabling it is as simple as ticking a box in the settings.
To install the controller manually on any Kubernetes cluster, see the full documentation for the current installation instructions.
Next let’s write a simple Compose file:
version: "3.7"
services:
  web:
    image: dockerdemos/lab-web
    ports:
      - "33000:80"
  words:
    image: dockerdemos/lab-words
    deploy:
      replicas: 3
      endpoint_mode: dnsrr
  db:
    image: dockerdemos/lab-db
We’ll then use the docker client to deploy this to a Kubernetes cluster running the controller:
$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml words
Waiting for the stack to be stable and running…
db: Ready       [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
web: Ready      [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
words: Ready    [pod status: 1/3 ready, 2/3 pending, 0/3 failed]
Stack words is stable and running
We can then interact with those objects via the Kubernetes API. Here you can see we’ve created the lower-level objects like Services, Pods, Deployments and ReplicaSets automatically:
$ kubectl get all
NAME                       READY     STATUS    RESTARTS   AGE
pod/db-85849797f6-bhpm8    1/1       Running   0          57s
pod/web-7974f485b7-j7nvt   1/1       Running   0          57s
pod/words-8fd6c974-44r4s   1/1       Running   0          57s
pod/words-8fd6c974-7c59p   1/1       Running   0          57s
pod/words-8fd6c974-zclh5   1/1       Running   0          57s

NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/db              ClusterIP      None            <none>        55555/TCP      57s
service/kubernetes      ClusterIP      10.96.0.1       <none>        443/TCP        4d
service/web             ClusterIP      None            <none>        55555/TCP      57s
service/web-published   LoadBalancer   10.102.236.49   localhost     33000:31910/TCP   57s
service/words           ClusterIP      None            <none>        55555/TCP      57s

NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/db      1         1         1            1           57s
deployment.apps/web     1         1         1            1           57s
deployment.apps/words   3         3         3            3           57s

NAME                             DESIRED   CURRENT   READY     AGE
replicaset.apps/db-85849797f6    1         1         1         57s
replicaset.apps/web-7974f485b7   1         1         1         57s
replicaset.apps/words-8fd6c974   3         3         3         57s
It’s important to note that this isn’t a one-time conversion. The Compose on Kubernetes API Server introduces the Stack resource to the Kubernetes API. So we can query and manage everything at the same level of abstraction as we’re building the application. That makes delving into the details above useful for understanding how things work, or debugging issues, but not required most of the time:
$ kubectl get stack
NAME      STATUS      PUBLISHED PORTS   PODS     AGE      
words     Running     33000             5/5      4m
Integration with other Kubernetes tools
Because Stack is now a native Kubernetes object, you can work with it using other Kubernetes tools. As an example, save the following as `stack.yaml`:
kind: Stack
apiVersion: compose.docker.com/v1beta2
metadata:
  name: hello
spec:
  services:
  - name: hello
    image: garethr/skaffold-example
    ports:
    - mode: ingress
      target: 5678
      published: 5678
      protocol: tcp
You can use a tool like Skaffold to have the image automatically rebuild and the Stack automatically redeployed whenever you change any of the details of your application. This makes for a great local inner-loop development experience. The following `skaffold.yaml` configuration file is all you need.
apiVersion: skaffold/v1alpha5
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
  - image: garethr/skaffold-example
  local:
    useBuildkit: true
deploy:
  kubectl:
    manifests:
      - stack.yaml
The future
We already have some thoughts about a Helm plugin to make describing your application with Compose and deploying with Helm as easy as possible. We have lots of other ideas for helping to simplify the developer experience of working with Kubernetes too, without losing any of the power of the platform. We also want to work with the wider Cloud Native community, so if you have ideas and suggestions please let us know.
Kubernetes is designed to be extended, and we hope you like what we’ve been able to release today. If you’re one of the millions of Compose users you can now more easily move to and manage your applications on Kubernetes. If you’re a Kubernetes user struggling with too much low-level configuration then give Compose a try. Let us know in the comments what you think, and head over to GitHub to try things out and even open your first PR:

Compose on Kubernetes controller


Introducing Docker Desktop Enterprise

Nearly 1.4 million developers use Docker Desktop every single day because it is the simplest and easiest way to do container-based development. Docker Desktop provides the Docker Engine with Swarm and Kubernetes orchestrators right on the desktop, all from a single install. While this is great for an individual user, in enterprise environments administrators often want to automate the Docker Desktop installation and ensure everyone on the development team has the same configuration, following enterprise requirements and creating applications based on architectural standards.
 

 
Docker Desktop Enterprise is a new desktop offering that is the easiest, fastest and most secure way to create and deliver production-ready containerized applications. Developers can work with frameworks and languages of their choice, while IT can securely configure, deploy and manage development environments that align to corporate standards and practices. This enables organizations to rapidly deliver containerized applications from development to production.
Enterprise Manageability That Helps Accelerate Time-to-Production
Docker Desktop Enterprise provides a secure way to configure, deploy and manage developer environments while enforcing safe development standards that align to corporate policies and practices. IT teams and application architects can present developers with application templates designed specifically for their team, to bootstrap and standardize the development process and provide a consistent environment all the way to production.
Key new features for IT:

Packaged as standard MSI (Win) and PKG (Mac) distribution files that work with existing endpoint management tools with lockable settings via policy files
Present developers with customized and approved application templates, ready for coding

Enterprise Deployment & Configuration Packaging
Docker Desktop Enterprise enables IT desktop admins to deploy and manage Docker Desktop Enterprise across distributed developer teams with their preferred endpoint management tools using standard MSI and PKG files. No manual intervention or extra configuration from developers is required and desktop administrators can enable or disable particular settings within Docker Desktop Enterprise to meet corporate standards and provide the best developer experience.

Application Templates

Application architects can provide developers with trusted, customized application templates through the Application Designer interface in Docker Desktop Enterprise, helping to improve reliability and security by ensuring developers start from approved designs. Together, application teams and IT can implement consistent security and development practices across the entire software supply chain, from the developers’ desktops all the way to production.
Increase Developer Productivity and Ship Production-ready Containerized Applications
For developers, Docker Desktop Enterprise is the easiest and fastest way to build production-ready containerized applications working with frameworks and languages of choice and targeting every platform. Developers can rapidly innovate by leveraging company-provided application templates that instantly replicate production-approved application configurations on the local desktop.
Key new features for developers:

Configurable version packs instantly replicate production environment configurations on the local desktop
Application Designer interface allows for template-based workflows for creating containerized applications – no Docker CLI commands are required to get started

Configurable Version Packs

Docker Desktop has Docker Engine and Kubernetes built-in and with the addition of swappable version packs you can now synchronize your desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. You get the assurance that your application will not break due to incompatible API calls, and if you have multiple downstream environments running different versions of the APIs, you can quickly change your desktop configuration with the click of a button.

Application Designer       

The Application Designer is a new workflow that provides production-ready application and service templates that let you get coding quickly, with the reassurance that your application meets architectural standards. And even if you’ve never launched a container before, the Application Designer interface provides the foundational container artifacts and your organization’s skeleton code, getting you started with containers in minutes. Plus, Docker Desktop Enterprise integrates with your choice of development tools, whether you prefer an IDE or a text editor and command line interfaces.
 
The Docker Desktop Products
Docker Desktop Enterprise is a new addition to our desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. Docker Desktop is the simplest way to get started with container-based development on both Windows 10 and macOS, with a set of features now available for the enterprise:

 
To learn more about Docker Desktop Enterprise:

Sign up to learn more about Docker Desktop Enterprise as we approach general availability
Watch the livestreams of the DockerCon EU keynotes, Tuesday from 09:00–11:00 CET and Wednesday from 09:30–11:00 CET. (Replays will also be available)
Download Docker Desktop Community and build your first containerized application in minutes [ Windows | macOS ]


The post Introducing Docker Desktop Enterprise appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker App and CNAB

Docker App is a new tool we spoke briefly about back at DockerCon US 2018. We’ve been working on `docker-app` to make container applications simpler to share and easier to manage across different teams and between different environments. We also open sourced it, so you can already download Docker App from GitHub at https://github.com/docker/app.
In talking to others about problems they’ve experienced sharing and collaborating on the broad area we call “applications” we came to a realisation: it’s a more general problem that others have been working on too. That’s why we’re happy to collaborate with Microsoft on the new Cloud Native Application Bundle (CNAB) specification.

Today’s cloud native applications typically use different technologies, each with their own toolchain. Maybe you’re using ARM templates and Helm charts, or CloudFormation and Compose, or Terraform and Ansible. There is no single solution in the market for defining and packaging these multi-service, multi-format distributed applications.
CNAB is an open source, cloud-agnostic specification for packaging and running distributed applications that aims to solve some of these problems. CNAB unifies the management of multi-service, distributed applications across different toolchains into a single all-in-one packaging format.
The draft specification is available at cnab.io and we’re actively looking both for folks interested in contributing to the spec itself and for people interested in building tools around the specification. The latest release of Docker App is one such tool that implements the current CNAB spec. That means it can be used both to build CNAB bundles for Compose (which can then be used with any other CNAB client), and to install, upgrade and uninstall any other CNAB bundle.
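To give a sense of what the packaging format itself looks like, here is a minimal sketch of a CNAB `bundle.json` based on the current draft specification. The image name and parameter are illustrative placeholders, not part of any published bundle, and field details may change as the spec evolves:

```json
{
  "name": "monitoring",
  "version": "0.1.0",
  "description": "A basic prometheus stack",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/monitoring-invocation:0.1.0"
    }
  ],
  "parameters": {
    "port": {
      "type": "int",
      "defaultValue": 9090,
      "destination": {
        "env": "PORT"
      }
    }
  }
}
```

The invocation image is a container that knows how to install, upgrade and uninstall the application; a CNAB client runs it with the requested action, passing the parameter values through at runtime.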
Sharing CNAB bundles on Docker Hub
One of the limitations of standalone Compose files is that they cannot be shared on Docker Hub or Docker Trusted Registry. Docker App solves this issue too. Here’s a simple Docker application that launches a basic Prometheus stack:
version: 0.1.0
name: monitoring
description: A basic prometheus stack
maintainers:
  - name: Gareth Rushgrove
    email: garethr@docker.com

---
version: '3.7'

services:
  prometheus:
    image: prom/prometheus:${versions.prometheus}
    ports:
      - ${ports.prometheus}:9090

  alertmanager:
    image: prom/alertmanager:${versions.alertmanager}
    ports:
      - ${ports.alertmanager}:9093

---
ports:
  prometheus: 9090
  alertmanager: 9093
versions:
  prometheus: latest
  alertmanager: latest
With that saved as `monitoring.dockerapp` we can now build a CNAB and share that on Docker Hub.
$ docker-app push --namespace <your-namespace>
Now on another machine we can still interact with the shared application. For instance, let’s use the `inspect` command to get information about our application:
$ docker-app inspect <your-namespace>/monitoring:0.1.0
monitoring 0.1.0

Maintained by: Gareth Rushgrove <garethr@docker.com>

A basic prometheus stack

Services (2) Replicas Ports Image
------------ -------- ----- -----
prometheus   1        9090  prom/prometheus:latest
alertmanager 1        9093  prom/alertmanager:latest

Parameters (4)        Value
--------------        -----
ports.alertmanager    9093
ports.prometheus      9090
versions.alertmanager latest
versions.prometheus   latest
All the information from the Compose file is stored with the CNAB on Docker Hub, and as you can see it’s also parameterized, so values can be substituted at runtime to fit the deployment requirements. We can install the application directly from Docker Hub as well:
$ docker-app install <your-namespace>/monitoring:0.1.0 --set ports.alertmanager=9095
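Because the bundle is a full CNAB package rather than just a Compose file, `docker-app` can also drive the rest of the application lifecycle. As a sketch (assuming an installation named `monitoring` and an environment with a Docker endpoint available; check `docker-app --help` in your release for the exact commands and flags):

```shell
# Check the status of a previously installed application
docker-app status monitoring

# Change a parameter on a running installation
docker-app upgrade monitoring --set ports.alertmanager=9096

# Remove the application again
docker-app uninstall monitoring
```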

Installing a Helm chart using Docker App
One question that has come up in the conversations we’ve had so far is how `docker-app`, and now CNAB, relate to Helm charts. The good news is that they all work great together! Here is an example using `docker-app` to install a CNAB bundle that packages a Helm chart. The following example uses the `hellohelm` example from the CNAB example bundles.
$ docker-app install -c local bundle.json
Do install for hellohelm
helm install --namespace hellohelm -n hellohelm /cnab/app/charts/alpine
NAME:   hellohelm
LAST DEPLOYED: Wed Nov 28 13:58:22 2018
NAMESPACE: hellohelm
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod
NAME              AGE
hellohelm-alpine  0s
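Under the hood, a CNAB bundle like `hellohelm` is driven by an invocation image whose entrypoint is `/cnab/app/run`. The client passes the requested action and installation name in environment variables, and the run tool maps them onto `helm` commands. Here is a simplified sketch of such a run script; the chart path and helm flags mirror the example output above, but the actual script in the example bundle may differ:

```shell
#!/bin/sh
set -e

# CNAB clients set these environment variables when running the invocation image
action=$CNAB_ACTION
name=$CNAB_INSTALLATION_NAME

echo "Do $action for $name"

# Map the CNAB action onto the corresponding Helm (v2) command
case "$action" in
  install)
    helm install --namespace "$name" -n "$name" /cnab/app/charts/alpine
    ;;
  upgrade)
    helm upgrade "$name" /cnab/app/charts/alpine
    ;;
  uninstall)
    helm delete --purge "$name"
    ;;
esac
```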

Next steps
If you’re interested in the technical details of the CNAB specification, either to see how it works under the hood or to get involved in the specification work or in building tools against it, you can find the spec at cnab.io.
If you’d like to get started building applications with Docker App you can download the latest release from github.com/docker/app and check out some of the examples provided in the repository.


The post Docker App and CNAB appeared first on Docker Blog.

Announcing Cloud Native Application Bundle (CNAB)

As more organizations pursue cloud-native applications and infrastructures for creating modern software environments, it has become clear that there is no single solution in the market for defining and packaging these multi-service, multi-format distributed applications. Real-world applications can now span on-premises infrastructure and cloud-based services, requiring multiple tools like Terraform for the infrastructure, Helm charts and Docker Compose files for the applications, and CloudFormation or ARM templates for the cloud services. Each of these needs to be managed separately.
To address this problem, Microsoft, in collaboration with Docker, is announcing the Cloud Native Application Bundle (CNAB): an open source, cloud-agnostic specification for packaging and running distributed applications. CNAB unifies the management of multi-service, distributed applications across different toolchains into a single all-in-one packaging format. The CNAB specification lets you define resources that can be deployed to any combination of runtime environments and tooling including Docker Engine, Kubernetes, Helm, automation tools and cloud services.
Docker is the first to implement CNAB for containerized applications and will be expanding it across the Docker platform to support new application development, deployment and lifecycle management. Initially, CNAB support will be released as part of our docker-app experimental tool for building, packaging and managing cloud-native applications. Docker lets you package CNAB bundles as Docker images, so you can distribute and share them through Docker registry tools including Docker Hub and Docker Trusted Registry. Additionally, Docker will enable organizations to deploy and manage CNAB-based applications in Docker Enterprise in the coming months.
The draft specification is available at cnab.io, and we’re actively looking for contributors to the spec itself and for people interested in building tools around it. Docker will be contributing to the CNAB specification.
The post Announcing Cloud Native Application Bundle (CNAB) appeared first on Docker Blog.