Tail Kubernetes with Stern

Editor’s note: today’s post is by Antti Kupila, Software Engineer at Wercker, about building a tool to tail multiple pods and containers on Kubernetes.

We love Kubernetes here at Wercker and build all our infrastructure on top of it. When deploying anything you need good visibility into what’s going on, and logs are a first view into the inner workings of your application. Good old tail -f has been around for a long time, and Kubernetes has this too, built right into kubectl.

I should say that tail is by no means the tool to use for debugging issues; instead you should feed the logs into a more persistent place, such as Elasticsearch. However, there’s still a place for tail when you need to quickly debug something, or perhaps you don’t have persistent logging set up yet (such as when developing an app in Minikube).

Multiple Pods
Kubernetes has the concept of Replication Controllers, which ensure that n pods are running at the same time. This allows rolling updates and redundancy. Considering they’re quite easy to set up, there’s really no reason not to do so.

However, now there are multiple pods running, and they all have a unique id. One issue here is that you’ll need to know the exact pod id (kubectl get pods), but that changes every time a pod is created, so you’ll need to look it up every time. Another consideration is the fact that Kubernetes load balances the traffic, so you won’t know which pod the request ends up at. If you’re tailing pod A but the traffic ends up at pod B, you’ll miss what happened.

Let’s say we have a pod called service with 3 replicas. Here’s what that would look like:

$ kubectl get pods                          # get pods to find pod ids
$ kubectl logs -f service-1786497219-2rbt1  # pod 1
$ kubectl logs -f service-1786497219-8kfbp  # pod 2
$ kubectl logs -f service-1786497219-lttxd  # pod 3

Multiple containers
We’re heavy users of gRPC for internal services and expose the gRPC endpoints over REST using gRPC Gateway.
Typically we have the server and gateway living as two containers in the same pod (the same binary, with the mode set by a CLI flag). The gateway talks to the server in the same pod, and both ports are exposed to Kubernetes. For internal services we can talk directly to the gRPC endpoint, while our website communicates using standard REST to the gateway.

This poses a problem, though: not only do we now have multiple pods, but we also have multiple containers within each pod. When this is the case, the built-in logging of kubectl requires you to specify which container you want logs from.

If we have 3 replicas of a pod and 2 containers in the pod, you’ll need 6 instances of kubectl logs -f <pod id> <container id>. We work with big monitors, but this quickly gets out of hand…

If our service pod has a server and a gateway container, we’d be looking at something like this:

$ kubectl get pods                                  # get pods to find pod ids
$ kubectl describe pod service-1786497219-2rbt1     # get containers in pod
$ kubectl logs -f service-1786497219-2rbt1 server   # pod 1
$ kubectl logs -f service-1786497219-2rbt1 gateway  # pod 1
$ kubectl logs -f service-1786497219-8kfbp server   # pod 2
$ kubectl logs -f service-1786497219-8kfbp gateway  # pod 2
$ kubectl logs -f service-1786497219-lttxd server   # pod 3
$ kubectl logs -f service-1786497219-lttxd gateway  # pod 3

Stern
To get around this we built Stern. It’s a super simple utility that allows you to specify both the pod id and the container id as regular expressions. Any match will be followed, and the output is multiplexed together, prefixed with the pod and container id, and color-coded for human consumption (colors are stripped if piping to a file).

Here’s how the service example would look:

$ stern service

This will match any pod containing the word service and listen to all containers within it.
If you only want to see traffic to the server container, you could do stern --container server service and it’ll stream the logs of all the server containers from the 3 pods.

The output would look something like this:

$ stern service
+ service-1786497219-2rbt1 › server
+ service-1786497219-2rbt1 › gateway
+ service-1786497219-8kfbp › server
+ service-1786497219-8kfbp › gateway
+ service-1786497219-lttxd › server
+ service-1786497219-lttxd › gateway
service-1786497219-8kfbp server Log message from server
service-1786497219-2rbt1 gateway Log message from gateway
service-1786497219-8kfbp gateway Log message from gateway
service-1786497219-lttxd gateway Log message from gateway
service-1786497219-lttxd server Log message from server
service-1786497219-2rbt1 server Log message from server

In addition, if a pod is killed and recreated during a deployment, Stern will stop listening to the old pod and automatically hook into the new one. There’s no more need to figure out the id of that newly created pod.

Configuration options
Stern was deliberately designed to be minimal, so there’s not much to it. However, there are still a couple of configuration options we can highlight here. They’re very similar to the ones built into kubectl, so if you’re familiar with that you should feel right at home.

--timestamps adds the timestamp to each line
--since shows log entries since a certain time (for instance --since 15m)
--kube-config allows you to specify another Kubernetes config. Defaults to ~/.kube/config
--namespace allows you to limit the search to a certain namespace

Run stern --help for all options.

Examples
Tail the gateway container running inside of the envvars pod on staging:
    stern --context staging --container gateway envvars
Show auth activity from 15min ago with timestamps:
    stern -t --since 15m auth
Follow the development of some-new-feature in minikube:
    stern --context minikube some-new-feature
View pods from another namespace:
    stern --namespace kube-system kubernetes-dashboard

Get Stern
Stern is open source and available on GitHub; we’d love your contributions or ideas. If you don’t want to build from source, you can also download a precompiled binary from GitHub releases.
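At its core, Stern just matches pod and container names against regular expressions and multiplexes the matching log streams, prefixing each line with its source. The filtering behavior can be sketched in a few lines of Python (a toy illustration only, not Stern’s actual Go implementation; the data below is made up to mirror the example above):

```python
import re

def multiplex(streams, pod_query, container_query=".*"):
    """Toy version of Stern's filtering: `streams` maps (pod, container)
    pairs to lists of log lines. Any pod/container matching the regexes
    is followed; each output line is prefixed with its source so the
    interleaved logs stay readable."""
    pod_re = re.compile(pod_query)
    container_re = re.compile(container_query)
    out = []
    for (pod, container), lines in streams.items():
        if pod_re.search(pod) and container_re.search(container):
            for line in lines:
                out.append(f"{pod} {container} {line}")
    return out

streams = {
    ("service-1786497219-2rbt1", "server"): ["Log message from server"],
    ("service-1786497219-2rbt1", "gateway"): ["Log message from gateway"],
    ("unrelated-pod", "web"): ["ignored"],
}

# `stern service` -> everything from pods matching "service"
print(multiplex(streams, "service"))
# `stern --container server service` -> server containers only
print(multiplex(streams, "service", "server"))
```

The real tool does this against live Kubernetes log streams and keeps re-evaluating the regexes as pods come and go, which is why it picks up recreated pods automatically.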
Quelle: kubernetes

5 Tales from the Docker Crypt

(Cue the Halloween music)
Welcome to my crypt. This is the crypt keeper speaking and I’ll be your spirit guide on your journey through the dangerous and frightening world of IT applications. Today you will learn about 5 spooky application stories covering everything from cobweb covered legacy processes to shattered CI/CD pipelines. As these stories unfold, you will hear  how Docker helped banish cost, complexity and chaos.
Tale 1 – “Demo Demons”
Splunk was on a mission to enable their employees and partners across the globe to deliver demos of their software regardless of where they’re located in the world, and have each demo function consistently. These business-critical demos include everything from Splunk security, to web analytics and IT service intelligence. This vision proved to be quite complex to execute: their SEs would be in customer meetings, and their demos would sometimes fail. They needed to ensure that each of the 30 production demos within their Splunk Oxygen demo platform could live forever in eternal greatness.
To ensure their demos work smoothly with their customers, Splunk uses Docker Datacenter, our on-premises solution that brings container management and deployment services to the enterprise via an integrated platform. Images are stored within the on-premises Docker Trusted Registry and are connected to their Active Directory server so that users have the correct role-based access to the images. The images are accessible to authenticated users even outside the corporate firewall. Their sales engineers can now pull the images from DTR and give the demo offline, ensuring that anyone who goes out and represents the Splunk brand can demo without demise.
Tale 2 – “Monster Maintenance”
Cornell University’s IT team was spending too many resources taking care of their installation of Confluence. The team spent 1,770 hours maintaining applications over a six-month period and needed immutable infrastructure that could be easily torn down once processes were complete. Portability across their application lifecycle, which included everything from development to production, was also a challenge.
With a Docker Datacenter (DDC) commercial subscription from Docker, they now host their Docker images in a central location, allowing multiple organizations to access them securely. Docker Trusted Registry provides high availability via DTR replicas, ensuring that their dockerized apps are continuously available, even if a node fails. With Docker, they experience a 10X reduction in maintenance time. Additionally, the portability of Docker containers helps their workloads move across multiple environments, streamlining their application development and deployment processes. The team is now able to deploy applications 13X faster than in the past by leveraging reusable architecture patterns and simplified build and deployment processes.
Tale 3 – “Managing Menacing Monoliths and Microservices!”
SA Home Loans, a mortgage firm located in South Africa, was experiencing slow application deployment speeds. It took them 2 weeks just to get their newly developed applications over to their testing environment, slowing innovation. These issues extended to production as well. Their main home loan servicing software, a mixture of monolithic Windows services and IIS applications, was complex and difficult to update, placing a strain on the business. Even scarier was that when they deployed new features or fixes, they didn’t have an easy or reliable rollback plan if something went wrong (no blue/green deployment). In addition, their company decided to adopt a microservices architecture. They soon realized that upon completion of this project they’d have over 50 separate services across their Dockerized nodes in production! Orchestration now presented itself as a new challenge.
To solve these issues, SA Home Loans trusts in Docker Datacenter. SA Home Loans can now deploy apps 30 times more often! The solution also provides the production-ready container orchestration they were looking for: since DDC has swarm embedded within it, it shares the Docker Engine APIs and is one less complex thing to learn. Docker Datacenter gives the ops team ease of use and a familiar front end.
 
Tale 4 – “Unearthly Labor”
USDA’s legacy website platform consisted of seven manually managed monolithic application servers that implemented technologies using traditional labor-intensive techniques that required expensive resources. Their systems administrators had to SSH into individual systems deploying updates and configuration one-by-one. USDA discovered that this approach lacked the flexibility and scalability to provide the services necessary for supporting their large number of diverse apps built with PHP, Ruby, and Java – namely Drupal, Jekyll, and Jira. A different approach would be required to fulfill the shared platform goals of USDA.
USDA now uses Docker, which has expedited their project and modernized their entire development process. In just 5 weeks, they launched four government websites on their new dockerized platform to production. Later, an additional four websites were launched, including one for the First Lady, Michelle Obama, without any additional hardware costs. By using Docker, the USDA saved upwards of $150,000 in technology infrastructure costs alone. Because they could leverage a shared infrastructure model, they were also able to reduce labor costs. Using Docker provided the USDA with the agility needed to develop, test, secure, and deploy modern software in a high-security federal government datacenter environment.
Tale 5 – “An Apparition of CI/CD”
Healthdirect dubbed their original application development process “anti CI/CD” because it was broken, and it was difficult to create a secure end-to-end CI/CD pipeline. They had a CI/CD process for the infrastructure team, but were unable to repeat the process across multiple business units. The team wanted repeatability but lacked the ability to deploy their apps with 100% hands-off automation.
Today Healthdirect is using Docker Datacenter. Now their developers are empowered in the release process, and code developed locally ships to production without changes. With Docker, Healthdirect was able to innovate faster and deploy their applications to production with ease.
So there they are: 5 spooky tales for you on this Halloween day. To learn more about Docker Datacenter, check out this demo.
Now, be gone from my crypt. It’s time for me to retire back to my coffin.
Oh and one more thing….Happy Halloween!!
For more resources:

Hear from Docker customers
Learn more about Docker Datacenter
Sign up for your 30 day free evaluation of Docker Datacenter

 


The post 5 Tales from the Docker Crypt appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Considerations for Running Docker for Windows Server 2016 with Hyper-V VMs

We often get asked at Docker, “Where should I run my application? On bare metal, virtual or cloud?” The beauty of Docker is that you can run an app anywhere, so we usually answer this question with “It depends.” Not what you were looking for, right?
To answer this, you first need to consider which infrastructure makes the most sense for your application architecture and business goals. We get this question so often that our technical evangelist, Mike Coleman has written a few blogs to provide some guidance:

To Use Physical Or To Use Virtual: That Is The Container Deployment Question
So, When Do You Use A Container Or VM?

During our recent webinar, titled “Docker for Windows Server 2016”, this question came up a lot: specifically, what to consider when deploying a Windows Server 2016 application in a Hyper-V VM with Docker, and how it works. First, you’ll need to understand the differences between Windows Server containers, Hyper-V containers, and Hyper-V VMs before considering how they work together.
A Hyper-V container is a Windows Server container running inside a stripped down Hyper-V VM that is only instantiated for containers.

This provides additional kernel isolation and separation from the host OS for the containerized application. A Hyper-V container automatically creates a Hyper-V VM from the application’s base image, and that VM includes the required application binaries and libraries inside the Windows container. For more information on Windows containers, read our blog. Whether your application runs as a Windows Server container or as a Hyper-V container is a runtime decision; the additional isolation is a good option for multi-tenant environments. No changes are required to the Dockerfile or image: the same image can be run in either mode.
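Because the isolation mode is a runtime decision, only the invocation changes, never the image. A small Python helper makes the point (an illustrative sketch; the image name is made up, but `--isolation=hyperv` is Docker’s real flag on Windows):

```python
def docker_run_command(image, hyperv_isolation=False):
    """Build a `docker run` invocation for the same image in either mode.
    Isolation is chosen at runtime: only the flag differs, never the image
    or its Dockerfile. (--isolation=hyperv is Docker's flag on Windows.)"""
    cmd = ["docker", "run"]
    if hyperv_isolation:
        cmd.append("--isolation=hyperv")
    cmd.append(image)
    return cmd

# Same (hypothetical) image, two isolation modes:
print(" ".join(docker_run_command("my-iis-app")))
print(" ".join(docker_run_command("my-iis-app", hyperv_isolation=True)))
```

Run as a Windows Server container by default, or with the extra Hyper-V kernel boundary when a tenant requires it, without rebuilding anything.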
Here are the top Hyper-V container questions, with answers:
Q: I thought that containers do not need a hypervisor?
A: Correct, but since a Hyper-V container packages the same container image with its own dedicated kernel it ensures tighter isolation in multi-tenant environments which may be a business or application requirement for specific Windows Server 2016 applications.
Q: ­Do you need a hypervisor layer before the OS in both Hyper-V and Docker for Windows Server containers?
A: The hypervisor is optional. With Windows Server containers, isolation is achieved not with a hypervisor but with process isolation plus filesystem and registry sandboxing.
Q: Can Hyper-V containers be managed from Hyper-V Manager, in the same way that VMs are? (i.e. turned on/off, check memory usage, etc.)
A: While Hyper-V is the runtime technology powering Hyper-V isolation, Hyper-V containers are not VMs; they neither appear as a Hyper-V resource nor can they be managed with classic Hyper-V tools like Hyper-V Manager. Hyper-V containers are only executed at runtime by the Docker Engine.
Q: Can you run Windows Server container and Hyper-V Containers running Linux workloads on the same host?
A: Yes. You can run a Hyper-V VM with a Linux OS on a physical host running Windows Server.  Inside the VM, you can run containers built with Linux.

Next week we’ll bring you the next blog in our Windows Server 2016 Q&A series – Top questions about Docker for SQL Server Express. See you then.
For more resources:

Learn more: www.docker.com/microsoft
Read the blog: Webinar Recap: Docker For Windows Server 2016
Learn how to get started with Docker for Windows Server 2016
Read the blog to get started shifting a legacy Windows virtual machine to a Windows Container


The post Considerations for Running Docker for Windows Server 2016 with Hyper-V VMs appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker Weekly Roundup | October 23, 2016

 

The last week of October 2016 is over and you know what that means: another news roundup. Highlights include Windows workloads with Image2Docker, part four of the SwarmKit series, and a Docker InfraKit test-drive! As we begin a new week, let’s recap our five top stories:

Windows Workloads with Image2Docker – a community-supported and community-designed project to demonstrate the ease of creating Windows containers from existing servers. Interested parties are encouraged to fork it, play with it and contribute pull requests back to the community.

SwarmKit – Part 4 – a tutorial series on Docker SwarmKit led by Gabriel Schenker. Part four of the series explains how to create a swarm in the cloud and run a sample application on it.

Docker Volumes – instructions on how to make sure posts and images stay permanent via Docker volumes, even through an upgrade to a container image, as showcased by Alex Ellis.

InfraKit Test-Drive – a detailed illustration of a sample Docker image created to demonstrate InfraKit’s self-healing operation, by Ajeet Raina.

Testing Swarm on Raspberry Pi – a screencast of Docker swarm and how it’s able to recover from the failure of an Ethernet interface. Author Mathia Renner demonstrates swarm’s ability to recover flawlessly from a reboot or crash.


The post Docker Weekly Roundup | October 23, 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Containerize Windows workloads with Image2Docker

Yesterday, we held a packed webinar on using the Image2Docker tool that prototypes shifting a legacy Windows virtual machine to a Windows Container Dockerfile.
Image2Docker is an open source, community-generated PowerShell module that searches for common components of a Windows Server VM and generates a Dockerfile to match. Originally created by Docker Captain Trevor Sullivan, it is now an open source tool hosted in our GitHub repository. Currently there is discovery of components such as IIS, Apache, SQL Server and more. As input it supports VHD, VHDX, and WIM files. When paired with Microsoft’s Virtual Machine Converter, you can start with pretty much any VM format.
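The core idea, discover installed components and emit matching Dockerfile instructions, can be sketched in a few lines of Python (a toy illustration only: the real tool is a PowerShell module, and the component names and emitted instructions below are made up for the example):

```python
# Toy sketch of the discover-then-generate idea behind Image2Docker.
# The component names and emitted instructions are illustrative,
# not the real module's output.
TEMPLATES = {
    "IIS": ["RUN powershell -Command Install-WindowsFeature Web-Server"],
    "WebApp": ["COPY app/ /inetpub/wwwroot/"],
}

def generate_dockerfile(discovered, base="microsoft/windowsservercore"):
    """Map each discovered component to its Dockerfile instructions,
    starting from a Windows base image."""
    lines = ["FROM " + base]
    for component in discovered:
        lines += TEMPLATES.get(component, ["# no template for " + component])
    return "\n".join(lines)

print(generate_dockerfile(["IIS", "WebApp"]))
```

The real module walks the mounted VHD/VHDX/WIM to find components like these, then writes out the generated Dockerfile for you to review and build.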
Image2Docker is community supported and designed to show you how easy it is to create Windows Containers from your existing servers. We strongly encourage you to fork it, play with it and contribute pull requests back to the community. Or just install it and use it to generate your own Dockerfiles.
Watch the on-demand webinar to learn more about how it was built, how to use it and how to contribute.

Here are some of the most popular questions from the sessions, with answers.
Is it possible to containerize an application based on an older versions of Windows Server?
Anything you can run on Windows 2016, you can run in Docker, with the exception of anything that requires a GUI. That includes .NET applications, SQL Server, IIS, and all the Windows Server components.
Are there any success stories of people building applications using Docker for Windows Server?
Yes. Tyco has been working with Docker and Windows Server 2016 since Tech Preview and their story is featured here.
Can I run the Windows Server image as a container on a Linux host or any other OS?
For licensing reasons, Windows Server images will currently run on Windows Server 2016 and Windows 10 only.
More Resources:

Check out the Docker and Microsoft partnership
Learn More about Docker for Windows Server
Getting started with Windows Containers Tutorial

The post Containerize Windows workloads with Image2Docker appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

How We Architected and Run Kubernetes on OpenStack at Scale at Yahoo! JAPAN

Editor’s note: today’s post is by the Infrastructure Engineering team at Yahoo! JAPAN, talking about how they run OpenStack on Kubernetes. This post has been translated and edited for context with permission — originally published on the Yahoo! JAPAN engineering blog.

Intro
This post outlines how Yahoo! JAPAN, with help from Google and Solinea, built an automation tool chain for “one-click” code deployment to Kubernetes running on OpenStack. We’ll also cover the basic security, networking, storage, and performance needs to ensure production readiness. Finally, we will discuss the ecosystem tools used to build the CI/CD pipeline, Kubernetes as a deployment platform on VMs/bare metal, and an overview of Kubernetes architecture to help you architect and deploy your own clusters.

Preface
Since our company started using OpenStack in 2012, our internal environment has changed quickly. Our initial goal of virtualizing hardware was achieved with OpenStack. However, due to the progress of cloud and container technology, we needed the capability to launch services on various platforms. This post will provide our example of taking applications running on OpenStack and porting them to Kubernetes.

Coding Lifecycle
The goal of this project is to create images for all required platforms from one application code base, and deploy those images onto each platform. For example, when code is changed in the code registry, bare metal images, Docker containers and VM images are created by CI (continuous integration) tools, pushed into our image registry, then deployed to each infrastructure platform.

We use the following products in our CI/CD pipeline:

- Code registry: GitHub Enterprise
- CI tools: Jenkins
- Image registry: Artifactory
- Bug tracking system: JIRA
- Deploying bare metal platform: OpenStack Ironic
- Deploying VM platform: OpenStack
- Deploying container platform: Kubernetes

Image Creation
Each image creation workflow is shown in the next diagram.

VM Image Creation:
1. Push code to GitHub
2. Hook to Jenkins master
3. Launch job at Jenkins slave
4. Check out Packer repository
5. Run service job
6. Execute Packer via build script
7. Packer starts a VM for OpenStack Glance
8. Configure the VM and install required applications
9. Create a snapshot and register it to Glance
10. Download the newly created image from Glance
11. Upload the image to Artifactory

Bare Metal Image Creation:
1. Push code to GitHub
2. Hook to Jenkins master
3. Launch job at Jenkins slave
4. Check out Packer repository
5. Run service job
6. Download the base bare metal image via build script
7. The build script executes diskimage-builder with Packer to create the bare metal image
8. Upload the newly created image to Glance
9. Upload the image to Artifactory

Container Image Creation:
1. Push code to GitHub
2. Hook to Jenkins master
3. Launch job at Jenkins slave
4. Check out Dockerfile repository
5. Run service job
6. Download the base docker image from Artifactory
7. If no docker image is found at Artifactory, download it from Docker Hub
8. Execute docker build and create the image
9. Upload the image to Artifactory

Platform Architecture
Let’s focus on the container workflow to walk through how we use Kubernetes as a deployment platform. The platform architecture is as below.

- Infrastructure Services: OpenStack
- Container Host: CentOS
- Container Cluster Manager: Kubernetes
- Container Networking: Project Calico
- Container Engine: Docker
- Container Registry: Artifactory
- Service Registry: etcd
- Source Code Management: GitHub Enterprise
- CI tool: Jenkins
- Infrastructure Provisioning: Terraform
- Logging: Fluentd, Elasticsearch, Kibana
- Metrics: Heapster, Influxdb, Grafana
- Service Monitoring: Prometheus

We use CentOS for the Container Host (OpenStack instances) and install Docker, Kubernetes, Calico, etcd and so on. Of course, it is possible to run various container applications on Kubernetes. In fact, we run OpenStack as one of those applications. That’s right: OpenStack on Kubernetes on OpenStack.
We currently have more than 30 OpenStack clusters, which quickly become hard to manage and operate. As such, we wanted to create a simple, base OpenStack cluster to provide the basic functionality needed for Kubernetes and make our OpenStack environment easier to manage.

Kubernetes Architecture
Let me explain the Kubernetes architecture in some more detail. The architecture diagram is below.

- OpenStack Keystone: Kubernetes authentication and authorization
- OpenStack Cinder: external volumes used from Pods (groupings of multiple containers)
- kube-apiserver: configures and validates objects like Pods or Services (definitions of access to the services in containers) through the REST API
- kube-scheduler: allocates Pods to each node
- kube-controller-manager: executes status management, manages replication controllers
- kubelet: runs on each node as an agent and manages Pods
- calico: enables inter-Pod connections using BGP
- kube-proxy: configures iptables NAT tables to set up IPs and load balancing (ClusterIP)
- etcd: distributed KVS to store Kubernetes and Calico information
- etcd-proxy: runs on each node and transfers client requests to the etcd cluster

Tenant Isolation
To enable multi-tenant usage like OpenStack, we utilize OpenStack Keystone for authentication and authorization.

Authentication
With a Kubernetes plugin, OpenStack Keystone can be used for authentication. By adding the authURL of Keystone at startup of the Kubernetes API server, we can use OpenStack OS_USERNAME and OS_PASSWORD for authentication.

Authorization
We currently use the ABAC (Attribute-Based Access Control) mode of Kubernetes authorization. We worked with a consulting company, Solinea, who helped create a utility to convert OpenStack Keystone user and tenant information to a Kubernetes JSON policy file that maps Kubernetes ABAC user and namespace information to OpenStack tenants. We then specify that policy file when launching the Kubernetes API server. This utility also creates namespaces from tenant information.
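The shape of that conversion can be sketched in a few lines of Python (the Keystone assignments and the helper itself are illustrative, not Solinea’s actual utility; the line format follows Kubernetes’ documented ABAC policy file, one JSON object per line):

```python
import json

def keystone_to_abac(assignments):
    """Map (user, tenant) pairs from Keystone to Kubernetes ABAC policy
    lines, scoping each user to the namespace named after their tenant.
    Illustrative sketch only."""
    lines = []
    for user, tenant in assignments:
        lines.append(json.dumps({
            "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
            "kind": "Policy",
            "spec": {"user": user, "namespace": tenant,
                     "resource": "*", "apiGroup": "*"},
        }))
    return "\n".join(lines)

# Hypothetical Keystone role assignments:
policy = keystone_to_abac([("alice", "team-a"), ("bob", "team-b")])
print(policy)
```

The resulting file is passed to the API server via its authorization-policy-file option, and each tenant also gets a namespace of the same name.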
These configurations enable Kubernetes to authenticate with OpenStack Keystone and operate in authorized namespaces.

Volumes and Data Persistence
Kubernetes provides a “Persistent Volumes” subsystem which works as persistent storage for Pods. Persistent Volumes can be backed by cloud-provider storage, so it is possible to utilize OpenStack cinder-volume by using OpenStack as the cloud provider.

Networking
Flannel and various other options exist as networking models for Kubernetes; we used Project Calico for this project. Yahoo! JAPAN recommends building data centers with pure L3 networking, like redistributed ARP validation or IP CLOS networking, and Project Calico matches this direction. With an overlay model like Flannel, we could not access Pod IPs from outside the Kubernetes clusters, but Project Calico makes this possible. We also use Project Calico for the load balancing we describe later.

In Project Calico, production IPs are broadcast by BGP via BIRD containers (OSS routing software) launched on each node of the Kubernetes cluster. By default, they are broadcast within the cluster only. By setting up peering routers outside the clusters, it becomes possible to access a Pod from outside the clusters.

External Service Load Balancing
There are multiple choices of external service load balancers (access to services from outside of clusters) for Kubernetes, such as NodePort, LoadBalancer and Ingress. We could not find a solution which exactly matched our requirements. However, we found one that almost did: broadcasting the ClusterIP used for internal service load balancing (access to services from inside of clusters) with Project Calico BGP, which enables external load balancing at Layer 4 from outside the clusters.

Service Discovery
Service discovery is possible in Kubernetes by using the SkyDNS addon. This is provided as a cluster-internal service and is accessible within the cluster like any ClusterIP. By broadcasting the ClusterIP by BGP, name resolution works from outside the clusters too.
By combining the image creation workflow and Kubernetes, we built the following tool chain, which makes it easy to go from code push to deployment.

Summary
In summary, by combining image creation workflows and Kubernetes, Yahoo! JAPAN, with help from Google and Solinea, successfully built an automated tool chain which makes it easy to go from code push to deployment, while taking care of multi-tenancy, authn/authz, storage, networking, service discovery and the other factors necessary for production deployment. We hope you found the discussion of ecosystem tools used to build the CI/CD pipeline, Kubernetes as a deployment platform on VMs/bare metal, and the overview of Kubernetes architecture helpful in architecting and deploying your own clusters. Thank you to all of the people who helped with this project. –Norifumi Matsuya, Hirotaka Ichikawa, Masaharu Miyamoto and Yuta Kinoshita.

This post has been translated and edited for context with permission — originally published on the Yahoo! JAPAN engineering blog, where this was one in a series of posts focused on Kubernetes.
Quelle: kubernetes

Even more Docker Labs!

Since we launched Docker Labs back in May, we’ve had a lot of interest. So we keep adding more and improving the labs that we have. We now have 22 hands-on labs for you to choose from, ranging from beginner tutorials to much more advanced ones. Here’s a peek at what we have:

To accompany the launch of Windows containers in Microsoft Windows Server 2016, we launched a Windows Container beginner tutorial to walk you through setting up your environment, running basic containers and creating a basic Docker Compose multi-container application using Windows containers.
We added 6 security tutorials to take advantage of some of Docker’s strong security features.
A Docker community member liked our Java debugging tutorials so much, he translated our labs into Spanish.
We added a new Node.js tutorial to show how easily you can debug Node.js applications live in a container.

So check out Docker Labs to learn more about using Docker. And as always, we really encourage contributions. So if you have a lab you want to get out there, or find a way to improve what we have, please contribute today.
The post Even more Docker Labs! appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Webinar Recap: Docker for Windows Server 2016

Last week, we held our first webinar on “Docker for Windows Server 2016” to a record number of attendees, showcasing the most exciting new Windows Server 2016 feature – containers powered by the Commercially Supported Docker Engine.
Docker CS Engine and containers are now available natively on Windows and supported by Microsoft, with Docker’s Commercially Supported (CS) Engine included in Windows Server 2016. Now developers and IT pros can begin the same transformation for Windows-based apps and infrastructure and reap the benefits they’ve seen with Docker for Linux: enhanced security, agility, and improved portability and freedom to run applications on bare metal, virtual or cloud environments.
Watch the on-demand webinar to learn more about the technical innovations that went into making Docker containers run natively on Windows and how to get started.
Webinar: Docker for Windows Server 2016

Here are just a few of the most frequently asked questions from the session.  We’re still sorting through the rest and will post them in a follow up blog.
Q: How do I get started?
A: Docker and Microsoft have worked to make getting started simple. We have some great resources to get you started whether you’re a developer or an IT pro:

Complete the Docker for Windows Containers Lab on GitHub
Read the blog: Build And Run Your First Docker Windows Server Container
View the images in Docker Hub that Microsoft has made available to the community to start building Windows containers: https://hub.docker.com/r/microsoft/
Get started converting existing Windows applications to Docker containers:

Read the blog: Image2Docker: A New Tool For Prototyping Windows VM Conversions
Register for the webinar on October 25th at 10AM PST – Containerize Windows Workloads with the Image2Docker Tool

Q: How is Docker for Windows Server 2016 licensed?
A: Docker CS Engine comes included at no additional cost with Windows Server 2016 Datacenter, Standard, and Essentials editions with support provided by Microsoft and backed by Docker. Support is provided in accordance with the selected Windows Server 2016 support contract with available SLAs and hotfixes and full support for Docker APIs.
Q: Is there a specific Windows release that supports Docker for development?
A: You can get started using Windows 10 Anniversary Edition by installing Docker for Windows (direct link for the public beta channel) or by downloading and installing Windows Server 2016. You can also get started using Azure.
To learn more about how to get started, read our blog: Build And Run Your First Docker Windows Server Container or get started with the Docker for Windows Containers Lab on GitHub.
Q: Windows has Server Core and Nano Server base images available. What should I use?
A: Windows Server Core is designed for backwards compatibility. It is a larger base image but has the things you need so your existing applications are able to run in Docker. Nano Server is slimmer and is best suited for new applications that don’t have legacy dependencies.
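To make the choice concrete, here is a hedged Dockerfile sketch — the image tags and the COPY/CMD lines are assumptions based on the Windows Server 2016-era Docker Hub repositories, not an official example:

```dockerfile
# Existing app with legacy dependencies: start from the larger,
# backwards-compatible Server Core base image.
FROM microsoft/windowsservercore

# For a new, dependency-light app you would instead use the slimmer image:
# FROM microsoft/nanoserver

# Copy and launch your application as usual (hypothetical paths).
COPY app/ C:/app/
CMD ["C:\\app\\run.exe"]
```

Switching between the two bases is usually just a change to the FROM line, provided the application has no dependencies missing from Nano Server.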
For more resources:

Learn more: www.docker.com/microsoft
Read the blog: Top 5 Docker Questions From Microsoft Ignite
Learn more about the Docker and Microsoft partnership
Read the blog:  Introducing Docker For Windows Server 2016

Check out the Docker for Windows Server 2016 webinar video and Q&A recap with @friism.

The post Webinar Recap: Docker for Windows Server 2016 appeared first on Docker Blog.

Docker Community Spotlight: Nimret Sandhu

Nimret Sandhu has shown himself to be a key player in the success of the Seattle Meetup group, and now, with almost 2000 eager members, organizing engaging events has become quite the responsibility! On top of his busy work schedule at Dev9, his extracurricular activities and a family life, Nimret took the time to tell us his Docker story and his favorite thing about the Docker community, and imparted some words of wisdom for anyone just starting a meetup group.
 
Tell us about your first experience with Docker. What drew you to joining as an organizer for the Docker Seattle Meetup group?
My first experience with Docker was when our company, Dev9, looked into partnering with this up-and-coming startup named Docker a couple of years ago. Since I’m a long time *nix user who’s been exposed to Solaris Zones, BSD jails, etc. in the past, I looked into it, and immediately realized the potential. Once I downloaded and played around with it, I was so blown away by the technology that I started evangelizing it to our clients. I gave a talk on it and volunteered to help out with the Docker Seattle Meetup. I had already been running the Seattle Java Users Group for a few years, and it was quite natural for me to volunteer to join the Docker Seattle Meetup group since I am quite passionate about technology.
Now that you use Docker, how do you use it and what do you use it for?
In my role as Director of Technology for Dev9, I am expected to delve into technical nuances when necessary while also managing multiple teams for clients in the Seattle area. Accelerated, rapid development is critical. Docker allows me to experiment with various enterprise-related technologies, primarily in the Java and JavaScript space. Projects are typically software development and/or Continuous Delivery leveraging tools such as the JVM, Jenkins, Spring Boot etc.
Docker is extremely easy to work with and provides a convenient way to package a solution together. I’ve found it to be incredibly helpful in accelerating my speed of development.
What are some aspects you love about organizing Docker Meetup events? 
I love the energy and diversity within the Docker community. People really have an interest in this tech and the domain. When people take the time to show up, it makes a big difference. We always have a great turnout and people are very engaged.
What I love about organizing the events is that we have such a wide variety of presentations: a mix from vendors, companies who use the technology, and people who are playing around with it for their own needs. It’s a great forum to exchange ideas, network and even find the next opportunity.
What advice would you give to a new organizer that just started their Docker Meetup group?

Start small, but start. The journey of a thousand miles begins with a single step.
Get the word out. Put info on community calendars (e.g. WTIA, GeekWire) and other places people read. You can even mention the event at another meetup. Look into mailing lists for start-ups or small organizations.
Coordinate with people who run other meetups to leverage synergies.
Ask for volunteers and companies to help.
Seek sponsorships; many local businesses and companies are interested in hosting, providing food or being involved in other ways.
Attend other meetups to gain tips and ideas from the organizers, and keep networking with them on an ongoing basis.

What do you do when you are not organizing meetup events?
As the Director of Technology for Dev9, I lead teams of software developers and am responsible for the projects we have in the Seattle area. Most of the projects are server-side, client-side and mobile. I help assemble teams, assist business development efforts, conduct up-front assessments for clients, hire and retain staff, and manage projects to ensure customer satisfaction and best practices in modern software development techniques. I am also the chair of the Seattle Java Users Group (SeaJUG), and have been for the last decade. I am on multiple Advisory Boards with the University of Washington Professional and Continuing Education program and help set direction and content in technology, ensuring that the programs stay up-to-date. Most importantly, I’m a father to my two lovely daughters and enjoy family time in general.
Take a look at my Geek of the Week feature for more info!
Motto or personal mantra?
Work hard, play hard.

Huge shout out to Nimret Sandhu and all Docker meetup organizers for their contributions!

The post Docker Community Spotlight: Nimret Sandhu appeared first on Docker Blog.

Building Globally Distributed Services using Kubernetes Cluster Federation

Editor’s note: Today’s post is by Allan Naim, Product Manager, and Quinton Hoole, Staff Engineer at Google, showing how to deploy a multi-homed service behind a global load balancer and have requests sent to the closest cluster.

In Kubernetes 1.3, we announced Kubernetes Cluster Federation and introduced the concept of Cross Cluster Service Discovery, enabling developers to deploy a service that was sharded across a federation of clusters spanning different zones, regions or cloud providers. This enables developers to achieve higher availability for their applications, without sacrificing quality of service, as detailed in our previous blog post.

In the latest release, Kubernetes 1.4, we’ve extended Cluster Federation to support Replica Sets, Secrets, Namespaces and Ingress objects. This means that you no longer need to deploy and manage these objects individually in each of your federated clusters. Just create them once in the federation, and have its built-in controllers automatically handle that for you.

Federated Replica Sets leverage the same configuration as non-federated Kubernetes Replica Sets and automatically distribute Pods across one or more federated clusters. By default, replicas are evenly distributed across all clusters, but for cases where that is not the desired behavior, we’ve introduced Replica Set preferences (defined via annotations), which allow replicas to be distributed across only some clusters, or in non-equal proportions.

Starting with Google Cloud Platform (GCP), we’ve introduced Federated Ingress as a Kubernetes 1.4 alpha feature, which enables external clients to point to a single IP address and have requests sent to the closest cluster with usable capacity in any region or zone of the Federation.
Federated Secrets automatically create and manage secrets across all clusters in a Federation, automatically ensuring that these are kept globally consistent and up-to-date, even if some clusters are offline when the original updates are applied.

Federated Namespaces are similar to the traditional Kubernetes Namespaces, providing the same functionality. Creating them in the Federation control plane ensures that they are synchronized across all the clusters in the Federation.

Federated Events are similar to the traditional Kubernetes Events, providing the same functionality. Federation Events are stored only in the Federation control plane and are not passed on to the underlying Kubernetes clusters.

Let’s walk through how all this stuff works. We’re going to provision 3 clusters per region, spanning 3 continents (Europe, North America and Asia). The next step is to federate these clusters. Kelsey Hightower developed a tutorial for setting up a Kubernetes Cluster Federation. Follow the tutorial to configure a Cluster Federation with clusters in 3 zones in each of the 3 GCP regions: us-central1, europe-west1 and asia-east1. For the purpose of this blog post, we’ll provision the Federation Control Plane in the us-central1-b zone. Note that more highly available, multi-cluster deployments are also available, but not used here in the interests of simplicity.

The rest of the blog post assumes that you have a running Kubernetes Cluster Federation provisioned. Let’s verify that we have 9 clusters in 3 regions running.

$ kubectl --context=federation-cluster get clusters
NAME                 STATUS    AGE
gce-asia-east1-a     Ready     17m
gce-asia-east1-b     Ready     15m
gce-asia-east1-c     Ready     10m
gce-europe-west1-b   Ready     7m
gce-europe-west1-c   Ready     7m
gce-europe-west1-d   Ready     4m
gce-us-central1-a    Ready     1m
gce-us-central1-b    Ready     53s
gce-us-central1-c    Ready     39s

You can download the source used in this blog post here.
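The services/nginx.yaml manifest itself isn’t reproduced in the post; inferring from the describe output shown further down (LoadBalancer type, an http port on 80, an app=nginx selector), a minimal sketch would be:

```yaml
# Sketch of services/nginx.yaml, reconstructed from the describe output;
# treat the field values as assumptions rather than the exact source file.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
```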
The source consists of the following files:

- configmaps/zonefetch.yaml – retrieves the zone from the instance metadata server and concatenates it into the volume mount path
- replicasets/nginx-rs.yaml – deploys a Pod consisting of an nginx and a busybox container
- ingress/ingress.yaml – creates a load balancer with a global VIP that distributes requests to the closest nginx backend
- services/nginx.yaml – exposes the nginx backend as an external service

In our example, we’ll be deploying the service and ingress object using the federated control plane. The ConfigMap object isn’t currently supported by Federation, so we’ll be deploying it manually in each of the underlying Federation clusters. Our cluster deployment will look as follows:

We’re going to deploy a Service that is sharded across our 9 clusters. The backend deployment will consist of a Pod with 2 containers:

- a busybox container that fetches the zone and outputs HTML, with the zone embedded in it, into a Pod volume mount path
- an nginx container that reads from that Pod volume mount path and serves an HTML page containing the zone it’s running in

Let’s start by creating a federated service object in the federation-cluster context.

$ kubectl --context=federation-cluster create -f services/nginx.yaml

It will take a few minutes for the service to propagate across the 9 clusters.

$ kubectl --context=federation-cluster describe services nginx
Name:                   nginx
Namespace:              default
Labels:                 app=nginx
Selector:               app=nginx
Type:                   LoadBalancer
IP:
LoadBalancer Ingress:   108.59.xx.xxx, 104.199.xxx.xxx, …
Port:                   http    80/TCP
NodePort:               http    30061/TCP
Endpoints:              <none>
Session Affinity:       None

Let’s now create a Federated Ingress. Federated Ingresses are created in much the same way as traditional Kubernetes Ingresses: by making an API call which specifies the desired properties of your logical ingress point.
In the case of Federated Ingress, this API call is directed to the Federation API endpoint, rather than a Kubernetes cluster API endpoint. The API for Federated Ingress is 100% compatible with the API for traditional Kubernetes Ingresses.

$ cat ingress/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  backend:
    serviceName: nginx
    servicePort: 80

$ kubectl --context=federation-cluster create -f ingress/ingress.yaml
ingress "nginx" created

Once created, the Federated Ingress controller automatically:

- creates matching Kubernetes Ingress objects in every cluster underlying your Cluster Federation
- ensures that all of these in-cluster ingress objects share the same logical global L7 (i.e. HTTP(S)) load balancer and IP address
- monitors the health and capacity of the service “shards” (i.e. your Pods) behind this ingress in each cluster
- ensures that all client connections are routed to an appropriate healthy backend service endpoint at all times, even in the event of Pod, cluster, availability zone or regional outages

We can verify that the ingress objects match in the underlying clusters.
Notice that the ingress IP addresses for all 9 clusters are the same.

$ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do kubectl --context=$c get ingress; done
NAME      HOSTS     ADDRESS   PORTS     AGE
nginx     *                   80        1h
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        40m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        1h
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        26m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        1h
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        25m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        38m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        3m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        57m
NAME      HOSTS     ADDRESS          PORTS     AGE
nginx     *         130.211.40.xxx   80        56m

Note that in the case of Google Cloud Platform, the logical L7 load balancer is not a single physical device (which would present both a single point of failure and a single global network routing choke point), but rather a truly global, highly available load balancing managed service, globally reachable via a single, static IP address.

Clients inside your federated Kubernetes clusters (i.e. Pods) will be automatically routed to the cluster-local shard of the Federated Service backing the Ingress in their cluster if it exists and is healthy, or to the closest healthy shard in a different cluster if it does not. Note that this involves a network trip to the HTTP(S) load balancer, which resides outside your local Kubernetes cluster but inside the same GCP region.

The next step is to schedule the service backends.
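The configmaps/zonefetch.yaml manifest isn’t listed in the post; a plausible sketch — the script body is an assumption about how the zone is fetched and the HTML written, based on the behavior described above — is:

```yaml
# Hypothetical sketch of configmaps/zonefetch.yaml; the real script may differ.
apiVersion: v1
kind: ConfigMap
metadata:
  name: zone-fetch
data:
  zonefetch.sh: |
    #!/bin/sh
    # Ask the GCE metadata server for this instance's zone and keep the
    # final path component (e.g. "asia-east1-b").
    zone=$(wget -q -O - --header "Metadata-Flavor: Google" \
      "http://metadata.google.internal/computeMetadata/v1/instance/zone" | awk -F/ '{print $NF}')
    # Write the page that the nginx container serves from the shared volume.
    echo "<h1>Welcome to the global site! You are being served from $zone</h1>" \
      > /usr/share/nginx/html/index.html
    # Keep the container running so the Pod stays healthy.
    while true; do sleep 3600; done
```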
Let’s first create the ConfigMap in each cluster in the Federation. We do this by submitting the ConfigMap to each cluster in the Federation.

$ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do kubectl --context=$c create -f configmaps/zonefetch.yaml; done

Let’s have a quick peek at our Replica Set:

$ cat replicasets/nginx-rs.yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
    type: demo
spec:
  replicas: 9
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: frontend
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html-dir
          mountPath: /usr/share/nginx/html
      - image: busybox
        name: zone-fetcher
        command:
        - "/bin/sh"
        - "-c"
        - "/zonefetch/zonefetch.sh"
        volumeMounts:
        - name: zone-fetch
          mountPath: /zonefetch
        - name: html-dir
          mountPath: /usr/share/nginx/html
      volumes:
      - name: zone-fetch
        configMap:
          defaultMode: 0777
          name: zone-fetch
      - name: html-dir
        emptyDir:
          medium: ""

The Replica Set consists of 9 replicas, spread evenly across the 9 clusters within the Cluster Federation. Annotations can also be used to control which clusters Pods are scheduled to.
This is accomplished by adding annotations to the Replica Set spec, as follows:

apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx-us
  annotations:
    federation.kubernetes.io/replica-set-preferences: |
        {
            "rebalance": true,
            "clusters": {
                "gce-us-central1-a": {
                    "minReplicas": 2,
                    "maxReplicas": 4,
                    "weight": 1
                },
                "gce-us-central1-b": {
                    "minReplicas": 2,
                    "maxReplicas": 4,
                    "weight": 1
                }
            }
        }

For the purpose of our demo, we’ll keep things simple and spread our Pods evenly across the Cluster Federation.

Let’s create the federated Replica Set:

$ kubectl --context=federation-cluster create -f replicasets/nginx-rs.yaml

Verify the Replica Sets and Pods were created in each cluster:

$ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do kubectl --context=$c get rs; done
NAME      DESIRED   CURRENT   READY     AGE
nginx     1         1         1         42s
NAME      DESIRED   CURRENT   READY     AGE
nginx     1         1         1         14m
NAME      DESIRED   CURRENT   READY     AGE
nginx     1         1         1         45s
NAME      DESIRED   CURRENT   READY     AGE
nginx     1         1         1         46s
NAME      DESIRED   CURRENT   READY     AGE
nginx     1         1         1         47s
NAME      DESIRED   CURRENT   READY     AGE
nginx     1         1         1         48s
NAME      DESIRED   CURRENT   READY     AGE
nginx     1         1         1         49s
NAME      DESIRED   CURRENT   READY     AGE
nginx     1         1         1         49s
NAME      DESIRED   CURRENT   READY     AGE
nginx     1         1         1         49s

$ for c in $(kubectl config view -o jsonpath='{.contexts[*].name}'); do kubectl --context=$c get po; done
NAME          READY     STATUS    RESTARTS   AGE
nginx-ph8zx   2/2       Running   0          25s
NAME          READY     STATUS    RESTARTS   AGE
nginx-sbi5b   2/2       Running   0          27s
NAME          READY     STATUS    RESTARTS   AGE
nginx-pf2dr   2/2       Running   0          28s
NAME          READY     STATUS    RESTARTS   AGE
nginx-imymt   2/2       Running   0          30s
NAME          READY     STATUS    RESTARTS   AGE
nginx-9cd5m   2/2       Running   0          31s
NAME          READY     STATUS    RESTARTS   AGE
nginx-vxlx4   2/2       Running   0          33s
NAME          READY     STATUS    RESTARTS   AGE
nginx-itagl   2/2       Running   0          33s
NAME          READY     STATUS    RESTARTS   AGE
nginx-u7uyn   2/2       Running   0          33s
NAME          READY     STATUS    RESTARTS   AGE
nginx-i0jh6   2/2       Running   0          34s

Below is an illustration of how the nginx service and associated ingress are deployed. To summarize, we have a global VIP (130.211.23.176) exposed using a global L7 load balancer that forwards requests to the closest cluster with available capacity.

To test this out, we’re going to spin up 2 Google Compute Engine (GCE) instances, one in us-west1-b and the other in asia-east1-a. All client requests are automatically routed, via the shortest network path, to a healthy Pod in the closest cluster to the origin of the request. So, for example, HTTP(S) requests from Asia will be routed directly to the closest cluster in Asia that has available capacity. If there are no such clusters in Asia, the request will be routed to the next closest cluster (in this case the U.S.). This works irrespective of whether the requests originate from a GCE instance or anywhere else on the internet; we only use GCE instances for simplicity in the demo.

We can SSH directly into the VMs using the Cloud Console or by issuing a gcloud SSH command.
$ gcloud compute ssh test-instance-asia --zone asia-east1-a
----
user@test-instance-asia:~$ curl 130.211.40.186
<!DOCTYPE html>
<html>
<head><title>Welcome to the global site!</title></head>
<body>
<h1>Welcome to the global site! You are being served from asia-east1-b</h1>
<p>Congratulations!</p>
user@test-instance-asia:~$ exit
----
$ gcloud compute ssh test-instance-us --zone us-west1-b
----
user@test-instance-us:~$ curl 130.211.40.186
<!DOCTYPE html>
<html>
<head><title>Welcome to the global site!</title></head>
<body>
<h1>Welcome to the global site! You are being served from us-central1-b</h1>
<p>Congratulations!</p>
----

Federations of Kubernetes Clusters can include clusters running in different cloud providers (e.g. GCP, AWS) and on-premises (e.g. on OpenStack). However, in Kubernetes 1.4, Federated Ingress is only supported across Google Cloud Platform clusters. In future versions we intend to support hybrid cloud Ingress-based deployments.

To summarize, we walked through leveraging the Kubernetes 1.4 Federated Ingress alpha feature to deploy a multi-homed service behind a global load balancer. External clients point to a single IP address and are sent to the closest cluster with usable capacity in any region or zone of the Federation, providing higher levels of availability without sacrificing latency or ease of operation.

We’d love to hear feedback on Kubernetes Cross Cluster Services. To join the community:

- Post issues or feature requests on GitHub
- Join us in the federation channel on Slack
- Participate in the Cluster Federation SIG
- Download Kubernetes
- Follow Kubernetes on Twitter @Kubernetesio for the latest updates
Source: kubernetes