OpenStack Developer Mailing List Digest January 7-13

SuccessBot Says

dims [1]: Rally running against Glance (both Rally and Glance using py3.5).
AJaeger [2]: docs.openstack.org is served from the new Infra file server that is AFS based.
jd [3]: Gnocchi 3.1 will be shipped with an empty /etc and will work without any config file by default.
cdent [4]: edleafe narrowed down an important bug in gabbi.
Tell us yours via OpenStack IRC channels with the message “#success <message>”
All

Return of the Architecture Working Group

Meeting times alternate: even weeks Thursday at 20:00 UTC, odd weeks Thursday at 01:00 UTC.
Currently two proposals:

The “Base Services” proposal [5] recognizes that OpenStack components leverage features from external services which they can assume will be present. Two kinds:

Local (like a hypervisor on a compute node)
Global (like a database)

The “Nova Compute API” proposal [6] would break nova-compute out of Nova itself.

Full thread

Restarting Service-types-authority / service catalog work

In anticipation of having a productive time in Atlanta for the PTG, various patches have been refreshed [7].
Two base IaaS services aren’t in the list yet because of issues:

Neutron / network – there is a discrepancy between the service-type “network” and the common use of “networking” in the API reference URL. For the other services in the list, the service-type and the URL name for the API reference are the same.
Cinder / volume – moving on from using volumev2 and volumev3 in devstack.

Full thread

Feedback From Driver Maintainers About Future of Driver Projects

Major observations

Yes, drivers are an important part of OpenStack.
Discoverability of drivers needs to be fixed immediately.
It’s important to have visibility, in a central place, into the status of each driver.
Both the driver developer and a high-level person at the company should feel they’re part of something.
Give drivers access to publish to docs.openstack.org.
The definition of what constitutes a project was never meant for drivers. Drivers are part of the project. Driver developers contribute to OpenStack by creating drivers.

Discoverability:

Consensus: it is currently all over the place [8][9][10].
There should be CI results available.
Discoverability can be fixed independently of governance changes.

Driver projects official or not?

Out-of-tree vendors have a desire to become “official” OpenStack projects.
Opinion: let driver projects become official without CI requirements.
Opinion: Do not allow driver projects to become official; that doesn’t mean they shouldn’t be easily discoverable.
Opinion: We don’t need to open the flood gates of allowing vendors to be teams in the OpenStack governance to make vendors’ developers happy.
Fact: This implies being placed under TC oversight. It is a significant move that could have unintended side-effects, it is hard to reverse (kicking out teams we accepted is worse than not including them in the first place), and our community is divided on the way forward. So we need to give that question our full attention and not rush the answer.
Opinion: Consider DriverLog [11] an official OpenStack project to be listed under governance with a PTL, weekly meetings, and all that is required to allow the team to be effective in its mission of keeping the marketplace a trustworthy resource for learning about the OpenStack driver ecosystem.

Driver Developers:

Opinion: A driver developer who ONLY contributes to vendor-specific driver code should not have the same influence as other OpenStack developers (voting for PTL, TC, and ATC status).
Opinion: PTLs should leverage the extra-atcs option in the governance repo.

In-tree VS out-of-tree

Cinder has in-tree drivers, but moves drivers out-of-tree when their CI is not maintained or when minimum feature requirements are not met. Such drivers are marked as ‘not supported’ and have a single release to get things working before being moved out-of-tree.
Ironic has a single out-of-tree repo [12], but also has in-tree drivers [13].
Neutron has all drivers out-of-tree, with project names like: ‘networking-cisco’.
Many opinions on the “stick-based” approach the cinder team took.
Opinion: The in-tree vs. out-of-tree argument is developer focused. Out-of-tree drivers have obvious benefits (develop quickly, maintain their own team, no need for a core to review the patch), but a vendor that is looking to make sure a driver is supported will not be searching git repos (which goes back to discoverability).
Opinion: It may be worth handling projects that keep supported drivers in-tree differently than we handle projects that have everything out-of-tree.

Full thread

POST /api-wg/news

Guidelines currently under review:

Add guidelines on usage of state vs. status [14]
Add guidelines for boolean names [15]
Clarify the status values in versions [16]
Define pagination guidelines [17]
Add API capabilities discovery guideline [18]
Add guideline for invalid query parameters [19]

Full thread

New Deadline for PTG Travel Support Program

Help contributors who are not otherwise funded to join their project team gathering [20].
Originally the application acceptance was set to close January 15, but it has now been extended to end of day Tuesday, January 17.
Apply now if you need it! [21]
Submissions will be evaluated next week and grantees will be notified by Friday, January 20th.
Register for the event [22] if you haven’t yet. Prices will increase on January 24 and February 14.
If you haven’t booked your hotel yet, do so ASAP at the event hotel itself using the PTG room block. This helps us keep costs under control and maximizes the time shared with other event participants.

The room block closes January 27
Book now [23]

Full thread

Release Countdown For Week R-5

Focus:

Feature work and major refactoring should be wrapping up as we approach the third milestone.

Release Tasks:

stable/ocata branches will be created and configured with a small subset of the core review team. Release liaisons should ensure that these groups exist and the membership is correct.

General Notes:

We will start the soft string freeze during R-4 (Jan 23-27) [24]
Subscribe to the release calendar with your favorite calendaring software [25]

Important Dates:

Final release for non-client libraries: January 19
Ocata 3 milestone with feature and requirements freeze: January 26
Ocata release schedule [26]

Full thread

 
[1] - http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-01-09.log.html
[2] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-01-10.log.html
[3] - http://eavesdrop.openstack.org/irclogs/%23openstack-telemetry/%23openstack-telemetry.2017-01-11.log.html
[4] - http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-01-12.log.html
[5] - http://git.openstack.org/cgit/openstack/arch-wg/tree/proposals/base-services.rst
[6] - https://review.openstack.org/#/c/411527/1
[7] - https://review.openstack.org/#/c/286089/
[8] - http://docs.openstack.org/developer/cinder/drivers.html
[9] - http://docs.openstack.org/developer/nova/support-matrix.html
[10] - http://stackalytics.openstack.org/report/driverlog
[11] - http://git.openstack.org/cgit/openstack/driverlog
[12] - https://git.openstack.org/cgit/openstack/ironic-staging-drivers
[13] - http://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers
[14] - https://review.openstack.org/#/c/411528/
[15] - https://review.openstack.org/#/c/411529/
[16] - https://review.openstack.org/#/c/411849/
[17] - https://review.openstack.org/#/c/390973/
[18] - https://review.openstack.org/#/c/386555/
[19] - https://review.openstack.org/417441
[20] - http://www.openstack.org/ptg#
[21] - https://openstackfoundation.formstack.com/forms/travelsupportptg_atlanta
[22] - https://pikeptg.eventbrite.com/
[23] - https://www.starwoodmeeting.com/events/start.action?id=1609140999&key=381BF4AA
[24] - https://releases.openstack.org/ocata/schedule.html#-soft-sf
[25] - https://releases.openstack.org/schedule.ics
[26] - http://releases.openstack.org/ocata/schedule.html
Source: openstack.org

InfraKit Under the Hood: High Availability

Back in October, Docker released InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure. This is the first in a two-part series that dives more deeply into the internals of InfraKit.
Introduction
At Docker, our mission to build tools of mass innovation constantly challenges us to look at ways to improve how developers and operators work. Docker Engine with integrated orchestration via Swarm mode has greatly simplified and improved the efficiency of application deployment and the management of microservices. Going a level deeper, we asked ourselves if we could improve the lives of operators by making tools to simplify and automate orchestration of infrastructure resources. This led us to open source InfraKit, a set of building blocks for creating self-healing and self-managing systems.

There are articles and tutorials (such as this, and this) to help you get acquainted with InfraKit. InfraKit is made up of a set of components which actively manage infrastructure resources based on a declarative specification. These active agents continuously monitor and reconcile differences between your specification and actual infrastructure state. So far, we have implemented the functionality of scaling groups to support the creation of a compute cluster or application cluster that can self-heal and dynamically scale in size. To make this functionality available for different infrastructure platforms (e.g. AWS or bare-metal) and extensible for different applications (e.g. Zookeeper or Docker orchestration), we support customization and adaptation through the instance and flavor plugins. The group controller exposes operations for scaling in and out and for rolling updates, and communicates with the plugins using JSON-RPC 2.0 over HTTP. While the project provides packages implemented in Go for building platform-specific plugins (like this one for AWS), it is possible to use other languages and tooling to create interesting and compatible plugins.
High Availability
Because InfraKit is used to ensure the availability and scaling of a cluster, it needs to be highly available and perform its duties without interruption.  To support this requirement, we consider the following:

Redundancy & failover – for active management without interruption.
Infrastructure state – for an accurate view of the cluster and its resources.
User specification – keeping it available even in the case of failure.

Redundancy & Failover
Running multiple sets of the InfraKit daemons on separate physical nodes is an obvious approach to achieving redundancy. However, while multiple replicas are running, only one of the replica sets can be active at a time. Having at most one leader (or master) at any time ensures that multiple controllers are not independently making decisions and conflicting with one another while attempting to correct infrastructure state. However, with only one active instance at any given time, the role of the active leader must transition smoothly and quickly to another replica in the event of failure. When a node running as the leader crashes, another set of InfraKit daemons will assume leadership and attempt to correct the infrastructure state. This corrective measure will then restore the lost instance in the cluster, bringing the cluster back to the desired state before the outage.
There are many options in implementing this leadership election mechanism. Popular coordinators for this include Zookeeper and Etcd which are consensus-based systems in which multiple nodes form a quorum. Similar to these is the Docker engine (1.12+) running in Swarm Mode, which is based on SwarmKit, a native clustering technology based on the same Raft consensus algorithm as Etcd. In keeping with the goal of creating a toolkit for building systems, we made these design choices:

InfraKit only needs to observe leadership in a cluster: when the node becomes the leader, the InfraKit daemons on that node become active. When leadership is lost, the daemons on the old leader are deactivated, while control is transferred over to the InfraKit daemons running on the new leader.
Create a simple API for sending leadership information to InfraKit. This makes it possible to connect InfraKit to a variety of inputs from Docker Engines in Swarm Mode (post 1.12) to polling a file in a shared file system (e.g. AWS EFS).
InfraKit does not itself implement leader election. This allows InfraKit to be readily integrated into systems that already have their own manager quorum and leader election, such as Docker Swarm. Of course, it’s possible to add leader election using a coordinator such as Etcd and feed that to InfraKit via the leadership observation API.

With this design, coupled with a coordinator, we can run InfraKit daemons in replicas on multiple nodes in a cluster while ensuring only one leader is active at any given time. When leadership changes, InfraKit daemons running on the new leader must be able to assess infrastructure state and determine the delta from user specification.
Infrastructure State
Rather than relying on an internal, central datastore to manage the state of the infrastructure, such as an inventory of all vm instances, InfraKit aggregates and computes the infrastructure state based on what it can observe from querying the infrastructure provider. This means that:

The instance plugin needs to transform the query from the group controller to appropriate calls to the provider’s API.
The infrastructure provider should support labeling or tagging of provisioned resources such as vm instances.
In cases where the provider does not support labeling and querying resources by labels, the instance plugin has the responsibility to maintain that state. Approaches for this vary with plugin implementation but they often involve using services such as S3 for persistence.

Not having to store and manage infrastructure state greatly simplifies the system. Since the infrastructure state is always aggregated and computed on-demand, it is always up to date. However, other factors such as availability and performance of the platform API itself can impact observability. For example, high latencies and even API throttling must be handled carefully in determining the cluster state and consequently deriving a plan to push toward convergence with the user’s specifications.
User Specification
InfraKit daemons continuously observe the infrastructure state and compare it with the user’s specification. The user’s specification for the cluster is expressed in JSON format and is used to determine the necessary steps to drive towards convergence. InfraKit requires this information to be highly available so that in the event of failover, the user specification can be accessed by the new leader.
There are options for implementing replication of the user specification. These range from using file systems backed by persistent object stores such as S3, to EFS, to using a distributed key-value store such as Zookeeper or Etcd. Like other parts of the toolkit, we opted to define an interface with different implementations of this configuration store. In the repo, there are stores implemented using the file system and Docker Swarm. More implementations are possible and we welcome contributions!
Conclusion
In this article, we have examined some of the considerations in designing InfraKit. As a system meant to be incorporated as a toolkit into larger systems, we aimed for modularity and composability. To achieve these goals, the project specifies interfaces which define the interactions of different subsystems. As a rule, we try to provide different implementations to test and demonstrate these ideas. One such implementation of high availability with InfraKit leverages Docker Engine in Swarm Mode, the native clustering and orchestration technology of the Docker platform, to give the swarm self-healing properties. In the next installment, we will investigate this in greater detail.
Check out the InfraKit repository README for more info, a quick tutorial, and to start experimenting – from plain files to Terraform integration to building a Zookeeper ensemble. Have a look, explore, and send us a PR or open an issue with your ideas!
More Resources:

Check out all the Infrastructure Plumbing projects
Sign up for Docker for AWS or Docker for Azure
Try Docker today


 
The post InfraKit Under the Hood: High Availability appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Introduction to YAML: Creating a Kubernetes deployment

In previous articles, we’ve been talking about how to use Kubernetes to spin up resources. So far, we’ve been working exclusively on the command line, but there’s an easier and more useful way to do it: creating configuration files using YAML. In this article, we’ll look at how YAML works and use it to define first a Kubernetes Pod, and then a Kubernetes Deployment.
YAML Basics
It’s difficult to escape YAML if you’re doing anything related to many software fields – particularly Kubernetes, SDN, and OpenStack. YAML, which stands for Yet Another Markup Language, or YAML Ain’t Markup Language (depending on who you ask), is a human-readable, text-based format for specifying configuration-type information. For example, in this article, we’ll pick apart the YAML definitions for creating first a Pod, and then a Deployment.
Using YAML for K8s definitions gives you a number of advantages, including:

Convenience: You’ll no longer have to add all of your parameters to the command line
Maintenance: YAML files can be added to source control, so you can track changes
Flexibility: You’ll be able to create much more complex structures using YAML than you can on the command line

YAML is a superset of JSON, which means that any valid JSON file is also a valid YAML file. So on the one hand, if you know JSON and you’re only ever going to write your own YAML (as opposed to reading other people’s), you’re all set. On the other hand, that’s not very likely, unfortunately. Even if you’re only trying to find examples on the web, they’re most likely in (non-JSON) YAML, so we might as well get used to it. Still, there may be situations where the JSON format is more convenient, so it’s good to know that it’s available to you.
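As a quick illustration of that point, the same mapping can be written either in YAML’s block style or in JSON-like “flow” style. The keys used here are just a sample borrowed from the examples later in this article:

metadata: {"name": "rss-site", "labels": {"app": "web"}}

This one-liner means exactly the same thing to a YAML processor as writing metadata, name, and labels out on separate indented lines, as we’ll do below.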
Fortunately, there are only two types of structures you need to know about in YAML:

Lists
Maps

That’s it. You might have maps of lists and lists of maps, and so on, but if you’ve got those two structures down, you’re all set. That’s not to say there aren’t more complex things you can do, but in general, this is all you need to get started.
YAML Maps
Let’s start by looking at YAML maps. Maps let you associate name-value pairs, which of course is convenient when you’re trying to set up configuration information. For example, you might have a config file that starts like this:
---
apiVersion: v1
kind: Pod
The first line is a separator, and is optional unless you’re trying to define multiple structures in a single file. From there, as you can see, we have two values, v1 and Pod, mapped to two keys, apiVersion and kind.
This kind of thing is pretty simple, of course, and you can think of it in terms of its JSON equivalent:
{
"apiVersion": "v1",
"kind": "Pod"
}
Notice that in our YAML version, the quotation marks are optional; the processor can tell that you’re looking at a string based on the formatting.
You can also specify more complicated structures by creating a key that maps to another map, rather than a string, as in:
---
apiVersion: v1
kind: Pod
metadata:
 name: rss-site
 labels:
   app: web
In this case, we have a key, metadata, that has as its value a map with 2 more keys, name and labels. The labels key itself has a map as its value. You can nest these as far as you want to.
The YAML processor knows how all of these pieces relate to each other because we’ve indented the lines. In this example I’ve used 2 spaces for readability, but the number of spaces doesn’t matter – as long as it’s at least 1, and as long as you’re CONSISTENT. For example, name and labels are at the same indentation level, so the processor knows they’re both part of the same map; it knows that app is a value for labels because it’s indented further.
Quick note: NEVER use tabs in a YAML file.
So if we were to translate this to JSON, it would look like this:
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": "rss-site",
"labels": {
"app": "web"
}
}
}
Now let’s look at lists.
YAML lists
YAML lists are literally a sequence of objects.  For example:
args:
 - sleep
 - "1000"
 - message
 - "Bring back Firefly!"
As you can see here, you can have virtually any number of items in a list, which is defined as items that start with a dash (-) indented from the parent.  So in JSON, this would be:
{
"args": ["sleep", "1000", "message", "Bring back Firefly!"]
}
And of course, members of the list can also be maps:
---
apiVersion: v1
kind: Pod
metadata:
 name: rss-site
 labels:
   app: web
spec:
 containers:
   - name: front-end
     image: nginx
     ports:
       - containerPort: 80
   - name: rss-reader
     image: nickchase/rss-php-nginx:v1
     ports:
       - containerPort: 88
So as you can see here, we have a list of container “objects”, each of which consists of a name, an image, and a list of ports. Each list item under ports is itself a map that lists the containerPort and its value.
For completeness, let’s quickly look at the JSON equivalent:
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": "rss-site",
"labels": {
"app": "web"
}
},
"spec": {
"containers": [{
"name": "front-end",
"image": "nginx",
"ports": [{
"containerPort": "80"
}]
},
{
"name": "rss-reader",
"image": "nickchase/rss-php-nginx:v1",
"ports": [{
"containerPort": "88"
}]
}]
}
}
As you can see, we’re starting to get pretty complex, and we haven’t even gotten into anything particularly complicated! No wonder YAML is replacing JSON so fast.
So let’s review. We have:

maps, which are groups of name-value pairs
lists, which are individual items
maps of maps
maps of lists
lists of lists
lists of maps

Basically, whatever structure you want to put together, you can do it with those two structures.  
Creating a Pod using YAML
OK, so now that we’ve got the basics out of the way, let’s look at putting this to use. We’re going to first create a Pod, then a Deployment, using YAML.
If you haven’t set up your cluster and kubectl, go ahead and check out this article series on setting up Kubernetes before you go on. It’s OK, we’ll wait…

Back already? Great! Let’s start with a Pod.
Creating the pod file
In our previous example, we described a simple Pod using YAML:
---
apiVersion: v1
kind: Pod
metadata:
 name: rss-site
 labels:
   app: web
spec:
 containers:
   - name: front-end
     image: nginx
     ports:
       - containerPort: 80
   - name: rss-reader
     image: nickchase/rss-php-nginx:v1
     ports:
       - containerPort: 88
Taking it apart one piece at a time, we start with the API version; here it’s just v1. (When we get to deployments, we’ll have to specify a different version because Deployments don’t exist in v1.)
Next, we’re specifying that we want to create a Pod; we might specify instead a Deployment, Job, Service, and so on, depending on what we’re trying to achieve.
Next we specify the metadata. Here we’re specifying the name of the Pod, as well as the label we’ll use to identify the pod to Kubernetes.
Finally, we’ll specify the actual objects that make up the pod. The spec property includes any containers, storage volumes, or other pieces that Kubernetes needs to know about, as well as properties such as whether to restart the container if it fails. You can find a complete list of Kubernetes Pod properties in the Kubernetes API specification, but let’s take a closer look at a typical container definition:
…
spec:
 containers:
   - name: front-end
     image: nginx
     ports:
       - containerPort: 80
   - name: rss-reader
…
In this case, we have a simple, fairly minimal definition: a name (front-end), the image on which it’s based (nginx), and one port on which the container will listen internally (80). Of these, only the name is really required, but in general, if you want it to do anything useful, you’ll need more information.
You can also specify more complex properties, such as a command to run when the container starts, arguments it should use, a working directory, or whether to pull a new copy of the image every time it’s instantiated. You can also specify even deeper information, such as the location of the container’s exit log. Here are the properties you can set for a Container (a brief sketch using a few of them follows the list):

name
image
command
args
workingDir
ports
env
resources
volumeMounts
livenessProbe
readinessProbe
lifecycle
terminationMessagePath
imagePullPolicy
securityContext
stdin
stdinOnce
tty

Now let’s go ahead and actually create the pod.
Creating the pod using the YAML file
The first step, of course, is to go ahead and create a text file.   Call it pod.yaml and add the following text, just as we specified it earlier:
---
apiVersion: v1
kind: Pod
metadata:
 name: rss-site
 labels:
   app: web
spec:
 containers:
   - name: front-end
     image: nginx
     ports:
       - containerPort: 80
   - name: rss-reader
     image: nickchase/rss-php-nginx:v1
     ports:
       - containerPort: 88
Save the file, and tell Kubernetes to create its contents:
> kubectl create -f pod.yaml
pod "rss-site" created
As you can see, K8s references the name we gave the Pod.  You can see that if you ask for a list of the pods:
> kubectl get pods
NAME       READY     STATUS              RESTARTS   AGE
rss-site   0/2       ContainerCreating   0          6s
If you check early enough, you can see that the pod is still being created.  After a few seconds, you should see the containers running:
> kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
rss-site   2/2       Running   0          14s
From here, you can test out the Pod (just as we did in the previous article), but ultimately we want to create a Deployment, so let’s go ahead and delete it so there aren’t any name conflicts:
> kubectl delete pod rss-site
pod "rss-site" deleted
Troubleshooting pod creation
Sometimes, of course, things don’t go as you expect. Maybe you’ve got a networking issue, or you’ve mistyped something in your YAML file. You might see an error like this:
> kubectl get pods
NAME       READY     STATUS         RESTARTS   AGE
rss-site   1/2       ErrImagePull   0          9s
In this case, we can see that one of our containers started up just fine, but there was a problem with the other.  To track down the problem, we can ask Kubernetes for more information on the Pod:
> kubectl describe pod rss-site
Name:           rss-site
Namespace:      default
Node:           10.0.10.7/10.0.10.7
Start Time:     Sun, 08 Jan 2017 08:36:47 +0000
Labels:         app=web
Status:         Pending
IP:             10.200.18.2
Controllers:    <none>
Containers:
 front-end:
   Container ID:               docker://a42edaa6dfbfdf161f3df5bc6af05e740b97fd9ac3d35317a6dcda77b0310759
   Image:                      nginx
   Image ID:                   docker://sha256:01f818af747d88b4ebca7cdabd0c581e406e0e790be72678d257735fad84a15f
   Port:                       80/TCP
   State:                      Running
     Started:                  Sun, 08 Jan 2017 08:36:49 +0000
   Ready:                      True
   Restart Count:              0
   Environment Variables:      <none>
 rss-reader:
   Container ID:
   Image:                      nickchase/rss-php-nginx
   Image ID:
   Port:                       88/TCP
   State:                      Waiting
    Reason:                   ErrImagePull
   Ready:                      False
   Restart Count:              0
   Environment Variables:      <none>
Conditions:
 Type          Status
 Initialized   True
 Ready         False
 PodScheduled  True
No volumes.
QoS Tier:       BestEffort
Events:
 FirstSeen     LastSeen        Count   From                    SubobjectPath  Type             Reason                  Message
 ———     ——–        —–   —-                    ————-  ——– ——                  ——-
 45s           45s             1       {default-scheduler }                   Normal           Scheduled               Successfully assigned rss-site to 10.0.10.7
 44s           44s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulling                 pulling image "nginx"
 45s           43s             2       {kubelet 10.0.10.7}                    Warning          MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulled                  Successfully pulled image "nginx"
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Created                 Created container with docker id a42edaa6dfbf
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Started                 Started container with docker id a42edaa6dfbf
 43s           29s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Normal          Pulling                 pulling image "nickchase/rss-php-nginx"
 42s           26s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Warning         Failed                  Failed to pull image "nickchase/rss-php-nginx": Tag latest not found in repository docker.io/nickchase/rss-php-nginx
 42s           26s             2       {kubelet 10.0.10.7}                    Warning          FailedSync              Error syncing pod, skipping: failed to "StartContainer" for "rss-reader" with ErrImagePull: "Tag latest not found in repository docker.io/nickchase/rss-php-nginx"

 41s   12s     2       {kubelet 10.0.10.7}     spec.containers{rss-reader}    Normal   BackOff         Back-off pulling image "nickchase/rss-php-nginx"
 41s   12s     2       {kubelet 10.0.10.7}                                    Warning  FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "rss-reader" with ImagePullBackOff: "Back-off pulling image "nickchase/rss-php-nginx""
As you can see, there’s a lot of information here, but we’re most interested in the Events – specifically, once the warnings and errors start showing up. From here I was able to quickly see that I’d forgotten to add the :v1 tag to my image, so it was looking for the :latest tag, which didn’t exist.
To fix the problem, I first deleted the Pod, then fixed the YAML file and started again. Alternatively, I could have fixed the repo so that Kubernetes could find what it was looking for, and it would have continued on as though nothing had happened.
Now that we’ve successfully gotten a Pod running, let’s look at doing the same for a Deployment.
Creating a Deployment using YAML
Finally, we’re down to creating the actual Deployment. Before we do that, though, it’s worth understanding what it is we’re actually doing.
K8s, remember, manages container-based resources. In the case of a Deployment, you’re creating a set of resources to be managed. For example, where we created a single instance of the Pod in the previous example, we might create a Deployment to tell Kubernetes to manage a set of replicas of that Pod (literally, a ReplicaSet) to make sure that a certain number of them are always available. So we might start our Deployment definition like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: rss-site
spec:
 replicas: 2
Here we’re specifying the apiVersion as extensions/v1beta1 (remember, Deployments aren’t in v1, as Pods were) and that we want a Deployment. Next we specify the name. We can also specify any other metadata we want, but let’s keep things simple for now.
Finally, we get into the spec. In the Pod spec, we gave information about what actually went into the Pod; we’ll do the same thing here with the Deployment. We’ll start, in this case, by saying that whatever Pods we deploy, we always want to have 2 replicas. You can set this number however you like, of course, and you can also set properties such as the selector that defines the Pods affected by this Deployment, or the minimum number of seconds a pod must be up without any errors before it’s considered “ready”. You can find a full list of the Deployment specification properties in the Kubernetes v1beta1 API reference.
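To make that concrete, here’s a sketch of how those two optional properties might look if we added them to this spec. The values are illustrative only: the selector simply matches the app: web label we’re already using, and 10 seconds is an arbitrary choice:

spec:
 replicas: 2
 minReadySeconds: 10     # a new pod must run 10 seconds without errors before it counts as available
 selector:
   matchLabels:
     app: web            # this Deployment manages Pods carrying this label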
OK, so now that we know we want 2 replicas, we need to answer the question: “Replicas of what?” They’re defined by templates:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: rss-site
spec:
 replicas: 2
 template:
   metadata:
     labels:
       app: web
   spec:
     containers:
       - name: front-end
         image: nginx
         ports:
           - containerPort: 80
       - name: rss-reader
         image: nickchase/rss-php-nginx:v1
         ports:
           - containerPort: 88
Look familiar? It should; it’s virtually identical to the Pod definition in the previous section, and that’s by design. Templates are simply definitions of objects to be replicated – objects that might, in other circumstances, be created on their own.
Now let’s go ahead and create the deployment. Add the YAML to a file called deployment.yaml and point Kubernetes at it:
> kubectl create -f deployment.yaml
deployment "rss-site" created
To see how it’s doing, we can check on the deployments list:
> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            1           7s
As you can see, Kubernetes has started both replicas, but only one is available. You can check the event log by describing the Deployment, as before:
> kubectl describe deployment rss-site
Name:                   rss-site
Namespace:              default
CreationTimestamp:      Mon, 09 Jan 2017 17:42:14 +0000
Labels:                 app=web
Selector:               app=web
Replicas:               2 updated | 2 total | 1 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          rss-site-4056856218 (2/2 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
 ———     ——–        —–   —-                            ————-   ——–        ——                  ——-
 46s           46s             1       {deployment-controller }               Normal           ScalingReplicaSet       Scaled up replica set rss-site-4056856218 to 2
As you can see here, there’s no problem, it just hasn’t finished scaling up yet. Another few seconds, and we can see that both Pods are running:
> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            2           1m
What we’ve seen so far
OK, so let’s review. We’ve basically covered three topics:

YAML is a human-readable, text-based format that lets you easily specify configuration-type information by using a combination of maps of name-value pairs and lists of items (and nested versions of each).
YAML is the most convenient way to work with Kubernetes objects, and in this article we looked at creating Pods and Deployments.
You can get more information on running (or should-be-running) objects by asking Kubernetes to describe them.

So that’s our basic YAML tutorial. We’re going to be tackling a great deal of Kubernetes-related content in the coming months, so if there’s something specific you want to hear about, let us know in the comments, or tweet us at @MirantisIT.
The post Introduction to YAML: Creating a Kubernetes deployment appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

The Dollars and Cents of How to Consume a Private Cloud

In my blog, how does the world consume private clouds?, we reviewed different ways to consume private cloud software:

Do-it-yourself (DIY)
Software distribution from a vendor
Managed service (your hardware & datacenter, software managed by a vendor)
Managed & hosted service (hardware, software, datacenter all outsourced)

Let’s look at the economics of the first three alternatives. Rather than an absolute total-cost-of-ownership (TCO) analysis, we will focus on a relative comparison where line items that are identical in all three scenarios, e.g., hardware costs, will be removed.
Of course, cost is not the only criterion in choosing your consumption model; there are other criteria, such as the ability to recruit OpenStack talent, long-term strategic interests, customizations required, and so on, but those topics are not covered in this blog.
DIY
This initially appears to be a no-brainer option. After all, isn’t open-source software free? Doesn’t one just download it, install it, and be on their merry way? Unfortunately not. Open-source software provides numerous benefits, such as higher innovation velocity, the ability to influence direction and functionality, elimination of vendor lock-in, and short-circuiting standards by defining APIs, drivers, and plugins. But “free” is not one of those benefits, mainly because open-source projects are not finished products. Below are typical costs incurred in a DIY scenario, based on the numerous customers we have had the opportunity to work with who initially tried DIY OpenStack.

Cost
Representative Breakdown

Fixed size engineering team of 13 engineers
(Size independent of cloud scale)
5 Upstream engineers (to fix bugs, work on features, create reference architecture)
5 QA engineers (to package, QA & do interop testing)
3 Lifecycle tooling & monitoring engineers

Fixed size IT/OPS team of 9 engineers
(Size independent of cloud scale)
1 IT architect (to architect, do capacity planning)
1 L3 engineer (troubleshooting)
2 L2 engineers (to deploy, update, upgrade, and do ongoing management)
5 L1 engineers (to monitor, look at basic issues, respond to tenant requests)

Variable size engineering team of 1.1 person per 100 nodes and 1.1 person per 1PB storage
(Size depends on cloud scale, kicks in only when past fixed size minimums – so no double counting)
Compute:
0.3 IT/OPS architects per 100 nodes
0.1 L3 IT/OPS engineer per 100 nodes
0.3 L2 IT/OPS engineer per 100 nodes
0.4 L1 IT/OPS engineer per 100 nodes
Storage:
0.3 IT/OPS architects per 1PB storage
0.1 L3 IT/OPS engineer per 1PB storage
0.3 L2 IT/OPS engineer per 1 PB storage
0.4 L1 IT/OPS engineer per 1 PB storage

Dev/ Test cloud
$50,000 depreciated across 3 years required to test updates, upgrades, configuration changes etc.

Loss of availability
A DIY cloud typically has lower availability than the alternatives. Once you calculate the number of minutes of cloud downtime per year, you can multiply this by the margin loss per minute.
E.g., 98% cloud availability and a $50 loss per minute of cloud downtime equates to a loss of $525,600 per year (the calculation is spelled out just after this table).

Production delays
A DIY cloud typically takes longer to implement, delaying a production deployment.
E.g., 6 months of delay, with each month costing the business $50,000, equates to a $300,000 one-time loss.

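(As a quick sanity check on how these downtime losses are computed: annual loss = (1 − availability) × 525,600 minutes per year × margin loss per minute. For the DIY example above, (1 − 0.98) × 525,600 × $50 = $525,600 per year.)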
 
Software Distribution from a Vendor
In this consumption model, the engineering burden is shifted to the vendor, but the IT/OPS tasks reside with the user. The costs are as follows:

Cost
Representative Breakdown

Fixed size IT/OPS team of 3.5 engineers
(Size independent of cloud scale; the team is much smaller than in the DIY case because there is a vendor to take support calls)
0.5 IT architect (to architect, do capacity planning)
1 L2 engineers (to deploy, update, upgrade, ongoing management)
2 L1 engineers (to monitor, look at basic issues, respond to tenant requests)

Variable size engineering team of 1 person per 100 nodes and 1 person per 1PB storage
(Size varies depending on cloud scale, kicks in only when past fixed size minimums – so no double counting)
Compute:
0.3 IT/OPS architects per 100 nodes
0.3 L2 IT/OPS engineer per 100 nodes
0.4 L1 IT/OPS engineer per 100 nodes
Storage:
0.3 IT/OPS architects per 1PB storage
0.3 L2 IT/OPS engineer per 1 PB storage
0.4 L1 IT/OPS engineer per 1 PB storage

Dev/Test cloud
$50,000 depreciated across 3 years required to test updates, upgrades, configuration changes etc.

Loss of availability
A cloud based on a distro typically has better availability than DIY. Once you calculate the number of minutes of cloud downtime per year, you can multiply this by the margin loss per minute.
E.g., 99.5% cloud availability and a $50 loss per minute of cloud downtime equates to a loss of $262,800 per year.

Software support costs
In lieu of the internal engineering team, in this scenario, there is a support cost payable to the vendor.

 
Managed Service from a Vendor
Here the engineering and IT/OPS burden for the software is shifted to the vendor. The costs are as follows:

Cost
Representative Breakdown

Loss of availability
A managed cloud typically offers the highest availability of the three options. Once you calculate the number of minutes of cloud downtime per year, you can multiply this by the margin loss per minute.
E.g., 99.9% cloud availability and a $50 loss per minute of cloud downtime equates to a loss of $52,560 per year.

Managed services costs
In lieu of the internal engineering & IT/OPS team, in this scenario, there is a managed service fee payable to the vendor.

 
The Bottom Line
Here are the results of three scenarios we ran:

Relative Costs (4-year timeline)

  Initial number of VMs:    3,000     20,000    60,000
  DIY cost/VM:              $1,448    $249      $118
  Distro cost/VM:           $614      $179      $124
  Managed cloud cost/VM:    $298      $189      $149

The net-net is that for small clouds, managed is a very attractive option. For mid-size clouds, a distribution may be more cost effective. For the largest clouds, DIY might be the least expensive option, assuming the IT team can keep availability reasonably high (98.5% or higher).
The post The Dollars and Cents of How to Consume a Private Cloud appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Modeling complex applications with Kubernetes AppController

When you’re first looking at Kubernetes applications, it’s common to see a simple scenario that may include several pieces – but no explicit dependencies. But what happens when you have an application that does include dependencies? For example, what happens if the database must always be configured before the web servers, and so on? It’s common for situations to arise in which resources need to be created in a specific order, which isn’t easily accommodated with today’s templates.
To solve this problem, Mirantis Development Manager for Kubernetes projects Piotr Siwczak explained the concept and implementation of the Kubernetes AppController, which enables you to orchestrate and manage the creation of dependencies for a multi-part application as part of the deployment process.
You can see the entire presentation below:

The post Modeling complex applications with Kubernetes AppController appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Bailian: From Brick & Mortar to Brick & Click using OpenStack, DevOps

Being an established player in a market can definitely have its advantages. If you’re big enough, there are advantages of scale and barriers to entry that can make it possible to get comfortable in your market.
But what happens when the market flips on its ear?
This was the situation in which Shanghai-based Bailian Group found itself several years ago. China’s largest retailer, the chain of more than 6,000 grocery and department stores is spread all over the country.
Many of the brick-and-mortar company’s online competitors, such as JD.com, Suning, and Taobao, were introducing new sites and campaigns, and other traditional enterprises were moving to a multi-channel strategy. In 2014, Bailian decided to join them.
Chinese consumers bought close to $600 billion in online goods during 2015, a 33 percent increase from the prior year. The company knew that if it were going to survive, it had to solve several major problems:

Lack of agility: Some applications were not cloud native and took months to update, and waiting for a new server could take weeks, slowing development of new applications to a crawl.
Server underutilization: As much hardware as Bailian was using, there was still a huge amount of unused capacity that represented wasted money. It had to be streamlined and simplified.

The company set out to create the largest offline-to-online commerce platform in the industry – and to do that, they had to replace their existing IT infrastructure.
Choosing a platform
“Our transition from traditional brick and mortar to omni-channel business presented a great opportunity but an equally large challenge,” says Lu Qichuan, Director of IaaS and Cloud Integration Architecture, Bailian Group. “We needed a large scale IT platform that would enable our innovation and growth.” Thinking big, Lu and his team outlined four guiding principles for their new platform — fast development, dynamic scaling, uncompromised availability, and low cost of operations. These guidelines would support aggressive online growth targets through 2020.
And it wasn’t as though Bailian was a stranger to online commerce. The company was already running a Shanghai grocery delivery service on its existing IT platforms. But it knew that its existing applications, which were not yet cloud-ready, weren’t just complex to support; they also required long development cycles. Add to this the desire to not just port legacy applications such as supply chain logistics and data management to the new, more flexible infrastructure, but also to reclaim applications running on public cloud, and the way forward was clear: private cloud was what Bailian needed.
But which? The company had already zeroed in on many of the advantages of OpenStack. In particular, Bailian Group was impressed by the platform’s continuous innovation, with rich new feature sets every six months.  The IT team also valued OpenStack’s lower licensing and maintenance cost, flexible architecture, and its complete elimination of vendor lock in.
Finally, Bailian Group is a state-owned enterprise, so when China’s Ministry of Industry and Information Technology (MIIT) officially declared its support for the OpenStack ecosystem, the decision was straightforward.
Bailian Group then selected the OpenStack managed services of UMCloud, the Shanghai-based joint venture between Mirantis and UCloud, China’s largest independent public cloud provider. UMCloud’s charter to accelerate OpenStack adoption and embrace China’s “Internet Plus” national policy closely matched Bailian Group’s platform strategy. “We found OpenStack to be the most open and flexible cloud technology, and Mirantis and UMCloud to be the best partners to help us launch our new omni-channel commerce platform,” says Lu.
Start small, think big, scale fast
Bailian Group’s IT leaders worked with Mirantis and UMCloud to quickly build a 20-node MVP (minimum viable product) using the latest OpenStack distribution and Fuel software to deploy and manage all cloud components. The architecture included Ceph distributed storage, Neutron and OVS software defined networking, KVM virtualization, F5 load balancers, and the StackLight logging, monitoring and alerting (LMA) toolchain.

With this early success, the team quickly added capacity and will soon reach 300 nodes and 5000 VMs in this first phase of a three phase, five-year plan. Already a handful of applications are in production on the new platform, including one that manages offline-to-online store advertisement images using distributed Ceph storage. The team has also added new cloud application development tools and processes that foster a CI/CD and DevOps culture and increase innovation and time-to-market. This development environment includes a PaaS platform powered by the Murano application catalog and Sahara for data analysis.  
For phase two, the IT team anticipates expanding the OpenStack platform to 500 nodes across two data centers and more than 10,000 applications by the end of 2018. Phase two will also add a Services Oriented Architecture (SOA), microservices, and dynamic energy savings.
Embracing the strategy of starting small, thinking big, and scaling fast, phase three will extend to 3000 nodes and over 10 million virtual machines and applications by the end of 2020. Phase three will also add an industry cloud and SaaS services that drive prosperity of the retail business and show other retailers the processes and benefits of cloud platform innovation and offline to online digital transformation.
Interested in more information about how Bailian Group is making the most of OpenStack to solve its agility problems? Get the full case study.
The post Bailian: From Brick & Mortar to Brick & Click using OpenStack, DevOps appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

How do I create a new Docker image for my application?

The post How do I create a new Docker image for my application? appeared first on Mirantis | The Pure Play OpenStack Company.
In our previous series, we looked at how to deploy Kubernetes and create a cluster. We also looked at how to deploy an application on the cluster and configure OpenStack instances so you can access it. Now we’re going to get deeper into Kubernetes development by looking at creating new Docker images so you can deploy your own applications and make them available to other people.
How Docker images work
The first thing that we need to understand is how Docker images themselves work.
The key to a Docker image is that it’s a layered file system. In other words, if you start out with an image that’s just the operating system (say Ubuntu) and then add an application (say Nginx), you’ll wind up with something like this:

As you can see, the difference between IMAGE1 and IMAGE2 is just the application itself, and then IMAGE4 has the changes made on layers 3 and 4. So in order to create an image, you are basically starting with a base image and defining the changes to it.
Now, I hear you asking, “But what if I want to start from scratch?” Well, let’s define “from scratch” for a minute. Chances are you mean you want to start with a clean operating system and go from there. Well, in most cases there’s a base image for that, so you’re still starting with a base image. (If not, you can check out the instructions for creating a Docker base image.)
In general, there are two ways to create a new Docker image:

Create an image from an existing container: In this case, you start with an existing image, customize it with the changes you want, then build a new image from it.
Use a Dockerfile: In this case, you use a file of instructions – the Dockerfile – to specify the base image and the changes you want to make to it.

In this article, we’re going to look at both of those methods. Let’s start with creating a new image from an existing container.
Create from an existing container
In this example, we’re going to start with an image that includes the nginx web application server and PHP. To that, we’re going to add support for reading RSS files using an open source package called SimplePie. We’ll then make a new image out of the altered container.
Create the original container
The first thing we need to do is instantiate the original base image.

The very first step is to make sure that your system has Docker installed. If you followed our earlier series on running Kubernetes on OpenStack, you’ve already got this handled. If not, you can follow the instructions here to deploy just Docker.
Next you’ll need to get the base image. In the case of this tutorial, that’s webdevops/php-nginx, which is part of the Docker Hub, so in order to “pull” it you’ll need to have a Docker Hub ID. If you don’t have one already, go to https://hub.docker.com and create a free account.
Go to the command line where you have Docker installed and log in to the Docker hub:
# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don’t have a Docker ID, head over to https://hub.docker.com to create one.
Username: nickchase
Password:
Login Succeeded

We’re going to start with the base image. Instantiate webdevops/php-nginx:
# docker run -dP webdevops/php-nginx
The -dP flag makes sure that the container runs in the background, and that the ports on which it listens are made available.
Make sure the container is running:
# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
1311034ca7dc        webdevops/php-nginx   “/opt/docker/bin/entr”   35 seconds ago      Up 34 seconds       0.0.0.0:32822->80/tcp, 0.0.0.0:32821->443/tcp, 0.0.0.0:32820->9000/tcp   small_bassi

A couple of notes here. First off, because we didn’t specify a particular name for the container, Docker assigned one. In this example, it’s small_bassi. Second, notice that there are 3 ports that are open: 80, 443, and 9000, and that they’ve been mapped to other ports (in this case 32822, 32821 and 32820, respectively – on your machine these ports will be different). This makes it possible for multiple containers to be “listening” on the same port on the same machine. So if we were to try and access a web page being hosted by this container, we’d do it by accessing:

http://localhost:32822

So far, though, there aren’t any pages to access; let’s fix that.
Create a file on the container
In order for us to test this container, we need to create a sample PHP file. We’ll do that by logging into the container and creating a file.

Login to the container
# docker exec -it small_bassi /bin/bash
root@1311034ca7dc:/#
Using exec with the -it switch creates an interactive session for you to execute commands directly within the container. In this case, we’re executing /bin/bash, so we can do whatever else we need.
The document root for the nginx server in this container is at /app, so go ahead and create the /app/index.php file:
vi /app/index.php

Add a simple PHP routine to the file and save it:
<?php
for ($i; $i < 10; $i++){
    echo "Item number ".$i."\n";
}
?>

Now exit the container to go back to the main command line:
root@1311034ca7dc:/# exit

Now let’s test the page. To do that, execute a simple curl command:
# curl http://localhost:32822/index.php
Item number
Item number 1
Item number 2
Item number 3
Item number 4
Item number 5
Item number 6
Item number 7
Item number 8
Item number 9

Now that we know PHP is working, it’s time to go ahead and add RSS.
Make changes to the container
Now that we know PHP is working, we can go ahead and add RSS support using the SimplePie package. To do that, we’ll simply download it to the container and install it.

The first step is to log back into the container:
# docker exec -it small_bassi /bin/bash
root@1311034ca7dc:/#

Next go ahead and use curl to download the package, saving it as a zip file:
root@1311034ca7dc:/# curl https://codeload.github.com/simplepie/simplepie/zip/1.4.3 > simplepie1.4.3.zip

Now you need to install it.  To do that, unzip the package, create the appropriate directories, and copy the necessary files into them:
root@1311034ca7dc:/# unzip simplepie1.4.3.zip
root@1311034ca7dc:/# mkdir /app/php
root@1311034ca7dc:/# mkdir /app/cache
root@1311034ca7dc:/# mkdir /app/php/library
root@1311034ca7dc:/# cp -r s*/library/* /app/php/library/.
root@1311034ca7dc:/# cp s*/autoloader.php /app/php/.
root@1311034ca7dc:/# chmod 777 /app/cache

Now we just need a test page to make sure that it's working. Create a new file in the /app directory:
root@1311034ca7dc:/# vi /app/rss.php

Now add the sample code. (This code is excerpted from the SimplePie website, but I've cut it down for brevity's sake, since it's not really the focus of what we're doing. Please see the original version for comments, etc.)
<?php
require_once('php/autoloader.php');
$feed = new SimplePie();
$feed->set_feed_url("http://rss.cnn.com/rss/edition.rss");
$feed->init();
$feed->handle_content_type();
?>
<html>
<head><title>Sample SimplePie Page</title></head>
<body>
<div class="header">
<h1><a href="<?php echo $feed->get_permalink(); ?>"><?php echo $feed->get_title(); ?></a></h1>
<p><?php echo $feed->get_description(); ?></p>
</div>
<?php foreach ($feed->get_items() as $item): ?>
<div class="item">
<h2><a href="<?php echo $item->get_permalink(); ?>"><?php echo $item->get_title(); ?></a></h2>
<p><?php echo $item->get_description(); ?></p>
<p><small>Posted on <?php echo $item->get_date('j F Y | g:i a'); ?></small></p>
</div>
<?php endforeach; ?>
</body>
</html>

Exit the container:
root@1311034ca7dc:/# exit

Now let's make sure it's working. Remember, we need to access the container on the alternate port (check docker ps to see what ports you need to use):
# curl http://localhost:32822/rss.php
<html>
<head><title>Sample SimplePie Page</title></head>
<body>
       <div class="header">
               <h1><a href="http://www.cnn.com/intl_index.html">CNN.com – RSS Channel – Intl Homepage – News</a></h1>
               <p>CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.</p>
       </div>

Now that we have a working container, we can turn it into a new image.
Create the new image
Now that we have a working container, we want to turn it into an image and push it to the Docker Hub so we can use it. The name you'll use for your image typically has three parts:
[username]/[imagename]:[tags]
For example, my Docker Hub username is nickchase, so I'm going to name version 1 of my new RSS-ified image:
nickchase/rss-php-nginx:v1

If, back when we first started talking about the differences between layers, you were reminded of version control systems, you're right. The first step in creating a new image is to commit the changes that we've already made, adding a message about the changes and specifying the author, as in:
docker commit -m "Message" -a "Author Name" [containername] [imagename]
So in my case, that will be:
# docker commit -m "Added RSS" -a "Nick Chase" small_bassi nickchase/rss-php-nginx:v1
sha256:148f1dbceb292b38b40ae6cb7f12f096acf95d85bb3ead40e07d6b1621ad529e

Next we want to go ahead and push the new image to the Docker Hub so we can use it:
# docker push nickchase/rss-php-nginx:v1
The push refers to a repository [docker.io/nickchase/rss-php-nginx]
69671563c949: Pushed
3e78222b8621: Pushed
5b33e5939134: Pushed
54798bfbf935: Pushed
b8c21f8faea9: Pushed

v1: digest: sha256:48da56a77fe4ecff4917121365d8e0ce615ebbdfe31f48a996255f5592894e2b size: 3667
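(If you'd also like the image to answer to a :latest tag, docker tag can add one before you push; the extra tag here is just an example.)
# docker tag nickchase/rss-php-nginx:v1 nickchase/rss-php-nginx:latest
# docker push nickchase/rss-php-nginx:latest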

Now if you list the images that are available, you should see it in the list:
# docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
nickchase/rss-php-nginx   v1                  148f1dbceb29        11 minutes ago      677 MB
nginx                     latest              abf312888d13        3 days ago          181.5 MB
webdevops/php-nginx       latest              93037e4c8998        3 days ago          675.4 MB
ubuntu                    latest              e4415b714b62        2 weeks ago         128.1 MB
hello-world               latest              c54a2cc56cbb        5 months ago        1.848 kB

Now let's go ahead and test it. We'll start by stopping and removing the original container, so we can remove the local copy of the image:
# docker stop small_bassi
# docker rm small_bassi

Now we can remove the image itself:
# docker rmi nickchase/rss-php-nginx:v1
Untagged: nickchase/rss-php-nginx:v1
Untagged: nickchase/rss-php-nginx@sha256:0a33c7a25a6d2db4b82517b039e9e21a77e5e2262206fdcac8b96f5afa64d96c
Deleted: sha256:208c4fc237bb6b2d3ef8fa16a78e105d80d00d75fe0792e1dcc77aa0835455e3
Deleted: sha256:d7de4d9c00136e2852c65e228944a3dea3712a4e7bcb477eb7393cd309be179b

If you run docker images again, you'll see that it's gone:
# docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
nginx                     latest              abf312888d13        3 days ago          181.5 MB
webdevops/php-nginx       latest              93037e4c8998        3 days ago          675.4 MB
ubuntu                    latest              e4415b714b62        2 weeks ago         128.1 MB
hello-world               latest              c54a2cc56cbb        5 months ago        1.848 kB

Now if you create a new container based on this image, you will see it get downloaded from the Docker Hub:
# docker run -dP nickchase/rss-php-nginx:v1

Finally, test the new container by getting the new port...
# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
13a423324d80        nickchase/rss-php-nginx:v1   "/opt/docker/bin/entr"   6 seconds ago       Up 5 seconds        0.0.0.0:32825->80/tcp, 0.0.0.0:32824->443/tcp, 0.0.0.0:32823->9000/tcp   goofy_brahmagupta

... and accessing the rss.php file.
curl http://localhost:32825/rss.php

You should see the same output as before.
Use a Dockerfile
Manually creating a new image from an existing container gives you a lot of control, but it does have one downside: if the base image gets updated, you won't necessarily get the benefit of those changes.
For example, suppose I want an image that always builds on the latest version of the Ubuntu operating system. The previous method doesn't give us that.
Instead, we can use a Dockerfile, which lets us specify a particular version of a base image, or specify that we always want the latest version.
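The difference comes down to the FROM line; either of the following would work as the first line of a Dockerfile (the tags are only illustrations):
FROM ubuntu:16.04   # pinned: builds always start from this exact release
FROM ubuntu:latest  # floating: picks up the newest ubuntu image each time you build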
For example, let's say we want to create a version of the rss-php-nginx container that starts with v1 but serves on port 88 (rather than the traditional 80). To do that, we basically want to perform three steps:

Start with the desired version of the base image.
Tell Nginx to listen on port 88 rather than 80.
Let Docker know that the container listens on port 88.

We'll do that by creating a local context, downloading a local copy of the configuration file, updating it, and creating a Dockerfile that includes instructions for building the new container.
Let's get that set up.

Create a working directory in which to build your new container.  What you call it is completely up to you. I called mine k8stutorial.
From the command line, in the local context, start by instantiating the image so we have something to work from:
# docker run -dP nickchase/rss-php-nginx:v1

Now get a copy of the existing vhost.conf file. In this particular container, you can find it at /opt/docker/etc/nginx/vhost.conf.  
# docker cp amazing_minsky:/opt/docker/etc/nginx/vhost.conf .
Note that I now have a new container named amazing_minsky, which replaced small_bassi. At this point you should have a copy of vhost.conf in your local directory; in my case, that's ~/k8stutorial/vhost.conf.
Using a text editor, open the file and specify that nginx should listen on port 88 rather than port 80:
server {
   listen   88 default_server;
   listen 8000 default_server;
   server_name  _ *.vm docker;

Next we want to go ahead and create the Dockerfile.  You can do this in any text editor.  The file, which should be called Dockerfile, should start by specifying the base image:
FROM nickchase/rss-php-nginx:v1

Any container that is instantiated from this image is going to be listening on port 80, so we want to go ahead and overwrite that Nginx config file with the one we've edited:
FROM nickchase/rss-php-nginx:v1
COPY vhost.conf /opt/docker/etc/nginx/vhost.conf

Finally, we need to tell Docker that the container listens on port 88:
FROM nickchase/rss-php-nginx:v1
COPY vhost.conf /opt/docker/etc/nginx/vhost.conf
EXPOSE 88

Now we need to build the actual image. To do that, we'll use the docker build command:
# docker build -t nickchase/rss-php-nginx:v2 .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM nickchase/rss-php-nginx:v1
 ---> 208c4fc237bb
Step 2 : EXPOSE 88
 ---> Running in 23408def6214
 ---> 93a43c3df834
Removing intermediate container 23408def6214
Successfully built 93a43c3df834
Notice that we've specified the image name, along with a new tag (you can also create a completely new image) and the directory in which to find the Dockerfile and any supporting files.
Finally, push the new image to the hub:
# docker push nickchase/rss-php-nginx:v2

Test out your new image by instantiating it and pulling up the test page.
# docker run -dP nickchase/rss-php-nginx:v2
root@kubeclient:/home/ubuntu/tutorial# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                                                                           NAMES
04f4b384e8e2        nickchase/rss-php-nginx:v2   "/opt/docker/bin/entr"   8 seconds ago       Up 7 seconds        0.0.0.0:32829->80/tcp, 0.0.0.0:32828->88/tcp, 0.0.0.0:32827->443/tcp, 0.0.0.0:32826->9000/tcp   goofy_brahmagupta
13a423324d80        nickchase/rss-php-nginx:v1   "/opt/docker/bin/entr"   12 minutes ago      Up 12 minutes       0.0.0.0:32825->80/tcp, 0.0.0.0:32824->443/tcp, 0.0.0.0:32823->9000/tcp                          amazing_minsky

Notice that you now have a mapped port for port 88, which you can call:
curl http://localhost:32828/rss.php
Other things you can do with Dockerfile
Docker defines a whole list of things you can do with a Dockerfile, such as:

.dockerignore
FROM
MAINTAINER
RUN
CMD
EXPOSE
ENV
COPY
ENTRYPOINT
VOLUME
USER
WORKDIR
ARG
ONBUILD
STOPSIGNAL
LABEL

As you can see, there's quite a bit of flexibility here. Check the documentation for more information; wsargent has also published a good Dockerfile cheat sheet.
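To give a sense of how a few of these fit together, here's a small illustrative Dockerfile; the base image, packages, port, and command below are placeholders rather than part of the image we built above:
FROM ubuntu:16.04
LABEL maintainer="you@example.com"
ENV APP_HOME=/app
WORKDIR $APP_HOME
COPY . $APP_HOME
RUN apt-get update && apt-get install -y curl
EXPOSE 8080
CMD ["./start.sh"]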
Moving forward
Creating new Docker images that you or other developers can use is pretty straightforward. You can manually create and commit changes, or script them using a Dockerfile.
In our next tutorial, we'll look at using YAML to manage these containers with Kubernetes.
The post How do I create a new Docker image for my application? appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

What should operators consider when deploying NFV

NFV comes with big promises, and one of the key drivers for NFV is to allow operators to rapidly launch and scale new applications. Today, if an operator wants to launch a new application, the process can be rather complex. It requires a lot of preparation and planning: data center space has to be allocated, and specialized servers, networking, and storage have to be acquired. The application has to be architected for five nines of availability and integrated with other network elements. Given the costs involved in this process, every project is scrutinized by finance departments, and this cautious approach leaves very little room for innovation.
In an NFV world, every application is a piece of software that can run on virtualized servers, storage and networks. Keeping the hardware separate from software gives a new level of flexibility. NFV infrastructure is built as a utility, and when it is time to launch new applications, you do not have to worry about such things as finding racks or integrating servers or even the storage. All of this is already provided by NFV and it is just a matter of allocating the right resources.
Additionally, integration becomes easier as networks are virtualized and pre-integrated. This works fine as long as the application is simple and not subscriber-aware. If the application is subscriber-aware, it needs to integrate with provisioning systems, and for a typical operator this can be a nine- to twelve-month process that can cost up to a million dollars per integration. For subscriber-aware applications, then, the agility of NFV can easily be lost.
Fortunately, you can recover that agility by using a built-in virtual User Data Repository (vUDR, or Subscriber Data Management as a Service) as part of your NFV infrastructure. That is why some of the more forward-looking operators are placing a vUDR as one of the first subscriber-aware applications in the NFV cloud.
There are clear benefits to this approach. Once the vUDR is in place, all subscriber-related information is readily available to the applications that want to use it. New applications launched on NFV don't need a one-to-one provisioning integration, and operators can start enjoying ‘agility’ for subscriber-aware applications too.
Subscriber Data Management (SDM) is a mission-critical application. Before any voice connection can be established, any data service accessed, or any message sent, internal systems need to authenticate a subscriber and their device to authorize the request. For a communications network, SDM is the life-giving oxygen: services simply cannot be offered without authenticating the subscriber. The Openwave Mobility vUDR SDM solution has been validated within the Mirantis OpenStack environment, and deploying it as the first NFV application helps operators maximize the agility benefit promised by NFV.
Openwave Mobility vUDR is validated with Mirantis OpenStack
Openwave Mobility vUDR is the industry’s first NFV-enabled Subscriber Data Management solution, and has been deployed by several tier one operators globally to manage subscriber profile data across voice and data networks.
Openwave Mobility's cloud-based vUDR goes above and beyond traditional UDR systems. Built-in federation and replication mean that network applications can read and write data from any data center or data silo, and while NFV infrastructure is typically built using commodity servers that provide 99.9% availability at best, Openwave Mobility's vUDR uses proprietary software processes to deliver 99.999% (five-nines) availability on commodity virtual machines. vUDR is nevertheless lightweight and agile, and it has enabled our customers to on-board new applications in just two weeks, compared to the average subscriber data provisioning integration that can take nine months.
Openwave Mobility's vUDR has been validated within the Mirantis OpenStack environment. It provides the crucial SDM element for NFV clouds, so operators who deploy it can truly realize the agility that NFV promises.
The post What should operators consider when deploying NFV appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

Creating and accessing a Kubernetes cluster on OpenStack, part 3: Run the application

Finally, you're ready to actually interact with the Kubernetes API that you installed. The general process goes like this:

Define the security credentials for accessing your applications.
Deploy a containerized app to the cluster.
Expose the app to the outside world so you can access it.

Let's see how that works.
Define security parameters for your Kubernetes app
The first thing that you need to understand is that while we have a cluster of machines that are tied together with the Kubernetes API, it can support multiple environments, or contexts, each with its own security credentials.
For example, if you were to create an application with a context that relies on a specific certificate authority, I could then create a second one that relies on another certificate authority. In this way, we both control our own destiny, but neither of us gets to see the other's application.
The process goes like this:

First, we need to create a new certificate authority which will be used to sign the rest of our certificates. Create it with these commands:
$ sudo openssl genrsa -out ca-key.pem 2048
$ sudo openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

At this point you should have two files: ca-key.pem and ca.pem. You'll use them to create the cluster administrator keypair. To do that, you'll create a private key (admin-key.pem), then create a certificate signing request (admin.csr), then sign it to create the public key (admin.pem).
$ sudo openssl genrsa -out admin-key.pem 2048
$ sudo openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
$ sudo openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365

Now that you have these files, you can use them to configure the Kubernetes client.
Download and configure the Kubernetes client

Start by downloading the kubectl client on your machine. In this case, we're using Linux; adjust appropriately for your OS.
$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.4.3/bin/linux/amd64/kubectl

Make kubectl executable:
$ chmod +x kubectl

Move it to your path:
$ sudo mv kubectl /usr/local/bin/kubectl

Now it's time to set the default cluster. To do that, you'll want to use the URL that you got from the environment deployment log. Also, make sure you provide the full location of the ca.pem file, as in:
$ kubectl config set-cluster default-cluster --server=[KUBERNETES_API_URL] --certificate-authority=[FULL-PATH-TO]/ca.pem
In my case, this works out to:
$ kubectl config set-cluster default-cluster --server=http://172.18.237.137:8080 --certificate-authority=/home/ubuntu/ca.pem

Next you need to tell kubectl where to find the credentials, as in:
$ kubectl config set-credentials default-admin --certificate-authority=[FULL-PATH-TO]/ca.pem --client-key=[FULL-PATH-TO]/admin-key.pem --client-certificate=[FULL-PATH-TO]/admin.pem
Again, in my case this works out to:
$ kubectl config set-credentials default-admin --certificate-authority=/home/ubuntu/ca.pem --client-key=/home/ubuntu/admin-key.pem --client-certificate=/home/ubuntu/admin.pem

Now you need to set the context so kubectl knows to use those credentials:
$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
$ kubectl config use-context default-system

Now you should be able to see the cluster:
$ kubectl cluster-info

Kubernetes master is running at http://172.18.237.137:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
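If you want an extra sanity check, kubectl get nodes will confirm that the worker nodes have registered with the API server:
$ kubectl get nodes
In this setup you should see kube-2 and kube-3 listed, ideally with a Ready status.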

Terrific!  Now we just need to go ahead and run something on it.
Running an app on Kubernetes
Running an app on Kubernetes is pretty simple, and it's much like firing up a container. We'll go into the details of what everything means later, but for now, just follow along.

Start by creating a deployment that runs the nginx web server:
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80

deployment "my-nginx" created

By default, containers are only visible to other members of the cluster. To expose your service to the public internet, run:
$ kubectl expose deployment my-nginx --target-port=80 --type=NodePort

service "my-nginx" exposed

OK, so now it's exposed, but where? We used the NodePort type, which means that the external IP is just the IP of the node that it's running on, as you can see if you get a list of services:
$ kubectl get services

NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   11.1.0.1      <none>        443/TCP   3d
my-nginx     11.1.116.61   <nodes>       80/TCP    18s

So we know that the "nodes" referenced here are kube-2 and kube-3 (remember, kube-1 is the API server), and we can get their IP addresses from the Instances page...

... but that doesn't tell us what the actual port number is. To get that, we can describe the service itself:
$ kubectl describe services my-nginx

Name:                   my-nginx
Namespace:              default
Labels:                 run=my-nginx
Selector:               run=my-nginx
Type:                   NodePort
IP:                     11.1.116.61
Port:                   <unset> 80/TCP
NodePort:               <unset> 32386/TCP
Endpoints:              10.200.41.2:80,10.200.9.2:80
Session Affinity:       None
No events.

So the service is available on port 32386 of whatever machine you hit. But if you try to access it, something's still not right:
$ curl http://172.18.237.138:32386

curl: (7) Failed to connect to 172.18.237.138 port 32386: Connection timed out

The problem is that by default this port is closed, blocked by the default security group. To fix that, create a new security group you can apply to the Kubernetes nodes. Start by choosing Project->Compute->Access & Security->+Create Security Group.
Specify a name for the group and click Create Security Group.
Click Manage Rules for the new group.

By default, there's no access in; we need to change that. Click +Add Rule.

Here we want a Custom TCP Rule that allows Ingress on port 32386 (or whatever port Kubernetes assigned as the NodePort). You can restrict access to specific IP addresses, but we'll leave it open in this case. Click Add to finish adding the rule.

Now that you have a functioning security group, you need to add it to the instances Kubernetes is using as worker nodes, in this case the kube-2 and kube-3 nodes. Start by clicking the small triangle on the button at the end of the line for each instance and choosing Edit Security Groups.
You should see the new security group in the left-hand panel; click the plus sign (+) to add it to the instance:

Click Save to save the changes.

Add the security group to all worker nodes in the cluster.
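If you prefer the command line to the dashboard, roughly the same steps can be done with the openstack client (assuming it's installed and your credentials are sourced; the group name is arbitrary, and the port should be whatever NodePort Kubernetes actually assigned):
$ openstack security group create kube-nodeport
$ openstack security group rule create --protocol tcp --dst-port 32386 --ingress kube-nodeport
$ openstack server add security group kube-2 kube-nodeport
$ openstack server add security group kube-3 kube-nodeport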
Now you can try again:
$ curl http://172.18.237.138:32386

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
   body {
       width: 35em;
       margin: 0 auto;
       font-family: Tahoma, Verdana, Arial, sans-serif;
   }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
As you can see, you can now access the Nginx container you deployed on the Kubernetes cluster.

Coming up, we'll look at some of the more useful things you can do with containers and with Kubernetes. Got something you'd like to see? Let us know in the comments below.
The post Creating and accessing a Kubernetes cluster on OpenStack, part 3: Run the application appeared first on Mirantis | The Pure Play OpenStack Company.
Quelle: Mirantis

New Dockercast episode and interview with Docker Captain Laura Frank

We recently had the opportunity to catch up with the amazing Laura Frank. Laura is a developer focused on making tools for other developers. As an engineer at Codeship, she works on improving the Docker infrastructure and overall experience for users on Codeship. Previously, she worked on several open source projects to support Docker in the early stages of the project, including Panamax and ImageLayers. She currently lives in Berlin.
Laura is also a Docker Captain, a distinction that Docker awards select members of the community that are experts in their field and passionate about sharing their Docker knowledge with others.
As we do with all of these podcasts, we begin with a little bit of history: "How did you get here?" Then we dive into the Codeship offering and how it optimizes its delivery flow by using Docker containers for everything. We then end up with "What's the coolest Docker story you have?" I hope you enjoy it; please feel free to comment and leave suggestions.
 

In addition to the questions covered in the podcast, we've had the chance to ask Laura a couple of additional questions below.
How has Docker impacted what you do on a daily basis?
I'm lucky to work with Docker every day in my role as an engineer at Codeship. In addition to appreciating the technical aspects of Docker, I really enjoy seeing the different ways the Docker ecosystem as a whole empowers engineering teams to move faster. Docker is really impactful at two levels: we can use Docker to simplify the way we build and distribute software. But we can also solve problems in more unique ways because containerization is more accessible. It's not just about running a production application in containers; you can use Docker to provide a distributed system of containers in order to scale up and down and handle task processing in interesting ways. To me, Docker is really about reducing friction in the development process and allowing engineers to focus on the stuff we're best at: solving complex problems in interesting ways.
As a Docker Captain, how do you share that learning with the community?
I’m usually in front of a crowd, talking through a set of problems that can be solved with Docker. There are lots of great ways to share information with others, from writing a blog post or presenting a webinar, to answering questions at a meetup. I’m very hands on when it comes to helping people wrap their heads around the questions they have when using Docker. I think the best way to help is to open my laptop and work through the issues together.
Since Docker is such a complex and vast ecosystem, it's important that Captains, and all of us who lead different areas of the Docker community, understand that each person has different levels of expertise with different components. The goal isn't to impress people with how smart you are or what cool things you've built; the goal is to help your peers become better at what they do. But the most important point is that everyone has something to contribute to the community.
Who are you when you’re not online?
I really love to get far away from computers when I'm not at work. I think there are so many other interesting parts of me that aren't related to the work I do in the Docker community, and are separate from me as a technologist. You have to strike the right balance to stay focused and healthy. I love to adventure outdoors: canoeing and kayaking in the summer, in addition to running around the city, hiking, and camping. Eliminating distractions and giving my brain some time to recover helps me think more clearly and strategically during the week.
How did you first get involved with Docker?
In 2013, I worked at HP Cloud on an infrastructure engineering team, and someone shared Solomon’s lightning talk from PyCon in an IRC or HipChat channel. I remember being really intrigued by the technical complexity and greater vision that he expressed. Later, my boss from HP left to join CenturyLink Labs, where he was building out a team to work on Docker-related developer tools, and a handful of us went with him. It was a huge gamble. There wasn’t much in the way of dev tools built around Docker, and those projects were really fun and exciting to work on, because we were just figuring out everything as we went along. My team was behind Panamax, ImageLayers, Lorry, and Dray, to name a few. If someone were to take me back to 2013 and tell me that this weirdly obscure new project would be the thing I spend 100% of my time working with, I wouldn’t have believed them, but I’m really glad it’s true.
If you could switch your job with anyone else, whose job would you want?
I'd be a pilot. I think it also shares common qualities with my role as an engineer: I love the high-level view and seeing lots of complex systems working together. Plus, I think I'd look pretty cool in a tactical jumpsuit. Maybe I'll float that idea by the rest of the engineers on my team as a possible dress code update.
Do you have a favorite quote?
"Don't half-ass two things. Whole-ass one thing." - Ron Swanson. It's really tempting to try to learn everything about everything, especially related to technology that is constantly changing. The Docker world can be pretty chaotic. Sometimes it's better to slow down, focus on one component of the ecosystem, and rely on the expertise of your peers for guidance in other areas. The Docker community is a great place to see this in action, because you simply can't do it all yourself. You have to rely on the contributions of others. And you know, finish unloading the dishwasher before starting to clean the bathroom. Ron Swanson is a wise man in all areas of life.
 
The post New Dockercast episode and interview with Docker Captain Laura Frank appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/