Introduction to YAML: Creating a Kubernetes deployment

The post Introduction to YAML: Creating a Kubernetes deployment appeared first on Mirantis | The Pure Play OpenStack Company.
In previous articles, we've been talking about how to use Kubernetes to spin up resources. So far, we've been working exclusively on the command line, but there's an easier and more useful way to do it: creating configuration files using YAML. In this article, we'll look at how YAML works and use it to define first a Kubernetes Pod, and then a Kubernetes Deployment.
YAML Basics
It's difficult to escape YAML if you're doing anything related to many software fields, particularly Kubernetes, SDN, and OpenStack. YAML, which stands for Yet Another Markup Language, or YAML Ain't Markup Language (depending who you ask), is a human-readable, text-based format for specifying configuration-type information. For example, in this article, we'll pick apart the YAML definitions for creating first a Pod, and then a Deployment.
Using YAML for K8s definitions gives you a number of advantages, including:

Convenience: You'll no longer have to add all of your parameters to the command line
Maintenance: YAML files can be added to source control, so you can track changes
Flexibility: You&8217;ll be able to create much more complex structures using YAML than you can on the command line

YAML is a superset of JSON, which means that any valid JSON file is also a valid YAML file. So on the one hand, if you know JSON and you're only ever going to write your own YAML (as opposed to reading other people's), you're all set. On the other hand, that's not very likely, unfortunately. Even if you're only trying to find examples on the web, they're most likely in (non-JSON) YAML, so we might as well get used to it. Still, there may be situations where the JSON format is more convenient, so it's good to know that it's available to you.
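To make the superset relationship concrete, here's a small sketch in Python. It uses only the standard library's json module; the mention of PyYAML (a third-party library) is an assumption you'd need to install to try for yourself:

```python
import json

# A Pod snippet written as JSON. Because YAML is a superset of JSON,
# this exact text is also valid YAML: with PyYAML installed (a third-party
# library), yaml.safe_load(pod_text) would return the same dict.
pod_text = """
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "rss-site",
    "labels": {"app": "web"}
  }
}
"""

pod = json.loads(pod_text)
print(pod["kind"])                       # Pod
print(pod["metadata"]["labels"]["app"])  # web
```

Either way, the parsed result is an ordinary nested dictionary, which is exactly the maps-and-lists model described below.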
Fortunately, there are only two types of structures you need to know about in YAML:

Lists
Maps

That's it. You might have maps of lists and lists of maps, and so on, but if you've got those two structures down, you're all set. That's not to say there aren't more complex things you can do, but in general, this is all you need to get started.
YAML Maps
Let's start by looking at YAML maps. Maps let you associate name-value pairs, which of course is convenient when you're trying to set up configuration information. For example, you might have a config file that starts like this:

---
apiVersion: v1
kind: Pod
The first line is a separator, and is optional unless you're trying to define multiple structures in a single file. From there, as you can see, we have two values, v1 and Pod, mapped to two keys, apiVersion and kind.
This kind of thing is pretty simple, of course, and you can think of it in terms of its JSON equivalent:
{
  "apiVersion": "v1",
  "kind": "Pod"
}
Notice that in our YAML version, the quotation marks are optional; the processor can tell that you're looking at a string based on the formatting.
You can also specify more complicated structures by creating a key that maps to another map, rather than a string, as in:

apiVersion: v1
kind: Pod
metadata:
 name: rss-site
 labels:
   app: web
In this case, we have a key, metadata, that has as its value a map with two more keys, name and labels. The labels key itself has a map as its value. You can nest these as far as you want to.
The YAML processor knows how all of these pieces relate to each other because we've indented the lines. In this example I've used 2 spaces for readability, but the number of spaces doesn't matter, as long as it's at least 1, and as long as you're CONSISTENT. For example, name and labels are at the same indentation level, so the processor knows they're both part of the same map; it knows that app is a value for labels because it's indented further.
Quick note: NEVER use tabs in a YAML file.
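A stray tab is easy to miss by eye, so one option is a tiny pre-flight check before handing a file to Kubernetes. This is a homegrown sketch in plain Python, not an official tool:

```python
def find_tab_indents(text):
    """Return 1-based line numbers whose leading whitespace contains a tab."""
    bad = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        # Everything before the first non-whitespace character is the indent.
        indent = line[:len(line) - len(line.lstrip())]
        if "\t" in indent:
            bad.append(lineno)
    return bad

# Line 3 of this sample is indented with a tab instead of spaces.
sample = "metadata:\n name: rss-site\n\tlabels:\n   app: web\n"
print(find_tab_indents(sample))  # [3]
```

Running a check like this before kubectl sees the file gives a friendlier error message than a YAML parse failure.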
So if we were to translate this to JSON, it would look like this:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "rss-site",
    "labels": {
      "app": "web"
    }
  }
}
Now let's look at lists.
YAML lists
YAML lists are literally a sequence of objects. For example:
args:
 - sleep
 - "1000"
 - message
 - "Bring back Firefly!"
As you can see here, you can have virtually any number of items in a list, which is defined as items that start with a dash (-) indented from the parent. So in JSON, this would be:
{
  "args": ["sleep", "1000", "message", "Bring back Firefly!"]
}
And of course, members of the list can also be maps:

apiVersion: v1
kind: Pod
metadata:
 name: rss-site
 labels:
   app: web
spec:
 containers:
   - name: front-end
     image: nginx
     ports:
       - containerPort: 80
   - name: rss-reader
     image: nickchase/rss-php-nginx:v1
     ports:
       - containerPort: 88
So as you can see here, we have a list of container "objects", each of which consists of a name, an image, and a list of ports. Each list item under ports is itself a map that lists the containerPort and its value.
For completeness, let's quickly look at the JSON equivalent:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "rss-site",
    "labels": {
      "app": "web"
    }
  },
  "spec": {
    "containers": [{
      "name": "front-end",
      "image": "nginx",
      "ports": [{
        "containerPort": 80
      }]
    },
    {
      "name": "rss-reader",
      "image": "nickchase/rss-php-nginx:v1",
      "ports": [{
        "containerPort": 88
      }]
    }]
  }
}
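Since these YAML structures are just maps and lists, you can also build the identical Pod in code with plain dicts and lists. Here's a sketch in Python (no Kubernetes libraries involved):

```python
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {                       # a map of maps
        "name": "rss-site",
        "labels": {"app": "web"},
    },
    "spec": {
        "containers": [                 # a list of maps
            {"name": "front-end", "image": "nginx",
             "ports": [{"containerPort": 80}]},
            {"name": "rss-reader", "image": "nickchase/rss-php-nginx:v1",
             "ports": [{"containerPort": 88}]},
        ],
    },
}

# json.dumps gives the JSON form shown above; PyYAML's yaml.safe_dump
# (third-party, if installed) would give the YAML form of the same dict.
print(json.dumps(pod, indent=2))
```

Whichever serialization you pick, the underlying structure is the same nesting of maps and lists reviewed below.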
As you can see, we're starting to get pretty complex, and we haven't even gotten into anything particularly complicated! No wonder YAML is replacing JSON so fast.
So let's review. We have:

maps, which are groups of name-value pairs
lists, which are sequences of individual items
maps of maps
maps of lists
lists of lists
lists of maps

Basically, whatever structure you want to put together, you can do it with those two structures.
Creating a Pod using YAML
OK, so now that we've got the basics out of the way, let's look at putting this to use. We're going to first create a Pod, then a Deployment, using YAML.
If you haven't set up your cluster and kubectl, go ahead and check out this article series on setting up Kubernetes before you go on. It's OK, we'll wait…

Back already? Great! Let's start with a Pod.
Creating the pod file
In our previous example, we described a simple Pod using YAML:
---
apiVersion: v1
kind: Pod
metadata:
 name: rss-site
 labels:
   app: web
spec:
 containers:
   - name: front-end
     image: nginx
     ports:
       - containerPort: 80
   - name: rss-reader
     image: nickchase/rss-php-nginx:v1
     ports:
       - containerPort: 88
Taking it apart one piece at a time, we start with the API version; here it's just v1. (When we get to Deployments, we'll have to specify a different version, because Deployments don't exist in v1.)
Next, we're specifying that we want to create a Pod; we might specify instead a Deployment, Job, Service, and so on, depending on what we're trying to achieve.
Next we specify the metadata. Here we're specifying the name of the Pod, as well as the label we'll use to identify the pod to Kubernetes.
Finally, we'll specify the actual objects that make up the pod. The spec property includes any containers, storage volumes, or other pieces that Kubernetes needs to know about, as well as properties such as whether to restart the container if it fails. You can find a complete list of Kubernetes Pod properties in the Kubernetes API specification, but let's take a closer look at a typical container definition:
...
spec:
 containers:
   - name: front-end
     image: nginx
     ports:
       - containerPort: 80
   - name: rss-reader
...
In this case, we have a simple, fairly minimal definition: a name (front-end), the image on which it's based (nginx), and one port on which the container will listen internally (80). Of these, only the name is really required, but in general, if you want it to do anything useful, you'll need more information.
You can also specify more complex properties, such as a command to run when the container starts, arguments it should use, a working directory, or whether to pull a new copy of the image every time it's instantiated. You can also specify even deeper information, such as the location of the container's exit log. Here are the properties you can set for a Container:

name
image
command
args
workingDir
ports
env
resources
volumeMounts
livenessProbe
readinessProbe
lifecycle
terminationMessagePath
imagePullPolicy
securityContext
stdin
stdinOnce
tty
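To give a feel for how a few of these combine, here's a sketch of a richer container definition. The field names above are real Container properties, but the command, args, and workingDir values here are made up purely for illustration:

```yaml
spec:
 containers:
   - name: front-end
     image: nginx
     imagePullPolicy: Always          # re-pull the image on every start
     command: ["nginx"]               # overrides the image's entrypoint
     args: ["-g", "daemon off;"]      # arguments passed to the command
     workingDir: /usr/share/nginx/html
     ports:
       - containerPort: 80
```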

Now let's go ahead and actually create the pod.
Creating the pod using the YAML file
The first step, of course, is to go ahead and create a text file. Call it pod.yaml and add the following text, just as we specified it earlier:
---
apiVersion: v1
kind: Pod
metadata:
 name: rss-site
 labels:
   app: web
spec:
 containers:
   - name: front-end
     image: nginx
     ports:
       - containerPort: 80
   - name: rss-reader
     image: nickchase/rss-php-nginx:v1
     ports:
       - containerPort: 88
Save the file, and tell Kubernetes to create its contents:
> kubectl create -f pod.yaml
pod "rss-site" created
As you can see, K8s references the name we gave the Pod. You can see that if you ask for a list of the pods:
> kubectl get pods
NAME       READY     STATUS              RESTARTS   AGE
rss-site   0/2       ContainerCreating   0          6s
If you check early enough, you can see that the pod is still being created. After a few seconds, you should see the containers running:
> kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
rss-site   2/2       Running   0          14s
From here, you can test out the Pod (just as we did in the previous article), but ultimately we want to create a Deployment, so let's go ahead and delete it so there aren't any name conflicts:
> kubectl delete pod rss-site
pod "rss-site" deleted
Troubleshooting pod creation
Sometimes, of course, things don't go as you expect. Maybe you've got a networking issue, or you've mistyped something in your YAML file. You might see an error like this:
> kubectl get pods
NAME       READY     STATUS         RESTARTS   AGE
rss-site   1/2       ErrImagePull   0          9s
In this case, we can see that one of our containers started up just fine, but there was a problem with the other.  To track down the problem, we can ask Kubernetes for more information on the Pod:
> kubectl describe pod rss-site
Name:           rss-site
Namespace:      default
Node:           10.0.10.7/10.0.10.7
Start Time:     Sun, 08 Jan 2017 08:36:47 +0000
Labels:         app=web
Status:         Pending
IP:             10.200.18.2
Controllers:    <none>
Containers:
 front-end:
   Container ID:               docker://a42edaa6dfbfdf161f3df5bc6af05e740b97fd9ac3d35317a6dcda77b0310759
   Image:                      nginx
   Image ID:                   docker://sha256:01f818af747d88b4ebca7cdabd0c581e406e0e790be72678d257735fad84a15f
   Port:                       80/TCP
   State:                      Running
     Started:                  Sun, 08 Jan 2017 08:36:49 +0000
   Ready:                      True
   Restart Count:              0
   Environment Variables:      <none>
 rss-reader:
   Container ID:
   Image:                      nickchase/rss-php-nginx
   Image ID:
   Port:                       88/TCP
   State:                      Waiting
    Reason:                   ErrImagePull
   Ready:                      False
   Restart Count:              0
   Environment Variables:      <none>
Conditions:
 Type          Status
 Initialized   True
 Ready         False
 PodScheduled  True
No volumes.
QoS Tier:       BestEffort
Events:
 FirstSeen     LastSeen        Count   From                    SubobjectPath  Type             Reason                  Message
 ---------     --------        -----   ----                    -------------   --------        ------                  -------
 45s           45s             1       {default-scheduler }                   Normal           Scheduled               Successfully assigned rss-site to 10.0.10.7
 44s           44s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulling                 pulling image "nginx"
 45s           43s             2       {kubelet 10.0.10.7}                    Warning          MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Pulled                  Successfully pulled image "nginx"
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Created                 Created container with docker id a42edaa6dfbf
 43s           43s             1       {kubelet 10.0.10.7}     spec.containers{front-end}      Normal          Started                 Started container with docker id a42edaa6dfbf
 43s           29s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Normal          Pulling                 pulling image "nickchase/rss-php-nginx"
 42s           26s             2       {kubelet 10.0.10.7}     spec.containers{rss-reader}     Warning         Failed                  Failed to pull image "nickchase/rss-php-nginx": Tag latest not found in repository docker.io/nickchase/rss-php-nginx
 42s           26s             2       {kubelet 10.0.10.7}                    Warning          FailedSync              Error syncing pod, skipping: failed to "StartContainer" for "rss-reader" with ErrImagePull: "Tag latest not found in repository docker.io/nickchase/rss-php-nginx"

 41s   12s     2       {kubelet 10.0.10.7}     spec.containers{rss-reader}    Normal   BackOff         Back-off pulling image "nickchase/rss-php-nginx"
 41s   12s     2       {kubelet 10.0.10.7}                                    Warning  FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "rss-reader" with ImagePullBackOff: "Back-off pulling image "nickchase/rss-php-nginx""
As you can see, there's a lot of information here, but we're most interested in the Events, specifically where the warnings and errors start showing up. From here I was able to quickly see that I'd forgotten to add the :v1 tag to my image, so it was looking for the :latest tag, which didn't exist.
To fix the problem, I first deleted the Pod, then fixed the YAML file and started again. Alternatively, I could have fixed the repo so that Kubernetes could find what it was looking for, and it would have continued on as though nothing had happened.
Now that we've successfully gotten a Pod running, let's look at doing the same for a Deployment.
Creating a Deployment using YAML
Finally, we're down to creating the actual Deployment. Before we do that, though, it's worth understanding what it is we're actually doing.
K8s, remember, manages container-based resources. In the case of a Deployment, you're creating a set of resources to be managed. For example, where we created a single instance of the Pod in the previous example, we might create a Deployment to tell Kubernetes to manage a set of replicas of that Pod (literally, a ReplicaSet) to make sure that a certain number of them are always available. So we might start our Deployment definition like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: rss-site
spec:
 replicas: 2
Here we're specifying the apiVersion as extensions/v1beta1 (remember, Deployments aren't in v1, as Pods were) and that we want a Deployment. Next we specify the name. We can also specify any other metadata we want, but let's keep things simple for now.
Finally, we get into the spec. In the Pod spec, we gave information about what actually went into the Pod; we'll do the same thing here with the Deployment. We'll start, in this case, by saying that whatever Pods we deploy, we always want to have 2 replicas. You can set this number however you like, of course, and you can also set properties such as the selector that defines the Pods affected by this Deployment, or the minimum number of seconds a pod must be up without any errors before it's considered "ready". You can find a full list of the Deployment specification properties in the Kubernetes v1beta1 API reference.
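For example, a spec that also pins down the selector and a readiness delay might look like this. The field names come from the v1beta1 Deployment spec, and the values here are arbitrary:

```yaml
spec:
 replicas: 2
 minReadySeconds: 10     # a pod must run 10s without errors to count as available
 selector:
   matchLabels:
     app: web
```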
OK, so now that we know we want 2 replicas, we need to answer the question: "Replicas of what?" They're defined by templates:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
 name: rss-site
spec:
 replicas: 2
 template:
   metadata:
     labels:
       app: web
   spec:
     containers:
       - name: front-end
         image: nginx
         ports:
           - containerPort: 80
       - name: rss-reader
         image: nickchase/rss-php-nginx:v1
         ports:
           - containerPort: 88
Look familiar? It should; it's virtually identical to the Pod definition in the previous section, and that's by design. Templates are simply definitions of objects to be replicated; objects that might, in other circumstances, be created on their own.
Now let's go ahead and create the deployment. Add the YAML to a file called deployment.yaml and point Kubernetes at it:
> kubectl create -f deployment.yaml
deployment "rss-site" created
To see how it's doing, we can check on the deployments list:
> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            1           7s
As you can see, Kubernetes has started both replicas, but only one is available. You can check the event log by describing the Deployment, as before:
> kubectl describe deployment rss-site
Name:                   rss-site
Namespace:              default
CreationTimestamp:      Mon, 09 Jan 2017 17:42:14 +0000
Labels:                 app=web
Selector:               app=web
Replicas:               2 updated | 2 total | 1 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          rss-site-4056856218 (2/2 replicas created)
Events:
 FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
 ---------     --------        -----   ----                            -------------   --------        ------                  -------
 46s           46s             1       {deployment-controller }               Normal           ScalingReplicaSet       Scaled up replica set rss-site-4056856218 to 2
As you can see here, there's no problem; it just hasn't finished scaling up yet. Another few seconds, and we can see that both Pods are running:
> kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rss-site   2         2         2            2           1m
What we&8217;ve seen so far
OK, so let's review. We've basically covered three topics:

YAML is a human-readable, text-based format that lets you easily specify configuration-type information by using a combination of maps of name-value pairs and lists of items (and nested versions of each).
YAML is the most convenient way to work with Kubernetes objects, and in this article we looked at creating Pods and Deployments.
You can get more information on running (or should-be-running) objects by asking Kubernetes to describe them.

So that's our basic YAML tutorial. We're going to be tackling a great deal of Kubernetes-related content in the coming months, so if there's something specific you want to hear about, let us know in the comments, or tweet us at @MirantisIT.
Source: Mirantis

6 short sentences that sparked the creation of a cognitive app

It started on a busy morning expressway in an unfamiliar city. Three of us were on our way to a meeting that was a little further away than we thought.
While watching the driving app, we suddenly realized that we were in the wrong lane to make a sudden turn. Waiting precariously in the wrong lane, with the turn signal on, we were expecting the worst.
What happened next was something that took us all by surprise: a random act of kindness. A large, black pickup truck was letting us over. From the front seat, I heard, “That would have never happened where I live.”
The discussion continued, with no one quite ready to give up that positive feeling just yet. Someone asked, “Wouldn’t it be nice if we could give him a tip?”
Just then, an idea was born out of six short sentences:
Hey Team,
Driving into the lab today, we were in the wrong lane and Prasad had to cut in line. A nice person allowed us in, and Simon commented that this would never have happened in my city. In fact, everyone would be trying to cut in and everyone else would be trying to keep them from cutting in. I commented that we need a way of tipping the nice person for being courteous. Which got me to thinking, why don't we have a way to do this? Maybe people would be encouraged to be more helpful; it seems like something that wouldn't be too hard to do with the Internet of Things, cell phones, and PayPal or Bitcoin.
As part of a newly formed team, we felt the need to have a regularly scheduled informal meeting to bring everyone together to talk about ideas.  It didn’t have to be related to the work, just a creative time to toss ideas around, and in the process, learn more about the members of our team.
Then, by serendipity, we received notice about a Connect to Cloud Cognitive Build initiative kicking off. Four of us took on the challenge, and the wheels started turning. What if we created an app that would reward random acts of kindness? What would that look like? How could we be part of creating a gratitude network?
The idea quickly took off. It wasn’t a hard sell to create something pretty cool that could also make the world a better place. Everyone seemed to have an emotional response to the idea of a new way to express gratitude. But to be able to submit it into the Connect to Cloud Cognitive Build initiative, we quickly needed to move from “what if?” to “how would?”
The ideas started to flow as we asked more focused questions. What industry did it make the most sense to hook into to understand pain points that this app might solve? What would be a value add to customers and provide an interactive experience right at that point of random kindness?
The answer was the insurance industry. After all, a considerate driver is a safer driver.
With technology industry tools at hand, we had a head start for understanding what might be possible. We knew pretty quickly how we could create a prototype. But there needed to be a Cognitive component, which opened a new window of possibility regarding what services we could provide to customers.
We brainstormed about all the cognitive APIs we had at our fingertips.  What components would help us? Which were must haves? An architecture began to evolve as we solved each small problem.
The app is starting to take shape as our team begins to design and compile the component pieces. Our core team of Padma Chukka, Soad Abuelnaga, Neil Delima and myself, plus several incredible volunteers, are excited about the opportunities we’re discovering on this journey. The energy that it has brought to the team is a huge side benefit. Most of all, we never lose sight of those six quick sentences and the chance to make the world a tiny bit more courteous.
Learn more about how IBM is helping clients take advantage of the digital economy.
Padma Chukka, Soad Abu El-Naga and Neil Delima contributed to this article.
The post 6 short sentences that sparked the creation of a cognitive app appeared first on news.
Source: Thoughts on Cloud

How cloud brokerage matches the right cloud to your workload

I’ve had a lot of conversations this past year with clients who have wanted to deploy new or move existing workloads into the cloud.
Asking a few fundamental questions about the application’s operational requirements can turn a simple case of deploying code and data to a cloud environment into something rather more complex.
For example, one client wanted to keep legacy data storage costs to a minimum, but was concerned about potential data access costs. Another wanted to move an application with lots of complex, time-dependent and business-critical interfaces. Yet another was looking for some very stringent service-level agreements (SLAs) from their provider.
While a cloud solution can accommodate each of these examples, a different approach was needed in each case. A lower cost for data storage and access could be achieved by deploying to a cloud which was not the client’s de facto choice. Complex interfaces could be accommodated through IBM Bluemix Private Cloud Local. Stringent SLAs could be addressed using a managed cloud service, such as IBM Cloud Managed Services. Each of these cases highlighted a need to be flexible and that hybrid cloud is a reality for many organizations.
These examples were single applications and environments, where fast decisions were not needed. They allowed for a short comparison study to be carried out, but there are companies that need to make fast, daily cloud hosting decisions on a large scale. In those cases, some form of intelligent automation is a must, and organizations look to cloud brokerage tools.
The IBM Cloud Matrix is sophisticated brokerage software that enables organizations to gain insight into where their workloads would best be placed. Cloud Matrix will take the required characteristics of a given system and use cognitive processes to determine a score for each proposed hosting environment, be it a traditional data center, a private cloud or a public provider, including IBM Cloud, AWS and Azure among others.
Cloud Matrix can use an organization’s cloud pricing model, too, to ensure that results match up to the correct pricing. Scores for hosting the desired workload in each provider’s environment are presented, and the organization can then make an informed decision based on suitability for the workload’s requirements, along with the costs of building and running the environment in a vendor’s cloud. The report can be repeated as needed.
For organizations that run particularly cost-sensitive workloads, brokerage tools backed with automated orchestration can enable fast movement between providers to ensure that the lowest hosting cost is maintained. This use case highlights the need for open standards in cloud, too.
Determining the right run-time environment for a given workload is important if deployment to the cloud is going to be a success. Find out more about brokerage and IBM Cloud Matrix or contact us to arrange a meeting with an IBM Cloud Advisor.
The post How cloud brokerage matches the right cloud to your workload appeared first on news.
Source: Thoughts on Cloud

Recent blog posts

I’ve been out for a few weeks, but the blog posts from the community kept coming.

Containers on the CERN cloud by Tim Bell

We have recently made the Container-Engine-as-a-Service (Magnum) available in production at CERN as part of the CERN IT department services for the LHC experiments and other CERN communities. This gives the OpenStack cloud users Kubernetes, Mesos and Docker Swarm on demand within the accounting, quota and project permissions structures already implemented for virtual machines. We shared the latest news on the service with the CERN technical staff (link). This is the follow-up on the tests presented at OpenStack Barcelona (link) and covered in the blog from IBM.

Read more at http://tm3.org/d6

ANNOUNCE: New libvirt project Go XML parser model by Daniel Berrange

Shortly before Christmas, I announced the availability of new Go bindings for the libvirt API. This post announces a companion package for dealing with XML parsing/formatting in Go. The master repository is available on the libvirt GIT server, but it is expected that Go projects will consume it via an import of the github mirror, since the Go ecosystem is heavily github focused (e.g. godoc.org can't produce docs for stuff hosted on libvirt.org git)

Read more at http://tm3.org/d7

Red Hat OpenStack Platform 10 is here! So what’s new? by Marcos Garcia – Principal Technical Marketing Manager

It’s that time of the year. We all look back at 2016, think about the good and bad things, and wish that Santa brings us the gifts we deserve. We, at Red Hat, are really proud to bring you a present for this holiday season: a new version of Red Hat OpenStack Platform, version 10 (press release and release notes). This is our best release ever, so we’ve named it our first Long Life release (up to 5 years support), and this blog post will show you why this will be the perfect gift for your private cloud project.

Read more at http://tm3.org/d8

Comparing OpenStack Neutron ML2+OVS and OVN – Control Plane by russellbryant

We have done a lot of performance testing of OVN over time, but one major thing missing has been an apples-to-apples comparison with the current OVS-based OpenStack Neutron backend (ML2+OVS). I've been working with a group of people to compare the two OpenStack Neutron backends. This is the first piece of those results: the control plane. Later posts will discuss data plane performance.

Read more at http://tm3.org/d9
Source: RDO

Cross-Cluster Image Promotion Techniques

Many organizations decide to have multiple container clusters to segregate different environments. This leads to the problem of how to move container images created in one cluster to another cluster. The need to move images across cluster typically arises when one needs to implement a promotion process where the next environment for the given app is not in the same cluster as the current environment. This situation is common regardless of the container cluster manager and delivery pipeline being used. In this article, I will assume using OpenShift as the container cluster manager and Jenkins as the delivery pipeline tool.
Source: OpenShift

Streaming your next all-hands meeting? Prep execs for their video debut

A CEO may be a great strategist and leader in the office, perhaps even a great public speaker. Yet many executives find it challenging to communicate their passion and charisma when speaking in front of a video camera. Too often, the result is a bland performance that doesn't inspire employees' confidence. That's a problem, since video is becoming an essential way for executives to communicate with their global workforces.
Brian Burkhart, President at presentation development firm SquarePlanet and an instructor at Northwestern University's Farley Center for Entrepreneurship and Innovation, says bad on-screen performances happen because a live audience provides a palpable energy that the mechanical lens of a camera can't provide.
"There is something about live human interaction that you simply can't recreate with a camera," Burkhart says.
He says that business leaders can avoid a dud performance by following a few tips:
Don’t just prepare between meetings
Burkhart says that the best way to ensure a great performance is by spending the necessary time preparing. "I see many CEOs walk into the room where the camera is set up with their email buzzing, phones vibrating and a very full calendar. They ask, 'Now, what is it that I'm supposed to talk about?' right before the camera is turned on." That tells Burkhart that the CEO is not fully present.
It's all about having the right mindset, he says. The trick is to view the speaking engagement as an opportunity to connect with an audience by sharing the most important messages.
“It's a little bit like making a soup. Yes, you can prepare the dish in 30 minutes, but if you let it simmer for six hours, then it's going to be way better,” says Burkhart. “When CEOs spend time marinating on the message, it becomes fully authentic instead of just reading a script.”
Bring energy and excitement on camera
The presenter must compensate for the lower excitement and energy levels of video compared to those of a live event. To pull it off, Burkhart offers three tips:

Bring the energy: If a CEO gives 50 percent of her energy in a live speech, then she should give 100 percent on camera. Small changes, such as standing up instead of sitting behind a desk, can make a big difference in the energy the audience perceives.
Talk loudly: Presenters often assume that because they are wearing a microphone, they can use their normal voice. This isn't true. Speaking louder translates into a higher energy level.
Be animated: Use voice inflection, facial expressions and body language to convey energy and excitement. Smile bigger than you think you should. If a CEO has a look of dread, terror or pain as she presents, audiences will notice.

Videos, even live video streams, rarely disappear. Those streams are often recorded for later reuse, or shared on YouTube or the company's website. But with the right mindset and preparation, busy executives can ensure their energy translates into an effective, engaging presentation.

“The bottom line is really truly understanding who you are trying to connect to on the other side of the cameras,” says Burkhart. “It's not just a lens and metal, but an audience of human beings.”
Learn more about IBM Cloud video solutions.
The post Streaming your next all-hands meeting? Prep execs for their video debut appeared first on news.
Source: Thoughts on Cloud

Don’t go it alone in the team sport of retail digital transformation

The reaction of IT teams to the ever-changing “bimodal IT” landscape has been interesting to watch over the past several years at the National Retail Federation’s (NRF) BIG Show.
There have certainly been winners and losers, but not always the ones you might expect. In recent years there has been a surge away from centralized IT, in favor of routing new projects to embedded, shadow IT teams or completely outsourced digital projects.
However, those who are succeeding in building a truly winning omnichannel strategy are doing so with the complete inclusion of centralized IT. For these winners, the experience of CTO and CIO teams has been essential.
I often talk to IT directors of large retailers. These are the people who traditionally ran 12- to 18-month implementation projects. They now find themselves facing a stark decision: be agile or be benched. Instead of sitting on the sidelines while players from elsewhere in the business take charge, they are becoming the new change agents with a playbook to drive the digital agenda.
What has changed in the past few years is the attitude of the central IT teams to embrace the problem at hand. With a new acceptance of agile principles and the new reality of cloud and hybrid, these same IT teams have a pivotal role to play. They are helping the teams charged with rapid build out and transient projects, where delivery is measured in weeks.
Some recent, very public security breaches have helped put the wind at the backs of once-beleaguered CTOs in making the case to boards that central IT belongs at the heart of every new build-out. The move to a world where central IT retains control of the core systems, whether on premises or in the cloud, while working in partnership with shadow IT is a newly emerging and powerful trend that ultimately will make everyone better off.
As enterprises react to the opportunities that cloud and digital bring, their IT architectures, built over decades, face their greatest challenge yet: supporting a new digital world where connectivity is handled by a whole new generation of empowered users (rookies, if you will) arriving with diverse skill sets.
For some, this could be categorized as API development tooling, but the further from the data center one looks, the more it morphs into something more fluid; it is simply part of the business landscape. To the iPad generation, who can connect their home world together, switching on their lights from a smartphone while automatically publishing pictures to their social channel of choice, it looks odd that enterprises are unable to apply this level of connectivity to the apps that make up their business landscape.
This broadening connectivity and user landscape changes the game, driving a forever-expanding and critical role for integration software. Integration is a fundamental element of any good team, handling the complexities of connecting and making sense of the data that digital teams need. Whether on the cloud or in the data center, integration is becoming significantly more powerful and ubiquitous, serving a surprising range of user experiences.
We're driving a new generation of tooling aimed at promoting collaboration across the spectrum of digital teams driving the omnichannel agenda in leading retailers.
Have you seen the future yet? Come and talk it through with me at NRF or join the discussion here.
The post Don’t go it alone in the team sport of retail digital transformation appeared first on news.
Source: Thoughts on Cloud