Creating and accessing a Kubernetes cluster on OpenStack, part 3: Run the application

The post Creating and accessing a Kubernetes cluster on OpenStack, part 3: Run the application appeared first on Mirantis | The Pure Play OpenStack Company.
Finally, you're ready to actually interact with the Kubernetes API that you installed. The general process goes like this:

Define the security credentials for accessing your applications.
Deploy a containerized app to the cluster.
Expose the app to the outside world so you can access it.

Let's see how that works.
Define security parameters for your Kubernetes app
The first thing that you need to understand is that while we have a cluster of machines that are tied together with the Kubernetes API, it can support multiple environments, or contexts, each with its own security credentials.
For example, if you were to create an application with a context that relies on a specific certificate authority, I could then create a second one that relies on another certificate authority. In this way, we both control our own destiny, but neither of us gets to see the other's application.
The process goes like this:

First, we need to create a new certificate authority which will be used to sign the rest of our certificates. Create it with these commands:
$ sudo openssl genrsa -out ca-key.pem 2048
$ sudo openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

At this point you should have two files: ca-key.pem and ca.pem. You'll use them to create the cluster administrator keypair. To do that, you'll create a private key (admin-key.pem), then create a certificate signing request (admin.csr), then sign it to create the public key (admin.pem).
$ sudo openssl genrsa -out admin-key.pem 2048
$ sudo openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
$ sudo openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365
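If you want to double-check your work before moving on, you can confirm that admin.pem really chains back to the CA. Here's a self-contained sketch that repeats the steps above in a scratch directory (no sudo needed, since you own the directory) and then runs `openssl verify`:

```shell
#!/usr/bin/env bash
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Certificate authority (same commands as above)
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

# Admin keypair, signed by the CA (same commands as above)
openssl genrsa -out admin-key.pem 2048
openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365

# Should print "admin.pem: OK"
openssl verify -CAfile ca.pem admin.pem
```

If the last command prints anything other than `admin.pem: OK`, re-check the signing step before configuring kubectl.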

Now that you have these files, you can use them to configure the Kubernetes client.
Download and configure the Kubernetes client

Start by downloading the kubectl client on your machine. In this case, we're using Linux; adjust appropriately for your OS.
$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.4.3/bin/linux/amd64/kubectl

Make kubectl executable:
$ chmod +x kubectl

Move it to your path:
$ sudo mv kubectl /usr/local/bin/kubectl

Now it's time to set the default cluster. To do that, you'll want to use the URL that you got from the environment deployment log. Also, make sure you provide the full location of the ca.pem file, as in:
$ kubectl config set-cluster default-cluster --server=[KUBERNETES_API_URL] --certificate-authority=[FULL-PATH-TO]/ca.pem
In my case, this works out to:
$ kubectl config set-cluster default-cluster --server=http://172.18.237.137:8080 --certificate-authority=/home/ubuntu/ca.pem

Next you need to tell kubectl where to find the credentials, as in:
$ kubectl config set-credentials default-admin --certificate-authority=[FULL-PATH-TO]/ca.pem --client-key=[FULL-PATH-TO]/admin-key.pem --client-certificate=[FULL-PATH-TO]/admin.pem
Again, in my case this works out to:
$ kubectl config set-credentials default-admin --certificate-authority=/home/ubuntu/ca.pem --client-key=/home/ubuntu/admin-key.pem --client-certificate=/home/ubuntu/admin.pem

Now you need to set the context so kubectl knows to use those credentials:
$ kubectl config set-context default-system –cluster=default-cluster –user=default-admin
$ kubectl config use-context default-system
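Under the hood, these set-cluster, set-credentials, and set-context commands are just writing entries into ~/.kube/config. After the steps above, that file should look roughly like this (a sketch; the server URL and file paths are the ones from this walkthrough, so yours will differ):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    server: http://172.18.237.137:8080
    certificate-authority: /home/ubuntu/ca.pem
users:
- name: default-admin
  user:
    client-certificate: /home/ubuntu/admin.pem
    client-key: /home/ubuntu/admin-key.pem
contexts:
- name: default-system
  context:
    cluster: default-cluster
    user: default-admin
current-context: default-system
```

You can inspect the live version at any time with `kubectl config view`.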

Now you should be able to see the cluster:
$ kubectl cluster-info

Kubernetes master is running at http://172.18.237.137:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Terrific!  Now we just need to go ahead and run something on it.
Running an app on Kubernetes
Running an app on Kubernetes is pretty simple; it's basically a matter of firing up a container. We'll go into the details of what everything means later, but for now, just follow along.

Start by creating a deployment that runs the nginx web server:
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80

deployment "my-nginx" created
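For the curious, `kubectl run` is generating a Deployment object behind the scenes. A roughly equivalent manifest looks like this (a sketch; on the 1.4-era cluster used here the apiVersion would be extensions/v1beta1, while current clusters use apps/v1):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
```

Note the `run: my-nginx` label; that's what ties the pods, the Deployment, and (shortly) the Service together.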

By default, containers are only visible to other members of the cluster. To expose your service to the public internet, run:
$ kubectl expose deployment my-nginx --target-port=80 --type=NodePort

service "my-nginx" exposed
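Likewise, `kubectl expose` generates a Service object. A roughly equivalent manifest would be (a sketch; the NodePort itself is assigned by Kubernetes unless you specify one explicitly):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  selector:
    run: my-nginx
  ports:
  - port: 80
    targetPort: 80
```

The selector matches the `run: my-nginx` label on the pods, which is how the service knows where to send traffic.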

OK, so now it's exposed, but where?  We used the NodePort type, which means that the external IP is just the IP of the node that it's running on, as you can see if you get a list of services:
$ kubectl get services

NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   11.1.0.1      <none>        443/TCP   3d
my-nginx     11.1.116.61   <nodes>       80/TCP    18s

So we know that the "nodes" referenced here are kube-2 and kube-3 (remember, kube-1 is the API server), and we can get their IP addresses from the Instances page…

… but that doesn't tell us what the actual port number is.  To get that, we can describe the actual service itself:
$ kubectl describe services my-nginx

Name:                   my-nginx
Namespace:              default
Labels:                 run=my-nginx
Selector:               run=my-nginx
Type:                   NodePort
IP:                     11.1.116.61
Port:                   <unset> 80/TCP
NodePort:               <unset> 32386/TCP
Endpoints:              10.200.41.2:80,10.200.9.2:80
Session Affinity:       None
No events.
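If you'd rather script against this than eyeball the describe output, you can pull the NodePort out with a little text processing. Here's a self-contained sketch that parses the output shown above; with a live cluster you'd pipe `kubectl describe services my-nginx` in directly, or use `kubectl get service my-nginx -o jsonpath='{.spec.ports[0].nodePort}'`:

```shell
#!/usr/bin/env bash
# The heredoc stands in for live `kubectl describe services my-nginx` output
# (taken from the run above).
describe_output=$(cat <<'EOF'
Name:                   my-nginx
Namespace:              default
Type:                   NodePort
IP:                     11.1.116.61
Port:                   <unset> 80/TCP
NodePort:               <unset> 32386/TCP
Endpoints:              10.200.41.2:80,10.200.9.2:80
EOF
)

# Field 3 of the NodePort line is "32386/TCP"; strip the protocol suffix.
node_port=$(printf '%s\n' "$describe_output" | awk '/^NodePort:/ {sub(/\/TCP$/, "", $3); print $3}')
echo "$node_port"   # → 32386
```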

So the service is available on port 32386 of whatever machine you hit.  But if you try to access it, something's still not right:
$ curl http://172.18.237.138:32386

curl: (7) Failed to connect to 172.18.237.138 port 32386: Connection timed out

The problem here is that by default, this port is closed, blocked by the default security group.  To solve this problem, create a new security group you can apply to the Kubernetes nodes.  Start by choosing Project->Compute->Access & Security->+Create Security Group.
Specify a name for the group and click Create Security Group.
Click Manage Rules for the new group.

By default, there's no access in; we need to change that.  Click +Add Rule.

In this case, we want a Custom TCP Rule that allows Ingress on port 32386 (or whatever port Kubernetes assigned the NodePort). You can specify access only from certain IP addresses, but we'll leave that open in this case. Click Add to finish adding the rule.

Now that you have a functioning security group, you need to add it to the instances Kubernetes is using as worker nodes, in this case the kube-2 and kube-3 nodes.  Start by clicking the small triangle on the button at the end of the line for each instance and choosing Edit Security Groups.
You should see the new security group in the left-hand panel; click the plus sign (+) to add it to the instance:

Click Save to save the changes.

Add the security group to all worker nodes in the cluster.
Now you can try again:
$ curl http://172.18.237.138:32386

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
   body {
       width: 35em;
       margin: 0 auto;
       font-family: Tahoma, Verdana, Arial, sans-serif;
   }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
As you can see, you can now access the Nginx container you deployed on the Kubernetes cluster.

Coming up, we'll look at some of the more useful things you can do with containers and with Kubernetes. Got something you'd like to see?  Let us know in the comments below.
Source: Mirantis

CloudNativeCon and KubeCon: What we learned

Imagine yourself on a surfboard. You’re alone. You’re paddling out farther into the sea, and you’re ready to catch a giant wave. Only you look to your left, to your right and behind you, and you suddenly realize you’re not alone at all. There are countless other surfers who share your aim.
That’s how developers are feeling about cloud native application development and Kubernetes as excitement builds for the impending wave.
The excitement was apparent during the recent CloudNativeCon and KubeCon joint event in Seattle. More than 1,000 developers gathered to share ideas around the growing number of projects under the Cloud Native Computing Foundation (via the Linux Foundation) banner. That includes Kubernetes, one of the foundation’s most significant and broadly adopted projects.

Despite the fact that it’s still relatively early days for Kube and cloud native computing, CNCF executive director Dan Kohn said there are plenty of reasons to be excited about cloud native.
In his opening keynote, Kohn highlighted these top advantages that cloud native offers:

Isolation. Containerizing applications ensures that you get the same version in development and production. Operations are simplified.
No lock-in. When you choose a vendor that relies on open technology, you’re not locked in to using that vendor.
Improved scalability. Cloud native provides the ability to scale your application to meet customer demand in real time.
Agility and maintainability. These factors are improved when applications are split into microservices.

It was apparent by the sessions alone that Kubernetes is already seeing enterprise adoption. Numerous big-name companies were presented as use cases.
Chris Aniszczyk, VP of developer programs for The Linux Foundation, shared some of the impressive growth numbers around the CNCF and Kube communities:

Now @cra wrapping up a busy 2 days with some impressive numbers! CloudNativeCon the hard way! @CloudNativeFdn @kelseyhightower pic.twitter.com/ySe5pNokjM
— Jeffrey Borek (@jeffborek) November 10, 2016

And if conference attendance is any indication, the community is poised to grow even more over the next few months. Next year’s CloudNativeCon events in Berlin and Austin are expected to double or triple the Seattle attendance number.
The IBM contribution to Kubernetes
The work IBM is doing with Kubernetes is twofold. First and foremost, IBM is helping the community understand its pain points and contribute its resources, as it does with dozens of open source projects. Second, IBM developers and technical leaders are working with internal product teams to fold in Kubernetes into the larger cloud ecosystem.
“Because Kubernetes is going to be such an important part of our infrastructure going forward, we want to make sure we contribute as much as we get out of it,” IBM Senior Technical Staff Member Doug Davis said at the CloudNativeCon conference. “We’re going to see more people coming to our team, and you’re going to see a bigger IBM presence within the community.”
IBM is also committed to helping the Kubernetes community interact and cooperate with other open source communities. Kubernetes technology provides plug points and extensibility points that allow it to be run on OpenStack, for example.
Brad Topol, a Distinguished Engineer who leads IBM work in OpenStack, explained how the communities are working together:

At CloudNativeCon in Seattle @BradTopol discusses the relationship between OpenStack and CNCF. pic.twitter.com/o2wj8swTBo
— IBM Cloud (@IBMcloud) November 8, 2016

OpenWhisk momentum continues
Serverless remained a hot topic at CloudNativeCon. IBMer Daniel Krook presented a keynote on the topic, including an overview of OpenWhisk, the IBM open source serverless offering that is available on Bluemix:

LIVE on Periscope: @DanielKrook talks OpenWhisk at CloudNativeCon. Slides: https://t.co/P51xrjVqFP https://t.co/dRJmHKiXcy
— IBM Cloud (@IBMcloud) November 9, 2016

Krook also joined in to provide a solid definition of “serverless,” something that tends to spark debate whenever the topic is broached:

The buzz around serverless continues at CloudNativeCon. @DanielKrook gives his definition of this emerging technology. pic.twitter.com/UzFhqtBnD0
— IBM Cloud (@IBMcloud) November 9, 2016

An update on the Open Container Initiative
In a lightning talk, Jeff Borek, Worldwide Program Director of Open Cloud Business Development, joined Microsoft Senior Program Manager Rob Dolin for an update on the OCI. The organization started in 2015 as a Linux Foundation project with the goal of creating open, industry standards around container formats and runtimes.
Watch their session here:

LIVE on Periscope: From CloudNativeCon, @JeffBorek & @RobDolin discuss the Open Container Initiative. https://t.co/rKpa4UpRcn
— IBM Cloud (@IBMcloud) November 9, 2016

Learn more: "Why choose a serverless architecture?"
The post CloudNativeCon and KubeCon: What we learned appeared first on news.
Source: Thoughts on Cloud

The tide rises at the Barcelona OpenStack Summit

The day before the opening of the OpenStack Summit in Barcelona, I already knew that one of the hot topics would be containers.
Lots of people are excited these days about containers and microservices, seeing them as the magic wand that will help IT keep up with the pace of our rapidly evolving world.
If you care about containers or platforms such as Cloud Foundry — which basically abstract you from the underlying infrastructure — you may wonder why on Earth you should care about OpenStack and a summit about it. Here’s a hint: as OpenStack COO Mark Collier mentioned in his initial keynote, the rate of adoption of container technologies is three times higher among OpenStack users than any others. That didn’t happen by accident.
Imagine you are a “container-focused” IT director. The first thing you’ll want is to deploy your container environment (Kubernetes, Swarm or Mesos). And you’ll want to do so from a console with just a click, as many times as you need, and, potentially, in different infrastructures. You can do that with OpenStack Magnum very easily. It’s not so easy without OpenStack.
Even if you want to do it on physical machines for performance reasons, you could do it — thanks to the Ironic bare metal provisioning program — exactly as in any virtualized environment.
Imagine also you need to extend your container clusters across several clouds, on premises or off premises, from one provider or several. Try doing that if each infrastructure is controlled by a different vendor tooling. You’ll wish they were all managed the same way, which is exactly what OpenStack does.
But OpenStack provides more than a common cloud operating software for equally managed environments; it can also ease network interconnection. Many of the newest features in OpenStack Newton have to do with enhancements in network connectivity.
In a nutshell, OpenStack can bring together infrastructures from different vendors because it is a real community effort. That’s its beauty and its power.
You may wonder if it is really true, tested and confirmed that OpenStacks from different providers are actually interoperable in the ways I’m describing. Indeed they are. One of the most spectacular moments of the summit was the live demo led by IBM Cloud Strategy General Manager Don Rippert, who challenged each provider to deploy the same exact application (same architecture, topology, security, and so on) in its particular OpenStack and test it. We are not talking about a couple of providers. We are talking about the major providers, nearly 20 of them, including Red Hat, Huawei, Mirantis and IBM. And, believe me, it worked.
So you may be focused on containers because you want to focus on agile development. You may not care too much about the underlying infrastructure. But if you want to quickly deploy new container environments and have the ability to grow and expand your cloud to multi-cloud environments quickly, it’s wise to start caring about OpenStack.
As Don Rippert put it in his keynote: it’s not that the different vendors don’t want to compete against each other, but by collaborating in an effort like OpenStack, we are “rising the tide” to give a maximum benefit to our users, whatever their strategy.
Learn more about IBM Cloud open source technology.
The post The tide rises at the Barcelona OpenStack Summit appeared first on news.
Source: Thoughts on Cloud

Creating and accessing a Kubernetes cluster on OpenStack, part 2: Access the cluster

The post Creating and accessing a Kubernetes cluster on OpenStack, part 2: Access the cluster appeared first on Mirantis | The Pure Play OpenStack Company.
To access the Kubernetes cluster we created in part 1, we're going to create an Ubuntu VM (if you have an Ubuntu machine handy you can skip this step), then configure it to access the Kubernetes API we just deployed.
Create the client VM

Create a new VM by choosing Project->Compute->Instances->Launch Instance:

Fortunately you don't have to worry about obtaining an image, because you'll have the Ubuntu Kubernetes image that was downloaded as part of the Murano app. Click the plus sign (+) to choose it.  (You can choose another distro if you like, but these instructions assume you're using Ubuntu.)

You don't need a big server for this, but it needs to be big enough for the Ubuntu image we selected, so choose the m1.small flavor:

Chances are it's already on the network with the cluster, but that doesn't matter; we'll be using floating IPs anyway. Just make sure it's on a network, period.

Next make sure you have a key pair, because we need to log into this machine:

After it launches…

Add a floating IP if necessary to access it by clicking the down arrow on the button at the end of the line and choosing Associate Floating IP.  If you don't have any floating IP addresses allocated, click the plus sign (+) to allocate a new one:

Choose the appropriate network and click Allocate IP:

Now add it to your VM:

You'll see the new Floating IP listed with the Instance:

Before you can log in, however, you'll need to make sure that the security group allows for SSH access. Choose Project->Compute->Access & Security and click Manage Rules for the default security group:

Click +Add Rule:

Under Rule, choose SSH at the bottom and click Add.

You'll see the new rule on the Manage Rules page:

Now use your SSH client to go ahead and log in using the username ubuntu and the private key you specified when you created the VM.
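If you expect to log in more than once, an entry in ~/.ssh/config saves typing (a sketch; the host alias is made up, and you should substitute your own floating IP and key file):

```
Host kube-client
    HostName <floating-ip>
    User ubuntu
    IdentityFile ~/.ssh/<your-keypair>.pem
```

After that, `ssh kube-client` gets you in.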

Now you're ready to actually deploy containers to the cluster.

Source: Mirantis

[Slides Uploaded] OpenShift Commons Gathering Wrap-Up

This past week’s OpenShift Commons Gathering in Seattle brought together experts and thought leaders from all over the world to discuss container technologies, best practices for cloud-native application developers, and the open source software projects that underpin OpenShift, all with an eye toward taking the ecosystem to the next level in cloud-native computing. More than 200 developers, DevOps professionals, and sysadmins came together to explore the next steps in making container technologies successful, scalable and secure.
Source: OpenShift

Dynamic advertising gets the cognitive treatment

Brands are spending more on native advertising than ever before — a lot more — to create targeted, minimally invasive online advertising experiences for consumers.
Business Insider Intelligence reports that native advertising, which assumes the look and feel of content that surrounds it, is the fastest-growing digital advertising category. The same report also projects that spending on native advertising will grow to $21 billion in 2018, up from $4.7 billion in 2013.
The real game-changer for brands that want to make a meaningful connection with audiences in digital channels will be the future marriage of artificial intelligence and native advertising in video content. In this future — likely only two to three years away — dynamic and highly personalized advertising takes on an entirely new meaning.
Advertising's giant leap toward science
IBM Watson's cognitive capabilities, once incorporated into advertisers' video platforms, will enable advertisers to personalize marketing messages across channels, even within the video stream. The key is the ability to accumulate data about a specific viewer's preferences and integrate that data from external sources, such as social media and advertisers' other marketing tools.
If Watson knows a consumer recently bought a refrigerator, for instance, then it wouldn't show that consumer advertising for refrigerators. Instead, Watson might serve up an ad for a product to put in the new fridge, such as soda. And because Watson could determine — based on purchase history — loyalty to a certain soda brand, the consumer won't see any rivals' ads. Watson will be able to dynamically swap a product they love — such as Coke for Pepsi — into the video the consumer is watching to create a powerful, personalized brand experience.
For brands, the value of such a scenario is clear: they can be seamlessly front-and-center in a consumer's entertainment experience, facilitating a positive and lasting association between brand and content. Media and entertainment companies will benefit, too, because consumers will feel more personally connected to the video content they create.
360-degree user profiles
The ability to deliver highly-targeted online video advertising is here. Brands can already use Watson analytics tools and intelligence to enable this for any business and campaign, creating direct advertising that will resonate with customers. Watson intelligence can also be integrated with other digital marketing tools, such as email or text, to deliver personalized advertising and marketing messages.
Many brands are already experimenting with Watson's cognitive capabilities — facial recognition, audio recognition, tone analytics, personality insights and more — to better understand the needs and perceptions of consumers. Chevrolet recently tapped Watson for a "global positivity system" campaign to analyze people's social media feeds, for example. The North Face is among a growing list of retailers using Watson AI capabilities to make product recommendations. Video providers are now exploring ways to use Watson's intelligence to deliver more relevant content to viewers.
Through these efforts, brands are starting to develop 360-degree profiles of users that will help them better understand what their customers say, how they feel and how they interact with the company and its products. These comprehensive profiles are essential to making the dynamic and highly personalized advertising of the future a reality in all digital channels, including video.
Learn more about IBM Cloud Video.
The post Dynamic advertising gets the cognitive treatment appeared first on news.
Source: Thoughts on Cloud

Conquering impossible goals with real-time analytics

"Past data only said, 'go faster' or 'ride better,'" Kelly Catlin, Olympic cyclist and silver medalist, shared with the audience at the IBM World of Watson event on 24 October. In other words, the feedback generated from all her analytics data sources — the speed, cadence and power meters on her bicycle — was generally useless to this former mountain bike racer, who wanted to improve her track cycling performance by 4.5 percent to capture a medal at the 2016 Rio Olympic Games.

USA Cycling Women's Team Pursuit

While I am by no means an Olympic-level athlete, I knew exactly what Kelly meant. I’ve logged over 300 miles in running races over 8 years, and just in this past year started to see some small improvements in my 5Ks and half-marathons. Suddenly, I started asking, “How much faster could I run a half marathon? Could I translate these improvements to longer distances?” I downloaded all my historical race information into an Excel chart. I looked at my Runkeeper and Strava training runs. Despite all this data, I was stuck. "What should I do to improve?" I asked a coach. He said, “Run more during the week.”
But I wanted to know more. How much capacity do I really have? How much does my asthma limit me? Should I only run in certain climates? During which segments of a race should I speed up or slow down? Just like Kelly, who spent four hours per session reviewing data, I understood how historical data had limited impact on improving current performance.
According to Derek Bouchard-Hall, CEO of USA Cycling, “At the elite level, a 3 percent performance improvement in 12 months is attainable but very difficult. For the USA Women’s Team Pursuit Team, they had only 11 months and needed a 4.5 percent improvement, which would require them to perform at a new world record time (4.12/15.4 Lap Average). The coach could account for the 3 percent in physiological improvement but needed technology to bring the other 1.5 percent. He focused in two areas: equipment (bike/tire, wind tunnel training) and real-time analytic training insights.”

How exactly could real-time analytics insight change performance?
According to Kelly, “Now, we can make executable changes.” She and her teammates now know when to make a transition of who is leading the group, how best to make that transition, and which times of the race to pick up cadence.
The result: USA Women’s Team Pursuit finished the race in 4:12.454 to secure the silver medal behind Great Britain, which finished in 4:10.236.
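To put that margin in perspective, the gap between the two finishing times works out to just over two seconds, under one percent, which you can check with a quick calculation:

```shell
# Back-of-the-envelope margin between the two finishing times, using awk.
gap=$(awk 'BEGIN {
  usa = 4 * 60 + 12.454   # USA: 4:12.454 in seconds
  gbr = 4 * 60 + 10.236   # Great Britain: 4:10.236 in seconds
  printf "Gap: %.3f s (%.2f%% slower)", usa - gbr, (usa - gbr) / gbr * 100
}')
echo "$gap"   # → Gap: 2.218 s (0.89% slower)
```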
The introduction of data sets and technology did not alone lead to Team USA’s incredible improvement. Instead, it was the combination of well-defined goals, strategic implementation of technology, and actionable, timely recommendations that led to their strong performance and results.
As you consider how to improve an area of your business, keep in mind these three things from the USA Cycling project with IBM Bluemix:

Set well-defined goals. Or, as business expert Stephen Covey would say, “always begin with the end in mind.” USA Cycling clearly articulated they needed to increase performance by 4.5 percent, and that would take more than a coach.
Choice and implementation of technology matters. Choose the tools that will not only deliver analytics data and insights, but do so in a timely and relevant manner for your business. Explore how to get started with IBM Bluemix.
Data alone doesn’t equal guidance. You must review the data, and with your colleagues, your coach, your running buddy, set clear, executable actions.

The IBM Bluemix Garage Method can help you define your ideas and bring a culture of innovation agility to your cloud development.
A version of this post originally appeared on the IBM Bluemix blog.
The post Conquering impossible goals with real-time analytics appeared first on news.
Source: Thoughts on Cloud

Creating and accessing a Kubernetes cluster on OpenStack, part 1: Create the cluster

The post Creating and accessing a Kubernetes cluster on OpenStack, part 1: Create the cluster appeared first on Mirantis | The Pure Play OpenStack Company.
In honor of this week's KubeCon, we're bringing you information on how to get started with the Kubernetes container orchestration engine.  On Monday we explained what Kubernetes is. Now let's show you how to actually use it.
In this three-part series, we'll take you through the steps to run an Nginx container on Kubernetes over OpenStack, including:

Deploying a Kubernetes cluster with Murano
Configuring OpenStack security to make a Kubernetes cluster usable from within OpenStack
Downloading and configuring the Kubernetes client
Creating a Kubernetes application
Running an application on Kubernetes

Let's get started.
Create the Kubernetes cluster with Murano
The first step is to get a cluster created. There are several ways to do that, but the easiest is to use a Murano Package. If you don't have Murano handy, you can get access to it in several ways, but the easiest is to deploy Mirantis OpenStack with Murano.
Import the Kubernetes cluster app
The first step is to get the actual Kubernetes cluster app, which is available on the OpenStack Foundation's Community App Catalog.  Follow these steps:

Log into Horizon and go to Applications->Manage->Packages.
Go to the Community App Catalog and choose Murano Apps -> Kubernetes Cluster to get the Kubernetes Cluster App.  You're looking for the URL for the package itself. In this case, that's http://storage.apps.openstack.org/apps/com.mirantis.docker.kubernetes.KubernetesCluster.zip.
Back in Horizon, click Import Package.
For the Package Source, choose URL, and add the URL from step 2, then click Next:
Murano will automatically start downloading the images it needs, then mark them for use with Murano; you won't have to do anything there but click Import and wait. To see the images downloading, choose Project->Images. If the images didn't already exist, you'll see them Saving:
Once they're finished saving, you'll see that their status has changed to Active:

Next, we'll deploy an environment that includes the Kubernetes master and minions.
Create the Kubernetes Murano environment

In Horizon, choose Applications->Browse. You should see the new app under Recent Activity.
To make things simple, click the Quick Deploy button.
Keep all the defaults, then scroll down and click Next.
Choose the Debian image and click Create.
Horizon will automatically take you to the new environment. At this point, it's been created, but not deployed:
You can add other things if you want, but for now, click Deploy This Environment. Deployment goes through a number of steps, creating VMs, networks, security groups, and so on. You can see that on the main environment page, or by checking the logs:
When deployment is complete, you'll see the status change to Ready:
All that's great, but where do you access the cluster?  Click Latest Deployment Log to see the IP address assigned to the cluster:

Now, you'll notice that there are references to (in this case) 4 different nodes: gateway-1, kube-1, kube-2, and kube-3.  You can see these instances if you go to Project->Compute->Instances.  Notice that the Kubernetes API is running on kube-1.
In part 2, you&8217;ll actually access the cluster.
Source: Mirantis

Notes from the Barcelona Summit: OpenStack in the Service of Science

The post Notes from the Barcelona Summit: OpenStack in the Service of Science appeared first on Mirantis | The Pure Play OpenStack Company.
Every summit, we see new use cases showing how OpenStack-based clouds help scientists to move their research forward by providing big data processing and analysis, and the summit in Barcelona was no exception. The attendees were undoubtedly amazed by the scale and value of the results of the research projects in the presented areas of nuclear physics, astronomy, and medicine. We've gotten strangely familiar with data levels presented in petabytes (1,000,000 GB), but zettabytes (1,000,000,000,000 GB)?  That's new.  Add to that the use of hundreds of thousands of CPU cores and you have some seriously Big Data.
First, Tim Bell, who seems to be a permanent OpenStack speaker, updated the audience on the state of cloud infrastructure at CERN. He noted that the scientists there receive 0.5 PB of data daily as a result of monitoring a billion collisions of particles in the Large Hadron Collider. This huge amount of data is processed using more than 190,000 OpenStack cores.

Next, Dr. Rosie Bolton, from Cambridge University, explained how researchers explore our Universe to find the origins of galaxy formation and dark matter using a giant software-defined radio telescope called the Square Kilometer Array, which is geographically located in South Africa and Australia. This telescope produces 1.3 ZB of data every day and stores 1 PB of it, which is processed and kept in the OpenStack cloud.

To continue, Dr. Paul Calleja, from Cambridge University, explained how researchers use a purpose-built Bio-medical cloud to collect patient data from hospitals, then store and analyse that data to develop new medical treatments. For example, the project developed a statistical model that processes patients’ medical records in real time during surgical procedures and helps to cut surgical site infection rates by 58%.
He also presented another use case in which a cloud platform called OpenCB uses the Hadoop infrastructure for next-generation big data analytics that will be used by Genomics England to study the genomes of 100,000 people in the UK. OpenCB is already being used to analyse the genomes of 10,000 rare disease patients.
In a summit keynote, he also talked about using the Bio-medical cloud for computing and storing the data obtained from brain scanning facilities.

Research isn't all that OpenStack is used for in academia. Students study networking and security, for example, doing labs in OpenStack clouds. Universities build their supercomputers on the OpenStack platform to help both students and teachers carry out their doctoral and master's research projects. Add to that the fact that it's open source, and it's a no-lock-in choice for institutions that often have limited budgets.
There's a lot of focus these days on OpenStack as an enterprise tool, but remember, it was originally designed, in part, by NASA, so it's no wonder that OpenStack has so many followers in academic and scientific circles, a tradition that continues today.
Source: Mirantis

3 retailers transform their IT environments with cloud managed hosting

The speed of retail is faster than ever.
As technological innovation fuels change in consumer habits and unprecedented growth in global markets, a strategic approach to IT is increasingly important for success in retail. Today’s customers enjoy a wide variety of buying options, which means that service outages and other technology-related issues can cause more damage than just lost sales. These problems can significantly harm a retailer’s reputation, leading long-term customers to take their business elsewhere.
Many retail companies are partnering with cloud managed hosting providers to transform their IT infrastructures into environments that can help them respond faster, more efficiently and more securely to the demands of the modern marketplace.
Here are three examples of retailers from around the world who are deploying cloud managed hosting solutions to help them stay ahead of the competition:
Global retailer achieves business growth goals
A global retailer based in the US needed a more flexible IT environment to enable faster responses to new opportunities and unexpected requests. As a young company, the retailer couldn’t take on any major capital expenditures or hire a large technical team to achieve its growth objectives.
The company extended its existing systems environment with additional service, cloud and management functionality. By using cloud managed services for its Oracle applications, the company could focus on growing its business rather than managing IT, helping get products and services to market faster and more securely than before.
The new infrastructure helped the retailer drive a better customer experience by scaling capacity up or down as needed and rapidly adjusting its business model in response to sudden market changes, all while avoiding a large, upfront capital expenditure.
Telecom retailer saves millions
After completing a large merger, a telecommunications retailer based in the United Kingdom needed to transform several siloed infrastructures into a consolidated IT environment.
To help reduce costs and better handle fluctuations in customer demand, the retailer deployed its new IT infrastructure in a hybrid cloud environment. This deployment enables the company to scale capacity as needed for seasonal shopping spikes such as Black Friday or a new mobile phone launch.
By consolidating its infrastructure services with cloud managed services, the retailer expects to save millions of dollars within the first two years of implementation.
Danish retailer frees up IT staff for innovation
A Danish retailer with many stores, distribution centers and suppliers located across several countries sought to gain a competitive edge through a better deployment of the SAP applications that support employees and customers in its physical and digital retail spaces.
To streamline operations, the company partnered with a cloud managed hosting provider to design a customized infrastructure-as-a-service (IaaS) private cloud solution to facilitate the processing power, reliability and scalability to support its SAP environment.
The retailer now has access to expert skill sets that help optimize uptime, support the evolution of its SAP environment and promote growth in new channels while removing the burden of service management from in-house IT staff. This improved efficiency also helps the retailer reduce operational costs while placing more focus on improving customer experiences.
Learn more about how a cloud managed hosting solution can drive value across industries.
The post 3 retailers transform their IT environments with cloud managed hosting appeared first on news.
Source: Thoughts on Cloud