OpenStack Developer Mailing List Digest March 18-24

SuccessBot Says

Yolanda [1]: Wiki problems have been fixed, it’s up and running
johnthetubaguy [2]: First few patches adding real docs for policy have now merged in Nova. A much improved sample file [3].
Tell us yours via OpenStack IRC channels with message “#success <message>”
All: [4]

Release Naming for R

It’s time to pick a name for our “R” release.
The associated summit will be in Vancouver, so the geographic location has been chosen as “British Columbia”.
Rules:

Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of “Austin”. After “Z”, the next name should start with “A” again.
The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable.
The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process.
The name must be a single word with a maximum of 10 characters. Words that describe the feature should not be included, so “Foo City” or “Foo Peak” would both be eligible as “Foo”.
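For illustration, the lexical parts of these rules (a single word, Latin letters only, at most 10 characters, starting with “R”) can be expressed as a quick shell sketch. `is_valid_r_name` is a hypothetical helper, not official tooling, and the geographic requirement still needs a human reviewer:

```shell
# Hypothetical helper: checks that a candidate "R" release name starts with R,
# uses only ISO basic Latin letters, and is at most 10 characters long.
is_valid_r_name() {
  echo "$1" | grep -Eq '^[Rr][A-Za-z]{0,9}$'
}

is_valid_r_name "Rocky" && echo "Rocky: eligible"
is_valid_r_name "Foo Peak" || echo "Foo Peak: not eligible as-is"
```

Note that “Foo Peak” fails only because it is two words; per the rules above it would still be eligible as “Foo”.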

Full thread [5]

Moving Gnocchi out

The project Gnocchi, which has been tagged independent since its inception, has potential outside of OpenStack.
Being part of the big tent helped the project be built, but there is a belief that it restrains its adoption outside of OpenStack.
The team has decided to move it out of OpenStack [6].

It will also move out of the OpenStack infrastructure.

Gnocchi will continue to thrive and be used by OpenStack projects such as Ceilometer.
Full thread [7]

POST /api-wg/news

Guides under review:

Define pagination guidelines (recently rebooted) [8]
Create a new set of api stability guidelines [9]
Microversions: add next_min_version field in version body [10]
Mention max length limit information for tags [11]
Add API capabilities discovery guideline [12]
WIP: microversion architecture archival doc (very early; not yet ready for review) [13]

Full thread [14]

 
Source: openstack.org

Twitter turns to Watson to stop abuse before it starts

Twitter isn’t taking bullying on its platform sitting down.
In remarks at IBM InterConnect this week, Twitter Vice President of Data Strategy Chris Moody said stopping abuse is the company’s number one priority, though he admitted it is “a very, very hard challenge.”
Along with updates to its policies, another way Twitter is facing down that challenge is by bringing Watson in to analyze the wording of tweets.
“Watson is really good at understanding nuances in language and intention,” Moody said. “What we want to do is be able to identify abuse patterns early and stop this behavior before it starts.”
Moody added that the early testing of using Watson’s Tone Analyzer technology, which is available through IBM Bluemix, to identify abusive language is very promising. He said he’d like to return to next year’s InterConnect to share results.
For more of Moody&8217;s remarks, check out the full story on GeekWire or watch the video on IBMGO.
Source: Thoughts on Cloud

How Watson helps H&R Block deliver engaging customer experiences

Just under three-fourths of US citizens get tax refunds every year, according to H&R Block CEO Bill Cobb. For H&R Block customers, the number is higher; it’s closer to 85 percent.
Now that IBM Watson is helping H&R Block tax professionals guide customers through the filing process, the company is aiming to make that number rise even further.
Cobb joined IBM CEO Ginni Rometty on stage at IBM InterConnect Tuesday to explain just how H&R Block teamed up with IBM to get Watson working on taxes and how the whole process works.
“I think this is one of the best examples of two brands coming together where they worked seamlessly,” Cobb said after showing the ad that aired during this year’s big game. Rometty added that H&R Block is “a wonderful exemplar of continuous transformation.”
Cobb shared that, after the 2016 tax season, H&R Block research found that customers were looking for more engaging experiences. So he called IBM on his landline phone and asked how Watson could make that happen while still keeping tax professionals at the center of customer relationships. In June 2016, teams from both companies were working on a solution. Just eight months later, ads for the service were running on TV.
“Anyone who says IBM doesn’t work quickly, I’m here to tell you, IBM works fast,” Cobb said.
The cognitive interview
Here’s how the process works: a customer walks into an H&R Block office and sits down in front of a screen, where previously they usually just watched a tax professional type away. A tax professional begins the usual interview, asking about life events, potential deductions and possible credits.
Throughout that process, Watson is listening in, referencing 600 million data points and the entire US tax code, creating a “knowledge graph” that outlines all the areas where there might be savings.
After the interview, Watson displays a massive chart of all the possible deductions and credits, and the tax professional goes through that chart with the customer, explaining all the different ways to increase the refund.
Positive response
Even before H&R Block with Watson was branded, when it was just a pilot program, customer satisfaction was ticking up, Cobb said. Now it’s rising even more.
Tax professionals are responding positively, too, he said.
“This makes them feel like they’re really on the cutting edge,” Cobb said.
Cobb said Watson is “a beautiful fit for the nature of our business” and is likely to expand into other areas of H&R Block’s services, such as digital tax preparation.
Learn more about Watson on IBM Cloud.
Source: Thoughts on Cloud

Highlights from IBM InterConnect 2017: Day two

Yesterday, I posted a recap of the first day of IBM InterConnect. Today, I’m following up with more updates from day two in Las Vegas.
If you didn’t get a chance to see the event on the ground, you can watch the conference on demand using IBMGO.
The main event of the day was the keynote from IBM President and CEO Ginni Rometty. Attendees heard about how cloud and cognitive can come together to change the way we work and help solve difficult challenges.
[View the story “Highlights from Day 2 at IBM InterConnect 2017” on Storify]
Source: Thoughts on Cloud

How NBC Universal sped delivery and cut costs with DevOps

If businesses find the success with DevOps that NBC Universal has, it’s safe to say that DevOps will be sticking around.
At IBM InterConnect Monday, Angel Diaz, IBM vice president of developer technology and advocacy, told the crowd, “We are living in a technology-fueled business revolution.”
He was referring, of course, to DevOps, the approach to building software and applications that breaks down barriers between developers, IT staff and operation managers in an agile, iterative environment. It’s about tapping into the collective skill of what Diaz refers to as “the business technical pulse.”
“It’s all about the people; the mastery of the machine and the method,” Diaz said.
One organization that has mastered the machine and the method is NBC Universal. John Comas, who manages the company’s platform DevOps, was on hand to share his account of his company’s journey.
The approach
In a joint session at InterConnect titled “DevOps: The New Reality for Enterprise Transformation,” Comas said his company implemented DevOps to “modernize our technology to align with the business strategy.”
“DevOps gives us the agility to keep up with changes in the marketplace,” he said, “and it enables us to instantly respond to the ever-changing business requirements. Most importantly, it allows us to remain competitive with our corporate rivals.”
He told the crowd that he approached a DevOps culture at NBC Universal through what are commonly referred to as “The 5 C’s”:

Continuous integration
Continuous delivery
Continuous testing
Continuous feedback
Continuous monitoring

With DevOps, @NBCUniversal is “developing faster and more efficiently than ever and at much lower costs,” says John Comas. pic.twitter.com/JgPXmzs8eP
— IBM Cloud (@IBMcloud) March 20, 2017

“At its core, DevOps takes software development and systems integration and combines them together using agile methodology,” Comas said.
In his team’s continuous integration, developers commit code to the software configuration management and merge with the main line multiple times per day. Every commit results in a build. In continuous delivery, the same build is deployed to every environment, from development to production, and the team delivers smaller releases more often.
With continuous feedback, his team can provide “the pulse of the application development project” in real time, Comas said.
Continuous monitoring gives his team the ability to immediately alert the development team of any operational disruptions.
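The flow Comas describes (every commit produces a build, and that same build is promoted through each environment) can be sketched as a toy shell loop. This is an illustration only, not NBC Universal’s actual UrbanCode pipeline:

```shell
# Toy sketch: one commit yields one build, and that same build artifact
# moves through every environment, from development to production.
BUILD_ID="build-$(date +%s)"

for env in development staging production; do
  echo "deploying $BUILD_ID to $env"
done
```

The key property illustrated here is that the artifact is built once and reused, so what reaches production is bit-for-bit what was tested earlier.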
Comas said that NBC Universal’s software delivery life cycle was built on and powered through the IBM UrbanCode suite.
“It’s what I like to call ‘the central nexus of our DevOps,’” he said.
“We want to provide our consumers with the most comprehensive, robust, state-of-the-art, bleeding-edge DevOps capabilities in the industry,” he added. “We want to build software as efficiently as possible.”
The results
With DevOps, Comas said his team improved the quality of the code. He said the team is developing code “faster and more efficiently than ever and at much lower costs.”
His organization also brought together siloed teams: software development, quality assurance and technology operations.
But the real proof is in the numbers. For its Universal Orlando project, DevOps helped the business:

Reduce app deployment time from 2.5 weeks to 20 minutes
Reduce the time to run a 1,000-test suite from 6-8 weeks to three hours
Instantly provision production-like test environments with Skytap through UrbanCode

Get started on your own journey
If you’re looking to get started with DevOps, the Bluemix Garage Method combines practices from design thinking, agile development, lean startup and DevOps to build innovative solutions.
“Anyone can learn from the experiences that we’ve had at building this stuff together along with the open source communities,” Diaz said, “by understanding the practices in the Bluemix Garage Method.”
Find out more about how you can get started with the Bluemix Garage Method here.
Source: Thoughts on Cloud

YouTube Says It Wrongly Blocked Some LGBT Videos In "Restricted Mode"

The video site’s “restricted mode” aims to filter sensitive content, but several LGBT vloggers and artists say it went too far.

YouTube apologized on Monday after several prominent LGBT video creators accused the site of censoring their videos with a filtering mechanism that flags and hides content as inappropriate.

The site’s “restricted mode” lets users filter out “potentially objectionable content,” the platform says, but some vloggers said it’s actually hiding pro-LGBT material.

Videos ranging from a makeup lesson for trans women to an LGBT couple reciting wedding vows were no longer visible after the filter was enacted.

Source: BuzzFeed

Using Kubernetes Helm to install applications


After reading this introduction to Kubernetes Helm, you will know how to:

Install Helm
Configure Helm
Use Helm to determine available packages
Use Helm to install a software package
Retrieve a Kubernetes Secret
Use Helm to delete an application
Use Helm to roll back changes to an application

Difficulty is a relative thing. Deploying an application using containers can be much easier than trying to manage deployments of a traditional application over different environments, but trying to manage and scale multiple containers manually is much more difficult than orchestrating them using Kubernetes. But even managing Kubernetes applications looks difficult compared to, say, “apt-get install mysql”. Fortunately, the container ecosystem has now evolved to that level of simplicity. Enter Helm.
Helm is a Kubernetes-based package installer. It manages Kubernetes “charts”, which are “preconfigured packages of Kubernetes resources.” Helm enables you to easily install packages, make revisions, and even roll back complex changes.
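For orientation, a chart is just a directory of Kubernetes resource templates plus some identifying metadata. A minimal Chart.yaml, sketched along the lines of the stable/mysql chart used below (the field values here are illustrative, not the real chart's contents), looks roughly like this in the Helm v2 layout:

```yaml
# Chart.yaml: the chart's identifying metadata
name: mysql
version: 0.2.5
description: A chart that deploys MySQL on Kubernetes
```

The name and version together identify a release of the chart; the version is what lets Helm track upgrades and rollbacks.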
Next week, my colleague Maciej Kwiek will be giving a talk at Kubecon about Boosting Helm with AppController, so we thought this might be a good time to give you an introduction to what it is and how it works.
Let’s take a quick look at how to install, configure, and utilize Helm.
Install Helm
Installing Helm is actually pretty straightforward.  Follow these steps:

Download the latest version of Helm from https://github.com/kubernetes/helm/releases.  (Note that if you are using an older version of Kubernetes (1.4 or below) you might have to downgrade Helm due to breaking changes.)
Unpack the archive:
$ gunzip helm-v2.2.3-darwin-amd64.tar.gz
$ tar -xvf helm-v2.2.3-darwin-amd64.tar
x darwin-amd64/
x darwin-amd64/helm
x darwin-amd64/LICENSE
x darwin-amd64/README.md
Next move the helm executable to your path:
$ mv dar*/helm /usr/local/bin/.

Finally, initialize helm to both set up the local environment and to install the server portion, Tiller, on your cluster.  (Helm will use the default cluster for Kubernetes, unless you tell it otherwise.)
$ helm init
Creating /Users/nchase/.helm
Creating /Users/nchase/.helm/repository
Creating /Users/nchase/.helm/repository/cache
Creating /Users/nchase/.helm/repository/local
Creating /Users/nchase/.helm/plugins
Creating /Users/nchase/.helm/starters
Creating /Users/nchase/.helm/repository/repositories.yaml
Writing to /Users/nchase/.helm/repository/cache/stable-index.yaml
$HELM_HOME has been configured at /Users/nchase/.helm.

Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!

Note that you can also upgrade the Tiller component using:
helm init --upgrade
That’s all it takes to install Helm itself; now let’s look at using it to install an application.
Install an application with Helm
One of the things that Helm does is enable authors to create and distribute their own applications using charts; to get a full list of the charts that are available, you can simply ask:
$ helm search
NAME                          VERSION DESCRIPTION                                       
stable/aws-cluster-autoscaler 0.2.1   Scales worker nodes within autoscaling groups.    
stable/chaoskube              0.5.0   Chaoskube periodically kills random pods in you…
stable/chronograf             0.1.2   Open-source web application written in Go and R…

In our case, we’re going to install MySQL from the stable/mysql chart. Follow these steps:

First update the repo, just as you’d do with apt-get update:
$ helm repo update
Hang tight while we grab the latest from your chart repositories…
…Skip local chart repository
Writing to /Users/nchase/.helm/repository/cache/stable-index.yaml
…Successfully got an update from the “stable” chart repository
Update Complete. ⎈ Happy Helming!⎈

Next, we’ll do the actual install:
$ helm install stable/mysql
This command produces a lot of output, so let’s take it one step at a time. First, we get information about the release that’s been deployed:
NAME:   lucky-wildebeest
LAST DEPLOYED: Thu Mar 16 16:13:50 2017
NAMESPACE: default
STATUS: DEPLOYED
As you can see, it’s called lucky-wildebeest, and it’s been successfully DEPLOYED.
Your release will, of course, have a different name. Next, we get the resources that were actually deployed by the stable/mysql chart:
RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     0s

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-11ebe330-0a85-11e7-9bb2-5ec65a93c5f1  8Gi       RWO          0s

==> v1/Service
NAME                    CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE
lucky-wildebeest-mysql  10.0.0.13   <none>       3306/TCP  0s

==> extensions/v1beta1/Deployment
NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
lucky-wildebeest-mysql  1        1        1           0          0s
This is a good example because we can see that this chart configures multiple types of resources: a Secret (for passwords), a persistent volume (to store the actual data), a Service (to serve requests) and a Deployment (to manage it all).
The chart also enables the developer to add notes:
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local

To get your root password run:
   kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:
Run an Ubuntu pod that you can use as a client:
   kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

Install the mysql client:
   $ apt-get update && apt-get install mysql-client -y

Connect using the mysql cli, then provide your password:
$ mysql -h lucky-wildebeest-mysql -p

These notes are the basic documentation a user needs to use the actual application. Now let’s see how we put it all to use.
Connect to mysql
The first lines of the notes make it seem deceptively simple to connect to MySql:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local
Before you can do anything with that information, however, you need to do two things: get the root password for the database, and get a working client with network access to the pod hosting it.
Get the mysql password
Most of the time, you’ll be able to get the root password by simply executing the code the developer has left you:
$ kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
DBTzmbAikO
Some systems, notably macOS, will give you an error:
$ kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
Invalid character in input stream.
This is because of an error in base64 that adds an extraneous character. In this case, you will have to extract the password manually. Basically, we’re going to execute the same steps as this line of code, but one at a time.
Start by looking at the Secrets that Kubernetes is managing:
$ kubectl get secrets
NAME                     TYPE                                  DATA      AGE
default-token-0q3gy      kubernetes.io/service-account-token   3         145d
lucky-wildebeest-mysql   Opaque                                2         20m
It’s the second, lucky-wildebeest-mysql, that we’re interested in. Let’s look at the information it contains:
$ kubectl get secret lucky-wildebeest-mysql -o yaml
apiVersion: v1
data:
 mysql-password: a1p1THdRcTVrNg==
 mysql-root-password: REJUem1iQWlrTw==
kind: Secret
metadata:
 creationTimestamp: 2017-03-16T20:13:50Z
 labels:
   app: lucky-wildebeest-mysql
   chart: mysql-0.2.5
   heritage: Tiller
   release: lucky-wildebeest
 name: lucky-wildebeest-mysql
 namespace: default
 resourceVersion: "43613"
 selfLink: /api/v1/namespaces/default/secrets/lucky-wildebeest-mysql
 uid: 11eb29ed-0a85-11e7-9bb2-5ec65a93c5f1
type: Opaque
You probably already figured out where to look, but the developer’s instructions told us the raw password data was here:
jsonpath="{.data.mysql-root-password}"
So we’re looking for this:
apiVersion: v1
data:
 mysql-password: a1p1THdRcTVrNg==
 mysql-root-password: REJUem1iQWlrTw==
kind: Secret
metadata:

Now we just have to go ahead and decode it:
$ echo "REJUem1iQWlrTw==" | base64 --decode
DBTzmbAikO
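The manual steps above can also be collapsed into a single pipeline once the secret YAML is saved locally. This is a sketch using the example values from this article; the file name is made up, and a real deployment's values will differ:

```shell
# Save the relevant part of the secret locally (values from the example above).
cat > secret.yaml <<'EOF'
data:
  mysql-password: a1p1THdRcTVrNg==
  mysql-root-password: REJUem1iQWlrTw==
EOF

# Pull out the root password field and decode it in one go.
grep 'mysql-root-password' secret.yaml | awk '{print $2}' | base64 --decode; echo
# prints: DBTzmbAikO
```

This sidesteps the jsonpath/base64 quirk entirely, at the cost of an intermediate file.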
Finally! So let’s go ahead and connect to the database.
Create the mysql client
Now we have the password, but if we try to connect with the mysql client on any old machine, we’ll find that there’s no connectivity outside of the cluster. For example, if I try to connect with my local mysql client, I get an error:
$ ./mysql -h lucky-wildebeest-mysql.default.svc.cluster.local -p
Enter password:
ERROR 2005 (HY000): Unknown MySQL server host 'lucky-wildebeest-mysql.default.svc.cluster.local' (0)
So what we need to do is create a pod on which we can run the client. Start by creating a new pod using the ubuntu:16.04 image:
$ kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never

$ kubectl get pods
NAME                                      READY     STATUS             RESTARTS   AGE
hello-minikube-3015430129-43g6t           1/1       Running            0          1h
lucky-wildebeest-mysql-3326348642-b8kfc   1/1       Running            0          31m
ubuntu                                   1/1       Running            0          25s
When it’s running, go ahead and attach to it:
$ kubectl attach ubuntu -i -t

Hit enter for command prompt
Next install the mysql client:
root@ubuntu2:/# apt-get update && apt-get install mysql-client -y
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]

Setting up mysql-client-5.7 (5.7.17-0ubuntu0.16.04.1) …
Setting up mysql-client (5.7.17-0ubuntu0.16.04.1) …
Processing triggers for libc-bin (2.23-0ubuntu5) …
Now we should be ready to actually connect. Remember to use the password we extracted in the previous step.
root@ubuntu2:/# mysql -h lucky-wildebeest-mysql -p
Enter password:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 410
Server version: 5.7.14 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
Of course you can do what you want here, but for now we’ll go ahead and exit both the database and the container:
mysql> exit
Bye
root@ubuntu2:/# exit
logout
So we’ve successfully installed an application, in this case MySQL, using Helm. But what else can Helm do?
Working with revisions
So now that you’ve seen Helm in action, let’s take a quick look at what you can actually do with it. Helm is designed to let you install, upgrade, delete, and roll back revisions. We’ll get into more details about upgrades in a later article on creating charts, but let’s quickly look at deleting and rolling back revisions:
First off, each time you make a change with Helm, you’re creating a Revision. By deploying MySQL, we created a Revision, which we can see in this list:
$ helm ls
NAME              REVISION UPDATED                  STATUS   CHART         NAMESPACE
lucky-wildebeest  1        Sun Mar 19 22:07:56 2017 DEPLOYED mysql-0.2.5   default
operatic-starfish 2        Thu Mar 16 17:10:23 2017 DEPLOYED redmine-0.4.0 default
As you can see, we created a revision called lucky-wildebeest, based on the mysql-0.2.5 chart, and its status is DEPLOYED.
We could also get back the information we got when it was first deployed by getting the status of the revision:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 22:07:56 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     43m

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-08e0027a-0d12-11e7-833b-5ec65a93c5f1  8Gi       RWO          43m

Now, if we wanted to, we could go ahead and delete the revision:
$ helm delete lucky-wildebeest
Now if you list all of the active revisions, it’ll be gone.
$ helm ls
However, even though the revision is gone, you can still see the status:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 22:07:56 2017
NAMESPACE: default
STATUS: DELETED

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local

To get your root password run:

   kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:

Run an Ubuntu pod that you can use as a client:

   kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

Install the mysql client:

   $ apt-get update && apt-get install mysql-client -y

Connect using the mysql cli, then provide your password:

$ mysql -h lucky-wildebeest-mysql -p
OK, so what if we decide that we’ve changed our mind, and we want to roll back that deletion? Fortunately, Helm is designed for that. We can specify that we want to roll back our application to a specific revision (in this case, 1).
$ helm rollback lucky-wildebeest 1
Rollback was a success! Happy Helming!
We can see that the application is back, and the revision has been incremented:
NAME              REVISION UPDATED                  STATUS   CHART         NAMESPACE
lucky-wildebeest  2        Sun Mar 19 23:46:52 2017 DEPLOYED mysql-0.2.5   default

We can also check the status:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 23:46:52 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     21m

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-dad1b896-0d1f-11e7-833b-5ec65a93c5f1  8Gi       RWO          21m

Next time, we’ll talk about how to create charts for Helm. Meanwhile, if you’re going to be at Kubecon, don’t forget Maciej Kwiek’s talk on Boosting Helm with AppController.
The post Using Kubernetes Helm to install applications appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Open Technology Summit focuses on contributors

The Open Technology Summit, now in its fifth year, has become an annual state of the union for the established and budding open source projects that IBM supports.
The conclusion drawn at Sunday’s OTS during IBM InterConnect in Las Vegas is that the state of open tech is strong and getting stronger.
The event brought together leaders from some of today’s top open source projects: OpenStack, Cloud Foundry, the Linux Foundation, JS Foundation and the Apache Software Foundation, plus the IBM leaders who support these projects.
“The open source community is only as good as the people who are contributing,” Willie Tejada, IBM Chief Developer Advocate, told the capacity crowd.

“We’ve been systematically building an open innovation platform — cloud, etc.” @angelluisdiaz https://t.co/HHMqWmi3v4 pic.twitter.com/945FkRbkZg
— IBM Cloud (@IBMcloud) March 20, 2017

Judging by the success stories shared on stage, contributor quality appears to be quite high. In short, the open source community is thriving.
Finding success in the open
The Linux Foundation has become one of the great success stories in open source, thanks largely to the huge number of contributors it has attracted. In his talk, the organization’s executive director, Jim Zemlin, told the crowd that across its various projects, contributors add a staggering 10,800 lines of code, remove 5,300 lines of code and modify 1,875 lines of code per day.
Zemlin called open source “the new norm” for software and application development.

“Open source is now the new norm for software development.” — @jzemlin #IBMOTS https://t.co/y3V3IGfcTK pic.twitter.com/83k9yLdJdf
— IBM Cloud (@IBMcloud) March 20, 2017

Cloud Foundry Foundation executive director Abby Kearns stressed her organization’s commitment to bringing forward greater diversity among its community.
“When I think about innovation, I think about diversity,” said Kearns, who took over as executive director four months ago. “We have the potential to change our industry, our countries and the world.”
Like Cloud Foundry, the OpenStack community has seen tremendous growth in its user community thanks to increased integration and cooperation with other open source communities. OpenStack Foundation executive director Jonathan Bryce and Lauren Sell, vice president of marketing and community services, shared their community’s pithy, tongue-in-cheek motto:

“In 2014, there was 323 developers contributing to OpenStack. In 2016, we had 531.” @jbryce #IBMOTS #ibminterconnect pic.twitter.com/6PxYzrVxsL
— IBM WebSphere (@IBMWebSphere) March 20, 2017

The community, which aims to create a single platform for bare metal servers, virtual machines and containers, has seen 5 million cores deployed on it. Contributors have jumped from 323 in 2014 to 531 in 2016.
Sell echoed several of the other speakers, when she noted that we’re living in a “multi-cloud world,” and that open technologies are enabling it.
IBM: Contributors, collaborators, solution providers
While it’s well known that IBM has helped start and lead many of the open source communities that it supports, the company also offers a robust set of unique capabilities around these technologies. The company is constantly working to expand its offerings around open technologies.
For example, IBM Cloud Platform Vice President and CTO Jason McGee previewed the announcement that Kubernetes is now available on IBM Bluemix Container Service.
“This service lets us bring together the power of that project and all of the amazing technology in the engine with Docker and the orchestration layer with Kubernetes and combine it with the power of cloud-based delivery,” McGee said.
David Kenny, senior vice president, IBM Watson and Cloud Platform, also spoke about “the power of the community to move the technology faster and to consume it and learn from it.”
“We’re very much committed as IBM to be participants,” he said. “Certainly IBM Cloud and IBM Watson are two pretty big initiatives at IBM these days, and both of those have come together around the belief that open source is a key part of our platform.”

“IBMCloud and Watson have come together around the belief that open source is a key part of our platform.” – @davidwkenny #IBMOTS pic.twitter.com/gU9DCzMsoC
— Kevin J. Allen (@KevJosephAllen) March 20, 2017

Moving forward as a community
Looking toward the future of open tech, it was clear that its success will depend on the next generation of contributors.
Tejada went so far as to call the open source movement a religion. “The most important piece is to understand the core premises of the religion.” He identified those as:

Embrace the new face of development
Acknowledge and adapt to the new methodologies of application development
Seize the opportunity to do more with less at an accelerated rate

For more on IBM work in open technology, visit developerWorks Open.
The post Open Technology Summit focuses on contributors appeared first on news.
Source: Thoughts on Cloud

Detours on the way to microservices

In 2008, I first heard Adrian Cockcroft of Netflix describe microservices as “fine-grained service oriented architecture.” I’d spent the previous six years wrestling with the more brutish, coarse-grained service-oriented architecture, its standards and so-called “best practices.” I knew then that unlike web-scale offerings such as Netflix, the road to microservices adoption by companies would have its roadblocks and detours.
It’s not quite ten years later, and I am about to attend IBM InterConnect, where microservice adoption by business seems inescapable. What better time to consider these detours and how to avoid them?
Conway’s law may be bi-directional
Melvin Conway introduced the idea that’s become known as Conway’s Law: “Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”
But I saw it occur in reverse: when enterprise software organizations first decided to adopt microservices and their disciplines, I observed development teams organize themselves around the (micro-) services being built. When constructing enterprise applications by coding and “wiring up” small, independently-operating services, the development organization seemed to adjust itself to fit the software architecture, thereby creating silos and organizational friction.
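The “wiring up” of small, independently-operating services is easier to picture with a toy example. The sketch below is illustrative only (the names `InventoryHandler` and `can_fulfil` are my own, not from the post): one tiny stdlib HTTP service with a single responsibility, plus a consumer that talks to it over the network rather than through in-process calls. This is the kind of unit a single team might end up owning end to end.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class InventoryHandler(BaseHTTPRequestHandler):
    """A minimal 'inventory' microservice: one resource, one responsibility."""

    STOCK = {"widget": 3}

    def do_GET(self):
        item = self.path.lstrip("/")
        body = json.dumps({"item": item, "count": self.STOCK.get(item, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet


def start_service():
    # Bind to an ephemeral port and serve requests on a background thread.
    server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


def can_fulfil(order_item, inventory_port):
    """An 'order' component wired to the inventory service over HTTP."""
    url = f"http://127.0.0.1:{inventory_port}/{order_item}"
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read())
    return data["count"] > 0


srv = start_service()
print(can_fulfil("widget", srv.server_port))  # True
srv.shutdown()
```

The point of the sketch is organizational, not technical: once the only contract between the two components is an HTTP interface, the teams behind them can (and, in my experience, do) drift apart along exactly that boundary.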
More than the sum of its parts
When an organization first adopts microservices in its architecture, there are resource shortages. People who are skilled in the ways of microservices find themselves stretched far too thin. And specific implementation languages, frameworks or platforms can be in short supply. There’s loss of momentum, attention and effective use of time because the “experts” must continually switch context and change the focus of their attention.
As is usually the case with resource shortage, the issue is one of prioritization: When there are hundreds or even thousands of microservices to build and maintain, how are allocations of scarce resources going to be made? Who makes them and on what basis?
The cloud-native management tax
The adoption of microservices requires a variety of specialized, independent platforms to which developer, test and operations teams must attend. Many of these come with their own forms of management and management tooling. In one case, I looked through the list of management interfaces and tools for a newly-minted, cloud-native application and discovered more than forty separate management tools in use. These tools covered the different programming languages; authentication; authorization; reporting; databases; caches; platform libraries; service dependencies; pipeline dependencies; security threat modeling; audits; workflow; log aggregation and much more. The full list was astonishing.
The benefits of cloud-native architecture do not come without a price: organizations must take on additional management tooling, plus the cost of becoming skilled in those tools.
Carrying forward the technical debt
When a company embraces cloud migration or digital transformation, a team may be chartered to re-architect and re-implement an existing, monolithic application, its associated data, external dependencies and technical interconnections. Too often, I discovered that the shortcuts and hard-coded aspects of the existing application were being re-implemented as well. When the objective was simply to migrate an application, a step in the process seemed to be missing.
In an upcoming blog post, I’ll consider some of the common detours and look to what practices and technologies are being used to avoid them.
Join me and other industry experts as we explore the world of microservices at IBM InterConnect on March 19 – 23, 2017.
The post Detours on the way to microservices appeared first on news.
Source: Thoughts on Cloud