Honda and Watson team up for safe driving

At IBM InterConnect, IBM CEO Ginni Rometty talked about the three principles of augmented intelligence in this AI era: service to mankind, transparency, and skills.
She also discussed how IBM Watson can make us a better version of ourselves. There’s no better example of this than Honda R&D’s Driver Coaching System prototype that helps new as well as older drivers learn how to spot and avoid potentially dangerous road situations.
Honda R&D realized that both Japan and US markets have the same issues: an aging population and a growing share of young drivers. For both countries, it’s these two groups who are most likely to be in deadly auto accidents.
Honda R&D has analyzed the behaviors, skills and judgments that take place in real time as experienced drivers successfully encounter dangerous situations. By understanding the behaviors of very skilled drivers — how they gauge and react to risk — they can apply that to all drivers. The focus is on elderly and new drivers, who are especially prone to accidents, and on how lives can be saved by actively coaching them.
Good driving deconstructed
Let’s assume an experienced driver can spot danger a few seconds faster than a novice. The Driver Coaching System can give that extra reaction time to drivers of all experience levels. It also supports a new driver who faces the anxiety of operating a car: Watson acts as a safe driving coach, engaging in an encouraging conversation as the new driver builds confidence and learns to spot dangerous situations.
I love to drive. I own a fairly exotic car and often “open it up” to feel the raw exhilaration of speed, power and control. My mind clears and focuses solely on the drive and the extremes of the car: the calculus of the curves, the feel of acceleration, timing of shifting and braking. It takes concentration, awareness, risk evaluation, and reaction. I have experience. But does that make me a good driver?
Honda R&D has deconstructed what it takes to be a good driver.
How does it work?
The Driver Coaching System continually monitors the driving situation, or “the scene,” evaluating speed (overall and relative to other cars), distance to surrounding objects, adherence to the lane, and braking times and distances. Watson uses that information to offer real-time guidance and advice in Japanese.
It’s also gauging the driver and classifying the driver’s skill and mental state based on changing behaviors and conditions.
I learned the hard way that I’m not a “good driver” all the time. I wrecked a rental car in an accident leaving an airport in an unfamiliar city. In that situation, I was more like a novice driver with the anxiety of driving an unfamiliar car in an unfamiliar place.
The Driver Coaching System detects whether a driver is driving outside their norms and is perhaps anxious, distracted or tired. With this information, the prototype can classify the driver’s current state: normal or not, attentive or inattentive, driving conservatively or aggressively. It adjusts its coaching to fit the driver’s state of mind.
The scene and the driver’s behavior determine the coaching Watson provides to the driver. The goal is for Watson’s coaching to be timely, friendly, supportive and welcomed. Watson is coaching the driver in Japanese.
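The article doesn’t describe Honda R&D’s actual classifier, but the core idea — labeling a driver’s state from deviations against their own learned baseline — can be sketched in a few lines. Every signal name, threshold, and label below is hypothetical and purely for illustration:

```python
# Hypothetical sketch of driver-state classification: compare current
# driving signals against the driver's personal baseline. The signal
# names and thresholds here are invented, not Honda R&D's model.

def classify_state(baseline, current):
    """Label the driver's state from deviations relative to their norm."""
    speed_dev = (current["speed"] - baseline["speed"]) / baseline["speed"]
    jitter_dev = current["lane_jitter"] - baseline["lane_jitter"]

    if jitter_dev > 0.5:    # weaving far more than usual: likely distracted
        return "inattentive"
    if speed_dev > 0.2:     # well above the driver's own norm
        return "aggressive"
    if speed_dev < -0.2:    # unusually slow: possibly anxious or tired
        return "conservative"
    return "normal"

baseline = {"speed": 60.0, "lane_jitter": 0.1}   # learned over time
print(classify_state(baseline, {"speed": 61.0, "lane_jitter": 0.12}))  # normal
print(classify_state(baseline, {"speed": 80.0, "lane_jitter": 0.2}))   # aggressive
```

A real system would learn the baseline continuously and weigh many more signals, but the principle is the same: the coaching keys off deviation from the driver’s own norms, not an absolute standard.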
How is Watson’s Japanese?
Japanese is a high-context language: meaning is expressed between the lines, and the listener needs background knowledge across multiple dimensions to grasp the intended meaning of what’s being said. As Watson continues to improve how it speaks and understands Japanese, it has to truly appreciate and apply this cultural context. This is quite different from how Watson originally learned English.
“A conversation with Watson is getting more accurate and showing improved understanding of human intention,” said Yoshimitsu Akuta, chief engineer, Honda R&D.  “I look forward to more possibilities with the context of the Japanese language. I think both English and Japanese speakers will be excited to have conversations with Watson as their friend in the car.”

Rapid prototype
The team at Honda R&D, led by Akuta, used agile development and had a proof of concept in about two months. The Driver Coaching System was built on IBM Bluemix and uses Watson Conversation, Watson Natural Language Understanding and Watson Translator.
Build with these services and many more on the Watson Developer Cloud.
The post Honda and Watson team up for safe driving appeared first on news.
Source: Thoughts on Cloud

Unveiling IBM Cloud Product Insights to unlock value through cloud

Today at IBM InterConnect, the team pulled back the curtain on a new tool to help you find value in your data: IBM Cloud Product Insights, a product that helps you connect your existing middleware infrastructure to the cloud. Why is this so important?
It’s anticipated that the future of business will be built in the cloud. According to a study by Technology Business Research, hybrid is the single greatest growth opportunity within cloud, with an amazing 32 percent projected growth among those surveyed.
Moving to a hybrid cloud strategy may be a critical step for your business, and IBM can help you get there. Many leaders know their organization needs to move to the cloud but the challenge is understanding where to start. Others may struggle to optimize and balance software deployments across their cloud-based and on-premises environments after adoption. Product Insights is designed to help you solve these issues. And here’s the good news: the first tier of Product Insights comes at no cost.
What IBM Cloud Product Insights can do for you
IBM Cloud Product Insights is a new product that provides near real-time information for your existing IBM on-premises software. It assists administrators by providing a full view of their deployed instances and usage data through cloud-based dashboards.
Product Insights features include:

Quick registration of IBM software products
Groupings of deployed products into cross-product dashboards to focus on specific environments
Insights into which products and versions are deployed and managed, all from a single console
Visibility into product usage and analytics to help with planning and optimization

Combine these capabilities with the built-in intelligent recommendations and you can get insight into how to adjust your services to reduce costs and improve performance. You can quickly ramp up deployments using cloud services, get insight into licensing and usage and find additional software services to meet your needs.
IBM strives to help clients connect to the cloud and achieve that coveted point of presence, so that they can offer customers the perfect service at the perfect time. If your environment exists on-premises and you are ready to embrace the power of cloud, IBM will help bring you the insights, speed and flexibility to move services and quickly pursue opportunities important to your growth.
To learn more about IBM Cloud Product Insights, watch the intro video or visit the new website.
 

How NBC Universal sped delivery and cut costs with DevOps

If other businesses find the success with DevOps that NBC Universal has, it’s safe to say the approach will be sticking around.
At IBM InterConnect Monday, Angel Diaz, IBM vice president of developer technology and advocacy, told the crowd, “We are living in a technology-fueled business revolution.”
He was referring, of course, to DevOps, the approach to building software and applications that breaks down barriers between developers, IT staff and operation managers in an agile, iterative environment. It’s about tapping into the collective skill of what Diaz refers to as “the business technical pulse.”
“It’s all about the people; the mastery of the machine and the method,” Diaz said.
One organization that has mastered the machine and the method is NBC Universal. John Comas, who manages the company’s platform DevOps, was on hand to share his account of his company’s journey.
The approach
In a joint session at InterConnect titled “DevOps: The New Reality for Enterprise Transformation,” Comas said his company implemented DevOps to “modernize our technology to align with the business strategy.”
“DevOps gives us the agility to keep up with changes in the marketplace,” he said, “and it enables us to instantly respond to the ever-changing business requirements. Most importantly, it allows us to remain competitive with our corporate rivals.”
He told the crowd that he approached a DevOps culture at NBC Universal through what are commonly referred to as “The 5 C’s”:

Continuous integration
Continuous delivery
Continuous testing
Continuous feedback
Continuous monitoring

“At its core, DevOps takes software development and systems integration and combines them together using agile methodology,” Comas said.
In his team’s continuous integration, developers commit code to the software configuration management system and merge with the main line multiple times per day. Every commit results in a build. In continuous delivery, the same build is deployed to every environment, from development to production, and the team delivers smaller releases more often.
With continuous feedback, his team can provide “the pulse of the application development project” in real time, Comas said.
Continuous monitoring gives his team the ability to immediately alert the development team of any operational disruptions.
Comas said that NBC Universal’s software delivery life cycle was built on and powered through the IBM UrbanCode suite.
“It’s what I like to call ‘the central nexus of our DevOps,’” he said.
“We want to provide our consumers with the most comprehensive, robust, state-of-the-art, bleeding-edge DevOps capabilities in the industry,” he added. “We want to build software as efficiently as possible.”
The results
With DevOps, Comas said his team improved the quality of the code. He said the team is developing code “faster and more efficiently than ever and at much lower costs.”
His organization also brought together siloed teams: software development, quality assurance and technology operations.
But the real proof is in the numbers. For its Universal Orlando project, DevOps helped the business:

Reduce app deployment time from 2.5 weeks to 20 minutes
Reduce the time to run a 1,000-test suite from 6 to 8 weeks to three hours
Instantly provision production-like test environments with Skytap through UrbanCode

Get started on your own journey
If you’re looking to get started with DevOps, the Bluemix Garage Method combines practices from design thinking, agile development, lean startup and DevOps to build innovative solutions.
“Anyone can learn from the experiences that we’ve had at building this stuff together along with the open source communities,” Diaz said, “by understanding the practices in the Bluemix Garage Method.”
Find out more about how you can get started with the Bluemix Garage Method here.

AT&T and IBM partner for analytics with Watson

Today at IBM InterConnect, we learned that IBM is partnering with AT&T to support enterprise customers’ Internet of Things (IoT) deployments with data insights.
This data is huge for business customers, but it’s only valuable with real-time, meaningful insights.
AT&T will be using a variety of IBM products including:

Watson IoT Platform: to build the next generation of connected industrial IoT devices and products that continuously learn from the physical world
IBM Watson Data Platform: a fast data ingestion engine combined with cognitive-powered decision making to help uncover business insights and value from data, whether from the weather, the road, social media, or customer sales
IBM Machine Learning Service: used by AT&T to give their customers access to machine learning

Benefiting AT&T customers
Companies can use IoT data to predict machine maintenance needs, but how does this impact AT&T customers?
For example, say an oil and gas company wants to detect unusual events in its wells. By using AT&T’s IoT network and the IBM Watson Data Platform, AT&T’s IoT analytics solutions will ingest data from hundreds of wells, creating the models necessary with appropriate machine learning libraries and open source technology to help predict potential failures or machine malfunctions. The company will be able to detect anomalies in less time and with greater accuracy.
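As a toy illustration of the idea — not the actual AT&T/IBM pipeline — anomaly detection over well sensor readings can be as simple as flagging values that deviate sharply from recent history. The sensor values and thresholds below are invented:

```python
# Illustrative sketch: flag anomalous well-sensor readings using a
# rolling z-score over the preceding window of history.
from statistics import mean, stdev

def find_anomalies(readings, window=5, threshold=3.0):
    """Return indices whose reading deviates more than `threshold`
    standard deviations from the mean of the previous `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A pressure series with one sudden spike at index 6:
pressure = [101.2, 101.0, 101.3, 101.1, 101.2, 101.1, 140.5, 101.2]
print(find_anomalies(pressure))  # [6]
```

Production systems would use far richer models and streaming ingestion, but the principle — learn what normal looks like, then flag departures from it — is the same.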
“We have more than 30 million connections on our network today and that number continues to grow, primarily driven by enterprise adoption,” said Chris Penrose, president of IoT solutions at AT&T. “Integrating the IBM Watson Data Platform into our IoT capabilities will be huge for our enterprise customers.”
Bringing IoT innovations to market
The news today builds on existing collaborations between AT&T and IBM to deliver new IoT innovations to the market. The companies’ strategic alliance brings together leading wireless connectivity, advanced analytics and cognitive capabilities for AT&T’s enterprise customers to improve their business processes.
Stay tuned for further announcements live from IBM InterConnect.
Start your next IoT project.
A version of this article originally appeared on the IBM Internet of Things blog.


Using Kubernetes Helm to install applications

After reading this introduction to Kubernetes Helm, you will know how to:

Install Helm
Configure Helm
Use Helm to determine available packages
Use Helm to install a software package
Retrieve a Kubernetes Secret
Use Helm to delete an application
Use Helm to roll back changes to an application

Difficulty is a relative thing. Deploying an application using containers can be much easier than trying to manage deployments of a traditional application over different environments, but trying to manage and scale multiple containers manually is much more difficult than orchestrating them using Kubernetes.  But even managing Kubernetes applications looks difficult compared to, say, "apt-get install mysql". Fortunately, the container ecosystem has now evolved to that level of simplicity. Enter Helm.
Helm is a Kubernetes-based package installer. It manages Kubernetes "charts", which are "preconfigured packages of Kubernetes resources."  Helm enables you to easily install packages, make revisions, and even roll back complex changes.
Next week, my colleague Maciej Kwiek will be giving a talk at Kubecon about Boosting Helm with AppController, so we thought this might be a good time to give you an introduction to what it is and how it works.
Let's take a quick look at how to install, configure, and utilize Helm.
Install Helm
Installing Helm is actually pretty straightforward.  Follow these steps:

Download the latest version of Helm from https://github.com/kubernetes/helm/releases.  (Note that if you are using an older version of Kubernetes (1.4 or below) you might have to downgrade Helm due to breaking changes.)
Unpack the archive:
$ gunzip helm-v2.2.3-darwin-amd64.tar.gz
$ tar -xvf helm-v2.2.3-darwin-amd64.tar
x darwin-amd64/
x darwin-amd64/helm
x darwin-amd64/LICENSE
x darwin-amd64/README.md
Next move the helm executable to your path:
$ mv darwin-amd64/helm /usr/local/bin/

Finally, initialize helm to both set up the local environment and to install the server portion, Tiller, on your cluster.  (Helm will use the default cluster for Kubernetes, unless you tell it otherwise.)
$ helm init
Creating /Users/nchase/.helm
Creating /Users/nchase/.helm/repository
Creating /Users/nchase/.helm/repository/cache
Creating /Users/nchase/.helm/repository/local
Creating /Users/nchase/.helm/plugins
Creating /Users/nchase/.helm/starters
Creating /Users/nchase/.helm/repository/repositories.yaml
Writing to /Users/nchase/.helm/repository/cache/stable-index.yaml
$HELM_HOME has been configured at /Users/nchase/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!

Note that you can also upgrade the Tiller component using:
helm init --upgrade
That's all it takes to install Helm itself; now let's look at using it to install an application.
Install an application with Helm
One of the things that Helm does is enable authors to create and distribute their own applications using charts; to get a full list of the charts that are available, you can simply ask:
$ helm search
NAME                          VERSION DESCRIPTION                                       
stable/aws-cluster-autoscaler 0.2.1   Scales worker nodes within autoscaling groups.    
stable/chaoskube              0.5.0   Chaoskube periodically kills random pods in you…
stable/chronograf             0.1.2   Open-source web application written in Go and R…

In our case, we're going to install MySQL from the stable/mysql chart. Follow these steps:

First update the repo, just as you'd do with apt-get update:
$ helm repo update
Hang tight while we grab the latest from your chart repositories…
…Skip local chart repository
Writing to /Users/nchase/.helm/repository/cache/stable-index.yaml
…Successfully got an update from the “stable” chart repository
Update Complete. ⎈ Happy Helming!⎈

Next, we'll do the actual install:
$ helm install stable/mysql
This command produces a lot of output, so let's take it one step at a time.  First, we get information about the release that's been deployed:
NAME:   lucky-wildebeest
LAST DEPLOYED: Thu Mar 16 16:13:50 2017
NAMESPACE: default
STATUS: DEPLOYED
As you can see, it's called lucky-wildebeest, and it's been successfully DEPLOYED.
Your release will, of course, have a different name. Next, we get the resources that were actually deployed by the stable/mysql chart:
RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     0s

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-11ebe330-0a85-11e7-9bb2-5ec65a93c5f1  8Gi       RWO          0s

==> v1/Service
NAME                    CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE
lucky-wildebeest-mysql  10.0.0.13   <none>       3306/TCP  0s

==> extensions/v1beta1/Deployment
NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
lucky-wildebeest-mysql  1        1        1           0          0s
This is a good example because we can see that this chart configures multiple types of resources: a Secret (for passwords), a persistent volume (to store the actual data), a Service (to serve requests) and a Deployment (to manage it all).
The chart also enables the developer to add notes:
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local

To get your root password run:
   kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:
Run an Ubuntu pod that you can use as a client:
   kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

Install the mysql client:
   $ apt-get update && apt-get install mysql-client -y

Connect using the mysql cli, then provide your password:
$ mysql -h lucky-wildebeest-mysql -p

These notes are the basic documentation a user needs to use the actual application. Now let's see how we put it all to use.
Connect to mysql
The first lines of the notes make it seem deceptively simple to connect to MySql:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local
Before you can do anything with that information, however, you need to do two things: get the root password for the database, and get a working client with network access to the pod hosting it.
Get the mysql password
Most of the time, you'll be able to get the root password by simply executing the code the developer has left you:
$ kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
DBTzmbAikO
Some systems, notably macOS, will give you an error:
$ kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
Invalid character in input stream.
This is because of an error in base64 that adds an extraneous character. In this case, you will have to extract the password manually.  Basically, we're going to execute the same steps as this line of code, but one at a time.
Start by looking at the Secrets that Kubernetes is managing:
$ kubectl get secrets
NAME                     TYPE                                  DATA      AGE
default-token-0q3gy      kubernetes.io/service-account-token   3         145d
lucky-wildebeest-mysql   Opaque                                2         20m
It's the second, lucky-wildebeest-mysql, that we're interested in. Let's look at the information it contains:
$ kubectl get secret lucky-wildebeest-mysql -o yaml
apiVersion: v1
data:
 mysql-password: a1p1THdRcTVrNg==
 mysql-root-password: REJUem1iQWlrTw==
kind: Secret
metadata:
 creationTimestamp: 2017-03-16T20:13:50Z
 labels:
   app: lucky-wildebeest-mysql
   chart: mysql-0.2.5
   heritage: Tiller
   release: lucky-wildebeest
 name: lucky-wildebeest-mysql
 namespace: default
 resourceVersion: "43613"
 selfLink: /api/v1/namespaces/default/secrets/lucky-wildebeest-mysql
 uid: 11eb29ed-0a85-11e7-9bb2-5ec65a93c5f1
type: Opaque
You probably already figured out where to look, but the developer's instructions told us the raw password data was here:
jsonpath="{.data.mysql-root-password}"
So we're looking for this:
apiVersion: v1
data:
 mysql-password: a1p1THdRcTVrNg==
 mysql-root-password: REJUem1iQWlrTw==
kind: Secret
metadata:

Now we just have to go ahead and decode it:
$ echo "REJUem1iQWlrTw==" | base64 --decode
DBTzmbAikO
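If you'd rather sidestep the shell base64 quirk entirely, the same decode is a one-liner in, say, Python's standard library (shown here with the encoded value from this example's Secret):

```python
# Decode the mysql-root-password field from the Secret; equivalent to
# `echo "REJUem1iQWlrTw==" | base64 --decode` but without the shell quirk.
import base64

encoded = "REJUem1iQWlrTw=="  # value of data.mysql-root-password above
password = base64.b64decode(encoded).decode("utf-8")
print(password)  # DBTzmbAikO
```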
Finally!  So let's go ahead and connect to the database.
Create the mysql client
Now we have the password, but if we try to just connect with the mysql client on any old machine, we'll find that there's no connectivity outside of the cluster.  For example, if I try to connect with my local mysql client, I get an error:
$ ./mysql -h lucky-wildebeest-mysql.default.svc.cluster.local -p
Enter password:
ERROR 2005 (HY000): Unknown MySQL server host 'lucky-wildebeest-mysql.default.svc.cluster.local' (0)
So what we need to do is create a pod on which we can run the client.  Start by creating a new pod using the ubuntu:16.04 image:
$ kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never

$ kubectl get pods
NAME                                      READY     STATUS             RESTARTS   AGE
hello-minikube-3015430129-43g6t           1/1       Running            0          1h
lucky-wildebeest-mysql-3326348642-b8kfc   1/1       Running            0          31m
ubuntu                                   1/1       Running            0          25s
When it's running, go ahead and attach to it:
$ kubectl attach ubuntu -i -t

Hit enter for command prompt
Next install the mysql client:
root@ubuntu2:/# apt-get update && apt-get install mysql-client -y
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]

Setting up mysql-client-5.7 (5.7.17-0ubuntu0.16.04.1) …
Setting up mysql-client (5.7.17-0ubuntu0.16.04.1) …
Processing triggers for libc-bin (2.23-0ubuntu5) …
Now we should be ready to actually connect. Remember to use the password we extracted in the previous step.
root@ubuntu2:/# mysql -h lucky-wildebeest-mysql -p
Enter password:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 410
Server version: 5.7.14 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
Of course you can do what you want here, but for now we'll go ahead and exit both the database and the container:
mysql> exit
Bye
root@ubuntu2:/# exit
logout
So we've successfully installed an application, in this case MySQL, using Helm.  But what else can Helm do?
Working with revisions
So now that you've seen Helm in action, let's take a quick look at what you can actually do with it.  Helm is designed to let you install, upgrade, delete, and roll back revisions. We'll get into more details about upgrades in a later article on creating charts, but let's quickly look at deleting and rolling back revisions.
First off, each time you make a change with Helm, you're creating a Revision.  By deploying MySQL, we created a Revision, which we can see in this list:
NAME               REVISION  UPDATED                   STATUS    CHART          NAMESPACE
lucky-wildebeest   1         Sun Mar 19 22:07:56 2017  DEPLOYED  mysql-0.2.5    default
operatic-starfish  2         Thu Mar 16 17:10:23 2017  DEPLOYED  redmine-0.4.0  default
As you can see, we created a revision called lucky-wildebeest, based on the mysql-0.2.5 chart, and its status is DEPLOYED.
We could also get back the information we got when it was first deployed by getting the status of the revision:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 22:07:56 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                 TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     43m

==> v1/PersistentVolumeClaim
NAME                 STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-08e0027a-0d12-11e7-833b-5ec65a93c5f1  8Gi       RWO          43m

Now, if we wanted to, we could go ahead and delete the revision:
$ helm delete lucky-wildebeest
Now if you list all of the active revisions, it'll be gone.
$ helm ls
However, even though the revision is gone, you can still see the status:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 22:07:56 2017
NAMESPACE: default
STATUS: DELETED

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local

To get your root password run:

   kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:

Run an Ubuntu pod that you can use as a client:

   kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

Install the mysql client:

   $ apt-get update && apt-get install mysql-client -y

Connect using the mysql cli, then provide your password:

$ mysql -h lucky-wildebeest-mysql -p
OK, so what if we decide that we've changed our mind, and we want to roll back that deletion?  Fortunately, Helm is designed for that.  We can specify that we want to roll back our application to a specific revision (in this case, 1).
$ helm rollback lucky-wildebeest 1
Rollback was a success! Happy Helming!
We can see that the application is back, and the revision has been incremented:
NAME              REVISION  UPDATED                   STATUS    CHART        NAMESPACE
lucky-wildebeest  2         Sun Mar 19 23:46:52 2017  DEPLOYED  mysql-0.2.5  default

We can also check the status:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 23:46:52 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                 TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     21m

==> v1/PersistentVolumeClaim
NAME                 STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-dad1b896-0d1f-11e7-833b-5ec65a93c5f1  8Gi       RWO          21m

Next time, we'll talk about how to create charts for Helm.  Meanwhile, if you're going to be at Kubecon, don't forget Maciej Kwiek's talk on Boosting Helm with AppController.
The post Using Kubernetes Helm to install applications appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Come prove your building skills with WebSphere

Remember how awesome your building block structures were as a kid? Sure, others followed the instructions and built “by-the-book” pieces, but not you. Your designs were completely original, and therefore the greatest contraptions of all, complete with trap doors and aircraft vehicles that could transform into submarines.
You didn’t know this then: that life has led you to a career in microservices. That’s why you have what it takes to build the best original building block creation of all time.
You just have to prove it. Here’s how: Visit the IBM WebSphere booth at the concourse during InterConnect 2017 to participate in our WebSphere microservices building contest.
Wait, what are microservices?
Glad you asked. Microservices are an architecture style in which developers can build large, complex software applications using many small components known as microservices. They are independently deployable and loosely coupled, making each service easier to develop, deploy and scale. These characteristics are why microservices architectures are gaining traction for developing and delivering cloud-native workloads across public, private, and hybrid cloud application environments.
That’s great, but what do microservices have to do with toy building blocks?
The toy building blocks represent a microservices architecture, which is, by nature, composed of loosely coupled, smaller pieces. That makes each service easier to scale, and it lets teams develop and deploy services independently. IBM WebSphere Application Server makes it easy to build and deploy these microservices across any cloud environment.
How does the contest work?

Starting Monday, 20 March, come to the IBM WebSphere booth at the concourse anytime during concourse hours to build your original creation. All toy building blocks will be provided. You can come back as often as you’d like — during concourse hours, no sneaking in at midnight — to make modifications to your structure.
The contest will close at 1 PM PT on Wednesday, 22 March; no building will happen after this time. From there, our panel of judges will decide the winners. Creations will be judged on originality, may only use the toy building blocks provided, must fit within an 18”x18” space and must be no taller than four feet.
The winners will be announced starting at 3 PM on Wednesday, 22 March on the open mic stage at the IBM WebSphere booth. The decisions of the contest judges are final. No takebacks.
Let’s get to what’s important — what can I win?
Our top four winners will receive two floor seat wristbands to Wednesday night’s Zac Brown Band concert.
Social Superstar Award
You don’t have to be a building master to win prizes. Instead, you can be our Social Superstar and still win two floor seat wristbands to Zac Brown Band. Here’s how to win: post a picture of your creation using our hashtag. Your name will be added to a drawing for the Social Superstar prize each time you do.
Bring your A-game, originality, and creativity and come visit the IBM WebSphere booth during Concourse hours:
Monday, 20 March:  5 PM – 7 PM PT
Tuesday, 21 March: 11 AM – 7:30 PM PT
Wednesday, 22 March: 9 AM – 5 PM PT (contest closes at 1 PM PT)
The post Come prove your building skills with WebSphere appeared first on news.
Source: Thoughts on Cloud

IBM Cloud, HyTrust and Intel cloud offering helps ensure security and data compliance

New regulations call for new solutions.
On 25 May, 2018, the new General Data Protection Regulation (GDPR) goes into effect in the European Union (EU), with sharper teeth than any compliance regulation to date. With tighter controls and higher penalties, the new law enforces data sovereignty like never before, forever changing the way EU and multinational organizations handle private data. GDPR will likely set a new standard that other regulatory bodies will be compelled to follow.
The impact of GDPR is broader than one may think. It applies to any organization that does business in the EU. Companies must ensure data sovereignty and be able to provide the exact location of a client’s data at any point in time, and they must keep that data within specific geographic limits. The penalty for noncompliance is a staggering fine of up to 4 percent of overall worldwide corporate revenue.
This restriction will have many companies reconsidering their approach to data sovereignty and how they store sensitive customer data in the region. That may include seeking cloud-based solutions for coverage in regions where their own data centers are absent or would require upgrades to meet GDPR requirements.

Barriers to cloud adoption
The rapidly approaching compliance changes bring potential concerns for organizations looking to use the agility, scalability and efficiency of the cloud. As it stands, many cloud providers aren’t prepared for GDPR compliance and may not have the infrastructure needed to meet the new requirements. According to the June 2016 Netskope Cloud Report on readiness in the cloud, up to 75 percent of all apps used in enterprises are out of compliance with these impending rules.

When moving to a cloud environment, security and compliance of sensitive data is ultimately the organization’s responsibility. So how can organizations make use of all of the great benefits that come from cloud infrastructure without putting their sensitive data at risk and their auditors on high alert? They must implement security protocols such as policy tagging, privileged access controls, automated compliance templates, forensic logging, data geo-fencing, encryption and key management. Beyond just “checking the box,” security solutions should be easy to deploy, flexible and scalable.
Simplifying the path to cloud adoption
The challenges can seem insurmountable, but they don’t have to be.
IBM is proud to announce IBM Cloud Secure Virtualization, which is specifically focused on addressing enterprise security and compliance concerns. Built on single-tenant IBM Bluemix bare-metal servers on IBM Cloud, it is the first cloud offering to leverage HyTrust and Intel TXT security technologies for compliance: it tags workloads and enforces set policies, and it offers forensic logging, low-latency encryption (with Intel AES-NI) and key management. Enabled by Intel TXT, it uses geo-fencing at the microchip level to ensure workload integrity and contain workloads within geographic boundaries. This ensures a client’s data is where it’s required to be and can’t be accessed by those without appropriate credentials.
IBM Cloud Secure Virtualization eases the path to cloud adoption with automation that ranges from deployment to ongoing management, supporting security policies and meeting compliance requirements – all with continuous visibility and control of the cloud environment.
IBM, HyTrust and Intel have teamed up to develop and launch this unique offering to deliver security and compliance in the cloud, addressing concerns and facilitating organizations’ adoption of the cloud and its inherent benefits. IBM Cloud Secure Virtualization will be offered in two different options, both focused on creating a secure, trusted environment for running production workloads, protecting client data and reducing audit risk.
It offers the agility and benefits of cloud while spanning many important verticals. Organizations can protect various types of PII data across healthcare, financial and retail segments. With the reporting capability offered by the HyTrust DataControl and CloudControl features, organizations have visibility and documentation of their environment status, thus reducing overall risk.
IBM Cloud has built a strong partnership with Intel and HyTrust to bring a comprehensive solution that not only reduces the barriers to cloud adoption, but does so with additional capabilities that help organizations meet GDPR requirements, as well as HIPAA, PCI and more.
Learn more about the partnership and offering.
The post IBM Cloud, HyTrust and Intel cloud offering helps ensure security and data compliance appeared first on news.
Source: Thoughts on Cloud

Redefine digital productivity: Announcing IBM Digital Business Assistant

Understanding information is more important than ever, but many of the tools organizations use aren’t exactly built for this complex and changing digital age.
High-value employees are overloaded with:

Data, scattered across many tools, including enterprise applications (enterprise resource planning [ERP], customer relationship management [CRM], and support systems), spreadsheets, calendars, apps, social media and email
Endless routine tasks, such as sifting through email clutter, adding countless action items to various lists, finding information on demand and keeping up with updates to collaboration channels
Constant interruptions, such as meetings, phone calls, instant messages and other endless requests that ruin to-do lists

Workers aren’t imagining being spread too thin; it’s really happening. The McKinsey Global Institute estimates that by 2020, there is likely to be a shortage of approximately 40 million high-skilled workers and 45 million medium-skilled workers.
At the same time, IDC reports that digital data is expected to surge to 160 zettabytes by 2025. With a shortage of workers to manage this mounting data, employees need to work smarter.
To help make that happen, IBM is announcing a solution: IBM Digital Business Assistant. It’s a customizable, intelligent personal assistant that integrates with the existing data sources that matter to users, helping to optimize productivity. Powered by analytics and IBM Watson, the digital assistant can proactively detect complex situations, integrate information from diverse sources, and make smart and actionable recommendations based on context.
While many digital assistants are difficult to use and configure or have trouble handling complex demands, IBM Digital Business Assistant is different.
Ease of use
IBM Digital Business Assistant empowers business users to rapidly define the complex business situations that the personal assistant can understand. It can then automatically detect and respond to those situations without the need for IT intervention.
Users can configure the tool to either take immediate action or recommend next steps, saving workers hours of sifting through information across numerous tools. Improved efficiency, better decisions and better customer experiences are some of the potential benefits.

Customized by users, for users
Many productivity tools focus on improving collaboration and organizing information, but these capabilities alone aren’t customized for a specific user’s key performance indicators and preferred tools and processes for working.
IBM Digital Business Assistant enables employees to integrate information from sources they use and customize actions and notifications. The tool even learns their interactions and makes proactive recommendations based on context.
A range of connectors will be available in IBM Digital Business Assistant.

Pre-built skills to accelerate adoption
IBM Digital Business Assistant helps workers easily build on productivity assets created by others through a catalog of customizable, pre-built skills. This helps users get started quickly, without relying on IT. Because this capability lends itself to easy scalability, it’s particularly useful for departments and business partners.
Here’s an example of how it works:
Scenario: Alice is a relationship manager for dozens of customers. She’s faced with an overwhelming amount of data in disparate locations, including new sales opportunities, trouble tickets, product logs, CRM updates and more. Depending on the relationship between these pieces of information, she could need to take any one of dozens of actions.
The IT department can’t configure and reconfigure a personalized solution to help her recognize important events, automate responses and advise her on best courses of action. But now she can do it herself with IBM Digital Business Assistant.
Alice goes into the IBM Digital Business Assistant catalog, where she sees a pre-built skill for “Spot Opportunities.”

She clicks to personalize this skill for the customers in her territory. Alice wants to be notified when one of her customers has a new cross-sell opportunity but also has existing support tickets that could affect the success of that sale.

IBM Digital Business Assistant will now notify Alice when it detects this situation, saving her from manually synthesizing the information.

Sign up to participate in the free beta version of IBM Digital Business Assistant and send us your feedback.
And be sure to check us out at IBM InterConnect at booth , starting 20 March, 2017.
The post Redefine digital productivity: Announcing IBM Digital Business Assistant appeared first on news.
Source: Thoughts on Cloud