Check Out Rob Szumski’s Keynote on the Kubernetes Operator Community at KubeCon Barcelona

Check out this great keynote talk from our very own Rob Szumski at this past week's KubeCon Barcelona. This is hot off the presses, so to speak, as the CNCF has been uploading all of the amazing talks from the show to its YouTube page today. Hope this keeps you occupied over the long […]
The post Check Out Rob Szumski’s Keynote on the Kubernetes Operator Community at KubeCon Barcelona appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Walkthrough: Customizing the VM PostProvision Task

Red Hat CloudForms provides several ways to customize virtual machine provisioning. The out-of-the-box VM Provisioning State Machine has multiple steps through which a VM provisioning request passes; one of those steps is PostProvision, which is used to perform post-provisioning activities on the provisioned virtual system. In this article, I will explain how to customize the PostProvision step using an "add an additional disk to the VM" use case.

Use case
The VMware template from which virtual machines are to be provisioned has two disks attached. We would like to use this template to provision virtual machines, but with an option asking users whether they would like an additional disk added to the provisioned machine; if a user opts in, they should also be able to set the size of that additional disk.
Upon successful VM provisioning, there should be a total of three disks attached to the virtual machine, with the third disk's size chosen by the user.
Implementing the use case

In order to implement this use case, several modifications and customizations are needed. Here is the list of actions that need to be performed.

Create a service catalog with an option for end users to choose whether an additional disk should be attached to the provisioned virtual machine; if they select yes, prompt them to enter the disk size.
Create a method, in either Ruby or an Ansible playbook, that will add the additional disk to the provisioned VM.
Extend the default provisioning state machine and add that method as part of the state machine.

Let's walk through the procedure step by step.
 
STEP 1: DIALOG CREATION
 
In order to create a service catalog, first you need to create a service dialog by navigating to Automation -> Automate -> Customization -> Service Dialog.

In this dialog, the end user will enter the VM name and the size of the disk (in GB), and check the "additional disk required" checkbox; if the checkbox is not checked, no additional disk will be added.
 
Things to consider while passing dialog information:
 
a) VM Name: this information is passed so that the newly provisioned VM is given the name entered by the end user, letting us assign names to VMs at provisioning time.

Here the Label can be anything, but the Name must be "vm_name", because vm_name is a key name recognized by CatalogItemInitialization.
b) Size of Disk: In this box, the end user enters the disk size in GB.

Here, the Label is for end-user visibility, but we should be concerned with the value passed in the Name box. This value is used later in our Ruby method.
 
c) Additional Disk Required: This is a checkbox; only if it is checked by the end user will the new additional disk be added, otherwise the step is skipped.

Again, the Label is for end-user visibility, but we should be concerned with the value passed in the Name box. This value is used later in our Ruby method.
 
Once the dialog is prepared, let's go to the next step: creating the Ruby method.
 
STEP 2: METHOD CREATION
 
A Ruby method is created that will add a disk to the newly provisioned VM.

In the above method, the VM information is fetched from the "prov" object, which persists until provisioning completes. The prov object could be a provisioning request, a provisioning task, or a template object, i.e.
 
prov = $evm.root['miq_provision_request'] || $evm.root['miq_provision'] || $evm.root['miq_provision_request_template']
 
It is $evm.root that makes the value persistent, so the value carries over from one state to the next in the provisioning state machine.
 
Now, to add an additional disk to the newly provisioned VM, you can find the VM object from the prov object, as this object has associations to other objects defined in the automate service models ("vmdb/lib/miq_automation_engine/service_models").
 
For example, in our method we find the vm object as
 
vm = prov.vm (vm has a direct association with the prov object, as defined in the automate model)
 
It will first validate whether "additional disk required" is checked. For that, we first need to fetch the dialog value for "check_additional_disk", like below:
 
additional_disk = prov.get_option(:dialog_check_additional_disk)
 
It will return either "t" or "f": "t" means the checkbox is checked, "f" means it is not.
If it is checked, the add-disk method will be executed.
 
To add the additional disk, the "addDisk" method is used on the VM object. This method expects the following parameters:
 

Storage name
Size in MB
Optional values, such as:
Dependent disk
Thin provisioned

 
In the above method, I fetched the size value dynamically from the end user's input, i.e.
 
size = prov.get_option(:dialog_size).to_i
 
During service dialog creation, the Name value for the disk size field is "size"; to fetch it we need to prefix it with "dialog", which is why it is retrieved as :dialog_size.
 
And the storage name is chosen to be the same datastore where the VM resides, i.e. vm.storage_name.
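Putting the pieces above together, the complete method might look like the following. This is a minimal sketch, not the exact method from the screenshots: the stub classes stand in for the real $evm service-model objects so the logic can be followed outside CloudForms, and the thin_provisioned option and "datastore1" name are illustrative assumptions.

```ruby
# Hedged sketch of the add_disk automate method. In CloudForms, the prov and
# vm objects come from $evm ($evm.root['miq_provision'] etc.); the stubs
# below stand in for them so the control flow is runnable on its own.

# Stub of the provisioning task: get_option reads dialog values.
class StubProv
  attr_reader :vm
  def initialize(options, vm)
    @options = options
    @vm = vm
  end

  def get_option(key)
    @options[key]
  end
end

# Stub of the VM service-model object: records disks added via addDisk.
class StubVm
  attr_reader :storage_name, :disks
  def initialize(storage_name)
    @storage_name = storage_name
    @disks = []
  end

  def addDisk(storage_name, size_in_mb, options = {})
    @disks << { storage: storage_name, size_mb: size_in_mb }.merge(options)
  end
end

# The logic described in this article: only add the disk when the checkbox
# dialog value is "t", sizing it from the dialog_size field.
def add_disk_if_requested(prov)
  vm = prov.vm
  return unless prov.get_option(:dialog_check_additional_disk) == 't'

  size_gb = prov.get_option(:dialog_size).to_i
  # addDisk expects the size in MB; place the disk on the VM's datastore.
  vm.addDisk(vm.storage_name, size_gb * 1024, thin_provisioned: true)
end

vm   = StubVm.new('datastore1')   # illustrative datastore name
prov = StubProv.new({ dialog_check_additional_disk: 't',
                      dialog_size: '10' }, vm)
add_disk_if_requested(prov)
puts vm.disks.first[:size_mb]     # a 10 GB disk, recorded in MB
```

If the checkbox value comes back as "f", the guard clause returns early and no disk is added, which is exactly the skip behavior described for the dialog.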
 
Once the method is created, you can validate it. If it validates successfully, save it and create an instance for your add_disk method, just like below:

After instance creation, the next step is to extend the provisioning state machine.
 
STEP 3: EXTENDING STATE MACHINE
 
The first thing that we must do is copy the ManageIQ/Infrastructure/VM/Provisioning/StateMachines/VMProvision_VM/Provision VM from Template (template) state machine instance into our own custom domain so that we can edit the schema.
 
Now we edit the schema of the copied class:

Add the new state, AddDisk. Make sure it is added after the PostProvision state so that the VM is provisioned before the additional disk is added.

STEP 4: ADD OUR NEW INSTANCE TO THE COPIED STATE MACHINE
Now we edit our copied Provision VM from Template state machine instance to add the AddDisk instance URI to the appropriate step (see Adding the instance URIs to the provisioning state machine).
 
STEP 5: ORDER THE SERVICE CATALOG TO PROVISION A NEW VM WITH AN ADDITIONAL DISK

Here we see the additional disk attached to the virtual machine. Our modified VM provisioning workflow has been successful.
 
Wrapping it up
In this walkthrough, we demonstrated how the VM provisioning state machine can be extended to perform custom post-provisioning tasks. We used the add-disk example to demonstrate this flexibility, but you can extend it to suit your own requirements: taking a snapshot of the VM immediately upon provisioning, registering the virtual machine with a third-party tool, and so on. The use cases are virtually unlimited.
 
Source: CloudForms

Mirantis Introduces Bring-Your-Own-Distribution Support for Kubernetes

The post Mirantis Introduces Bring-Your-Own-Distribution Support for Kubernetes appeared first on Mirantis | Pure Play Open Cloud.
The company will offer SLA-backed support to enterprise development teams that choose to work with conformant and vendor-neutral distributions of Kubernetes
KubeCon Europe, Barcelona, Spain, May 22, 2019 — Today, Mirantis announced Mirantis
Enterprise Support for Kubernetes, a “Bring-Your-Own-Distro” (BYOD) support offering for brownfield Kubernetes implementations.
“The idea of monetizing open source software through an opinionated, pre-packaged distribution is a construct of the IT-driven world that we lived in 20 years ago,” said Boris Renski, Mirantis co-founder and CMO. “Today we live in the developer-driven world and Kubernetes is built for developers first, and IT second. Developers don’t need a third party vendor to push a pre-packaged, opinionated Kubernetes their way. All they need is occasional high-quality support from open source software experts. This is what we aim to deliver with BYOD Kubernetes support.”
The new support option enables customers to use any conformant Kubernetes distribution and complementary technology, as long as it complies with general constraints outlined in the Mirantis service agreement.
BYOD support is a precursor to the Kubernetes-as-a-Service (KaaS) software that Mirantis will be demonstrating at KubeCon, currently in beta. Mirantis KaaS can be used to orchestrate brownfield K8s clusters and will address key challenges with running Kubernetes on-premises with pure open source software, including:
Distribution-agnostic K8s cluster management capabilities utilizing Cluster API and Kubespray, with self-service API and web-based UI;
Control and delegate access to K8s clusters and namespaces using existing Identity Providers, with IAM integration based on Keycloak;
Backend-agnostic load balancing and storage capabilities for K8s through integration with OpenStack Octavia and Cinder APIs;
Native integration with Istio service mesh and Harbor image registry.
About Mirantis
Mirantis helps enterprises and telcos address key challenges with running Kubernetes on-premises with pure open source software. The company employs a unique build-operate-transfer delivery model to bring its flagship product, Mirantis Cloud Platform (MCP), to customers. MCP features full-stack enterprise support for Kubernetes and OpenStack and helps companies run optimized hybrid environments supporting traditional and distributed microservices-based applications in production at scale.
To date, Mirantis has helped more than 200 enterprises and service providers build and operate some of the largest open clouds in the world. Its customers include iconic brands such as Adobe, Comcast, Reliance Jio, State Farm, STC, Vodafone, Volkswagen, and Wells Fargo. Learn more at www.mirantis.com.
###
Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com
Source: Mirantis

Highlights Video of Red Hat Summit Keynotes

If you missed Red Hat Summit, you should not despair: we’ve compiled a highlights video that captures the breadth and depth of what’s happening in the Red Hat OpenShift Ecosystem and beyond. From Microsoft CEO Satya Nadella, to IBM CEO Ginni Rometty, the keynotes at the show demonstrated the widespread support and enthusiasm Red Hat […]
The post Highlights Video of Red Hat Summit Keynotes appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Software Defined Storage: The Next Killer App for Cloud

  It’s never going to be possible to completely disconnect software from hardware. Indeed, hardware development is having a bit of a rebirth as young developers rediscover things like the 6502, homebrew computing, and 8-bit assembly languages. If this keeps going, in 20 years developers will reminisce fondly and build hobby projects in early IoT […]
The post Software Defined Storage: The Next Killer App for Cloud appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Intro to NoSQL database apps, Part 1: Everything you ever wanted to know but were afraid to ask

The post Intro to NoSQL database apps, Part 1: Everything you ever wanted to know but were afraid to ask appeared first on Mirantis | Pure Play Open Cloud.
When a company is small — say, just a few people, not
much revenue — it’s natural to keep track of everything in simple spreadsheets or
user-friendly databases such as Microsoft Access.  As you grow, you realize that
your needs go beyond what Access was designed for, and you start thinking about large
commercial databases such as SQL Server, or Oracle, or one of the open source databases
such as MySQL.
But eventually, when you get big enough, you discover
that even those databases aren’t beefy enough to handle the job.  Maybe it’s the
type of data, or the volume of data, or your new cloud-based architecture, but you
realize it’s time.
You need to start learning about NoSQL.
Can't wait to jump right in?  Join me for a crash course on NoSQL and Cassandra.
NoSQL originally meant that literally: “No Structured
Query Language”.  These days, though, it actually means “Not Only Structured Query
Language”, because a NoSQL database tends to have much broader applications and
flexibility than RDBMS.
There are more than 225 different NoSQL databases,
including the better-known open source projects such as Cassandra, Redis, and etcd,
cloud-based versions such as Amazon Web Services DynamoDB, and proprietary products
such as Oracle NoSQL. They're all different, but they share various traits.
In this article, we’ll discuss what makes them different
from traditional relational databases and why you might want to take advantage of
them.

What is NoSQL and how is it
different from RDBMS
The most obvious way that a NoSQL database differs from
traditional relational databases is in the structure of the data.  An RDBMS
consists of tables of data structured according to a schema:

These tables are usually “normalized”, meaning that
specific pieces of data appear only once, and they’re linked together by keys.  So
if, for example, you wanted to see all of Alice’s skills, you could do it with an SQL
query:

SELECT EMPLOYEES.name, SKILLS.skill_name FROM SKILLS, EMPLOYEES WHERE SKILLS.emp_id = EMPLOYEES.id AND EMPLOYEES.name = 'Alice'

This kind of query brings the data from both tables
together using a “join”.
Non-relational, or NoSQL databases, are different, in
that there is no defined schema. Data is added and defined as it comes in, much as
relationships are created and modified on the fly in an object oriented system, rather
than being pre-defined in a dictionary.  The structure of the data doesn’t matter.

Well, almost.
In fact, there are four different types of NoSQL
databases:

Key-value stores:  Key-value
stores, such as Redis and etcd, do exactly what it says on the tin; they store a
value associated with a key.  This might be something as simple as:
NewLymeOH = 44047
StatenIslandNY = 10314
BostonMA = 02134

Or it might be something more complicated, with
parameterized keys such as

Employee:1:firstname = Nick
Employee:1:lastname = Chase
Employee:2:firstname = Buddy
Employee:2:lastname = Rich

Wide Column: Wide Column databases,
such as Cassandra, are like a cross between a RDBMS and a key-value store, in that
they do have tables, and the tables consist of rows, but each row can have
different columns:

Sometimes these databases will
support a query language similar to the SQL used with relational databases, but not
always.

Document: Document databases, such as MongoDB, store each entity as a single document
within the database. Like Wide Column databases, each document can have a completely
different structure, which is often represented as JSON.
Graph:  Graph databases, such as Neo4J, concentrate not so much on the elements of
the data itself but on the relationships between those elements.  They’re built
on the concept of nodes or entities, analogous to a row in a table, properties, or
information about each node, and edges, or relationships between nodes.
(Ironically, a Graph database is usually persisted using a Key-Value store, but
some actually use an RDBMS as their persistence layer.)
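To make the first two shapes concrete, here is a small Ruby sketch that mimics a key-value store and a document store with plain hashes. This is purely illustrative: the data and field names are invented, and a real store (Redis, MongoDB) adds persistence, networking, and indexing on top of these same basic shapes.

```ruby
require 'json'

# A key-value store is, at its core, a hash: one opaque value per key.
# The parameterized-key convention above encodes structure into the key itself.
kv = {
  'Employee:1:firstname' => 'Nick',
  'Employee:1:lastname'  => 'Chase',
  'Employee:2:firstname' => 'Buddy',
  'Employee:2:lastname'  => 'Rich'
}
puts kv['Employee:1:firstname']   # => Nick

# A document store keeps each entity as one self-describing document,
# typically represented as JSON; documents need not share a structure.
documents = [
  { 'name' => 'Alice', 'skills' => ['SQL', 'Ruby'] },         # two fields
  { 'name' => 'Bob', 'title' => 'DBA', 'office' => 'Tampa' }  # different fields
]
puts JSON.generate(documents.first)
```

Notice that nothing forces Bob's document to carry a "skills" field, or Alice's a "title": that is the schemaless property discussed below.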

But structure isn’t the only difference between SQL and
NoSQL databases. One of the most important differences has to do with consistency.
RDBMSs are defined by the acronym ACID, an initialism for:

Atomicity: Either all operations in a transaction succeed or all of them
fail.
Consistency:  Consistency means that the database will always be in a working state,
with all constraints and triggers satisfied.
Isolation:  Another defining property of transactions is that once one begins, none of
the changes are visible from outside of that transaction until the transaction is
committed.
Durability: Once
a transaction is committed, the data is saved in such a way that it will not be lost,
even if there’s a crash or power failure.

NoSQL databases, on the other hand, are defined by the
acronym BASE (because developers love a good pun), which stands for:

Basically Available: NoSQL databases are architected to be highly available, with no single
point of failure; even if a node goes down, the database will still be
operational.
Soft state:  The
state of a NoSQL database can change without affecting the availability of the
system.
Eventual consistency:  A NoSQL database can accept a transaction even if it takes time —
usually on the order of milliseconds — for all nodes to reach a consistent
state.
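Eventual consistency can be sketched in a few lines of Ruby. The toy "cluster" below acknowledges a write at one node immediately and brings the other nodes up to date in a separate replication step, the way a real NoSQL database would do asynchronously; the node names and replication mechanics here are invented for illustration only.

```ruby
# Toy model of eventual consistency: a write is acknowledged by a single
# node right away, and the remaining replicas converge when replication runs.
class ToyCluster
  def initialize(node_names)
    @nodes = node_names.to_h { |n| [n, {}] }   # one key-value store per node
    @pending = []                              # writes awaiting replication
  end

  # Accept the write at one node and acknowledge immediately.
  def write(node, key, value)
    @nodes[node][key] = value
    @pending << [key, value]
    :ack
  end

  def read(node, key)
    @nodes[node][key]
  end

  # In a real system this happens asynchronously, typically milliseconds later.
  def replicate!
    @pending.each do |key, value|
      @nodes.each_value { |store| store[key] = value }
    end
    @pending.clear
  end
end

cluster = ToyCluster.new(%w[node-a node-b node-c])
cluster.write('node-a', 'color', 'blue')
puts cluster.read('node-b', 'color').inspect   # nil: not yet consistent
cluster.replicate!
puts cluster.read('node-b', 'color')           # blue: the cluster converged
```

The window between the write and the replication step is exactly the "eventual" in eventual consistency: the database stays available throughout, at the cost of briefly serving stale reads.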

There’s no need to change over to a NoSQL database all
at once; the two types can coexist quite nicely using a paradigm called polyglot
persistence. But why would you even want to consider it in the first place?

Why you’d want to use a NoSQL database
There are lots of reasons you might find yourself
considering one of the various NoSQL databases, including scalability, cost,
performance, and flexibility.
In a high-impact environment, data streams and
feeds can operate too quickly to allow for traditional transaction execution, which
requires a commit and flush to the database in order to make entries permanent.
NoSQL databases, on the other hand, are designed to hold entries in memory and
persist them when storage has had time to catch up, enabling you to be more "run and
gun" than an RDBMS.
NoSQL databases are designed so that they can be scaled
horizontally, which means that you can start with a single small server and scale up —
or down — as you need to, for a true Cloud native architecture. For example, let’s say
you know from the beginning that you want high availability, so you start with three
small servers to satisfy your requirement for redundancy.
These databases enable the system to continue
functioning should one or more nodes go down — unlike RDBMS, where the failure of a
single drive or server can bring down the entire application.  This means that if
usage begins to outstrip the capacity of your three servers, you can add more, and in
general they will be auto-discovered by the cluster and the data populated to the new
nodes.
On the other hand, when usage goes down, you can shut
down those additional nodes, and because the failure or disappearance of an individual
node doesn’t affect the performance of the system, your application keeps on
going.
Contrast this with a traditional RDBMS, which (usually)
can only scale upwards, meaning that to get better performance, you need a larger
machine.  As a result, you spend the vast majority of your time in one of two
states: either your machine is too small for the traffic you’re getting and your users
are getting a poor experience, or your machine is too big for the traffic you’re
getting and you’ve got spare capacity sitting around, wasting money.
As far as performance is concerned, NoSQL databases are
designed for very large datasets, and as a result, usually perform faster.  In
addition, many, such as Redis, are in-memory databases, which improves performance even
more.
Finally, there's the NoSQL advantage itself: in systems
with huge amounts of data, getting locked into a particular schema can cause enormous
problems later on.  Sure, you can always change the schema later, but for large
systems this can be extremely dangerous and time-consuming, before you even consider
the changes to the application that depend on that database. The ability to have a
"schemaless" system provides a number of benefits, including:

NoSQL databases are a good choice when speed
and simplicity are more important than the ability to do transactions or immediate
consistency.
You can store unstructured or differently
structured data, because you don't have to define a schema for every piece of data
that goes in. If you have large amounts of unstructured data, such as documents, this
means you can store it without alteration, leaving the original data
intact.
You can create a hierarchy of data that is
self-referential, or described by the data itself, enabling complicated structures
without complicated planning.

It’s important to remember, however, that not all NoSQL
databases are created equal.

Choosing the right NoSQL database
There is one significant drawback to NoSQL databases,
however. While RDBMSs are generally the same — SQL has been fairly standardized for
decades, and it’s usually a matter of simply changing drivers to change the database
behind your application — NoSQL databases are all different, and it’s important to
know what you want before committing to one over another.
There are four major differences between NoSQL
databases:

Data model: As
we discussed earlier, there are four different kinds of NoSQL databases. If you’re
primarily dealing with large documents, you’ll be better off with a Document-oriented
database such as MongoDB, Couchbase, or CouchDB. If you have simple key-value pairs,
obviously a key-value store such as Redis, etcd, or MemcacheDB is your best bet. If
your data is very SQL-like, a wide-column database such as Cassandra or HBase should
be your focus. Finally, if you are very interested in the relationships between
various pieces of data, you’ll want a graph database such as Neo4J.
Architecture: While NoSQL databases are generally architected for scalability, they’re
not all implemented in the same way. Some, like MongoDB, use the Master/Slave model
where a single node acts as the database of record and other nodes assist. Others,
like Cassandra or etcd, are masterless systems, in which every node is exactly the
same. Depending on how you intend to operate your system, this may matter to
you.
Data distribution
model:  How is the data synchronized? With some
NoSQL databases, all nodes are read-write, taking data and replicating it out to all
other nodes. This method is an advantage when the application frequently writes to
the database, as latency can be reduced by sending write operations to the closest
node. Others designate a single node to accept write operations, and that node
replicates the data to others to speed up reads. This can be beneficial in situations
in which the application seldom makes changes to the database, but when writes are
made, you want to make sure they’re captured quickly.
API: If you’re
coming from the RDBMS world, one thing that might surprise you is the lack of
standardization in the ways in which you interact with the data in each database.
Some databases require the use of a specific API, while others make a SQL-like
query language available, such as Cassandra’s CQL.

In addition, different types of NoSQL databases have
different strengths and weaknesses.  For example, key-value stores have high
performance and flexibility, but support only low complexity in the data. A graph
database can handle high complexity but has only moderate performance.
If you’re thinking that this leads to the functional
equivalent of "vendor lock-in", you're right.  So it is always a good idea to
thoroughly investigate before committing, even if that means doing a small proof of
concept first.
Next week, we'll be starting a project demonstrating the
use of these NoSQL databases, building, and eventually containerizing, an application
built on Cassandra.
Source: Mirantis

IBM Cloud Garage helps Grupo Planetun improve auto inspection app capabilities

Investopedia describes “insurtech” (the term inspired by its commonly known cousin, “fintech”) as the use of technology to create savings and efficiency in the insurance industry. Investopedia also suggests that the insurance industry is ripe for innovation and disruption.
At Grupo Planetun, we know this to be especially true in Brazil. In the Brazilian insurance market, only 30 percent of the automotive market, 10 percent of the housing market, and 2 percent of cell phones are insured.
Grupo Planetun is an insurtech company in Brazil poised to take advantage of this growth opportunity. We know the big insurance companies we serve need to reduce costs and improve operations, which is why they seek to partner with us.
Innovating the auto inspection process
In 2017, we developed our App Web de Vistoria Prévia, or Preview Web App, that enables image capturing for auto inspections online. When we released the first version of the application, the primary innovation was that the insured individual could take and submit photos rather than needing to drive somewhere or wait for an insurance representative to come to their location.
Today in Brazil the insurance inspection process takes an average of five days, beginning to end. With our application, images can be sent to the insurance company in an average of five and a half hours. This is a drastic reduction that is speeding overall inspection time.
Despite these gains, we learned by evaluating app use that 30 percent of customer photos submitted were not usable by insurance companies. For example, the photo might be diagonal, cropped incorrectly, or too dark. Or the customer might have submitted a selfie with the vehicle, which cannot be used for inspection.
We knew we needed to address the 30 percent of unusable photos, so we sought a way to provide immediate feedback to customers.
Infusing artificial intelligence into the app
We were introduced to IBM Watson offerings at Think Brasil in 2018. Following that introduction, we began to see how artificial intelligence (AI) could further the capabilities of our auto inspection app with image recognition.
We spent eight weeks with an IBM Garage team in São Paulo to automate our Preview Web App using the open source IBM Cloud Kubernetes service and Watson Visual Recognition on IBM Cloud. Now the solution can confirm or reject customer photos in real time.
The collaboration between the IBM team and our team of developers was crucial. In addition to our enhanced solution, we came away from our engagement with technical knowledge of the IBM Garage methodology for designing and building applications.
Shaking up the insurance market
Aside from the benefits of workflow transformation and user experience improvement, the project with the Garage team helped us reduce app management costs. By reducing the amount of unusable photos shared through the app, our team no longer needs to manually evaluate and flag those submissions.
Additionally, because the new version of Preview Web App is built on microservices and each system has its own API, we are free to offer our customers only what they need.
Through the Garage project, we saw that the agile methodology improved our workflow, so we adopted it internally in our organization as well. We came away from IBM Garage with technical knowledge about Watson, AI tools, image recognition and Kubernetes, all of which our developer team is replicating with our other employees.
By partnering with IBM, Grupo Planetun has brought radical change to the Brazilian insurance market. We are the first insurtech company to implement an image recognition methodology for insurance processes in Brazil and throughout South America. This is a major differentiator for our business and is driving company success.
The next step for Preview Web App will be to put the Watson Visual Recognition service to work sorting and pricing the amount of damage a vehicle has suffered in accident situations.
Read the case study for more details.
The post IBM Cloud Garage helps Grupo Planetun improve auto inspection app capabilities appeared first on Cloud computing news.
Source: Thoughts on Cloud