Lanka Bell and IBM team up to accelerate cloud in Sri Lanka

It just got easier for businesses, developers and government organizations in Sri Lanka to access all the benefits of cloud.
Telecommunications provider Lanka Bell and IBM announced a new agreement to offer public, private and hybrid IBM Cloud services in Sri Lanka, including workload migrations, disaster recovery and capacity expansion solutions. Services available will include infrastructure as a service (IaaS), platform as a service (PaaS), storage and virtual machines.
The offerings can be integrated using IBM Network Access Service solutions.
Lanka Bell hopes to "help enterprise customers in the country to embrace cloud offerings quickly and easily," said Prasad Samarasinghe, the company's managing director. Samarasinghe noted that the agreement extends a 20-year partnership between IBM and Lanka Bell.
Learn more about the Lanka Bell and IBM partnership in Lanka Business Online's full article.
Source: Thoughts on Cloud

More than 60 Red Hat-led sessions confirmed for OpenStack Summit Boston

This spring's 2017 OpenStack Summit in Boston should be another great and educational event. The OpenStack Foundation has posted the final session agenda detailing the entire week's schedule of events. Once again, Red Hat will be very busy during the four-day event, delivering more than 60 sessions, from technology overviews to deep dives on the OpenStack services for containers, storage, networking, compute, network functions virtualization (NFV), and much more.

As a Headline sponsor, we also have a full-day breakout room on Monday, where we plan to present additional product and strategy sessions. We will also have two keynote presenters on stage: President and CEO Jim Whitehurst, and Vice President and Chief Technologist Chris Wright.
To learn more about Red Hat's general sessions, look at the details below. We'll add the agenda details of our breakout soon. Also, be sure to visit us at our booth in the center of the Marketplace to meet the team and check out our live demonstrations. Finally, we'll have Red Hat engineers, product managers, consultants, and executives in attendance, so be sure to talk to your Red Hat representative to schedule an in-person meeting while there.
And in case you haven't registered yet, visit our OpenStack Summit page for a discounted registration code to help get you to the event. We look forward to seeing you in Boston this spring.
For more details on each session, click on the title below:
Monday sessions

Kuryr & Fuxi: delivering OpenStack networking and storage to Docker swarm containers
Antoni Segura Puimedon, Vikas Choudhary, and Hongbin Lu (Huawei)

Multi-cloud demo
Monty Taylor

Configure your cloud for recovery
Walter Bentley

Kubernetes and OpenStack at scale
Stephen Gordon

No longer considered an epic spell of transformation: in-place upgrade!
Krzysztof Janiszewski and Ken Holden

Fifty shades for enrollment: how to use Certmonger to win OpenStack
Ade Lee and Rob Crittenden

OpenStack and OVN - what's new with OVS 2.7
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

Federation with Keycloak and FreeIPA
Martin Lopes, Rodrigo Duarte Sousa, and John Dennis

7 "must haves" for highly effective Telco NFV deployments
Anita Tragler and Greg Smith (Juniper Networks, Inc.)

Containerizing OpenStack deployments: lessons learned from TripleO
Flavio Percoco

Project update - Heat
Rabi Mishra, Zane Bitter, and Rico Lin (EasyStack)

Tuesday sessions

OpenStack Telemetry and the 10,000 instances
Julien Danjou and Alex Krzos

Mastering and troubleshooting NFV issues
Sadique Puthen and Jaison Raju

The Ceph power show - hands-on with Ceph: Episode 2 - 'The Jewel Story'
Karan Singh, Daniel Messer, and Brent Compton

SmartNICs - paving the way for 25G/40G/100G speed NFV deployments in OpenStack
Anita Tragler and Edwin Peer (Netronome)

Scaling NFV - are containers the answer?
Azhar Sayeed

Free my organization to pursue cloud native infrastructure!
Dave Cain and Steve Black (East Carolina University)

Container networking using Kuryr - a hands-on lab
Sudhir Kethamakka and Amol Chobe (Ericsson)

Using software-defined WAN implementation to turn on advanced connectivity services in OpenStack
Ali Kafel and Pratik Roychowdhury (OpenContrail)

Don't fail at scale: how to plan for, build, and operate a successful OpenStack cloud
David Costakos and Julio Villarreal Pelegrino

Red Hat OpenStack Certification Program
Allessandro Silva

OpenStack and OpenDaylight: an integrated IaaS for SDN and NFV
Nir Yechiel and Andre Fredette

Project update - Kuryr
Antoni Segura Puimedon and Irena Berezovsky (Huawei)

Barbican workshop - securing the cloud
Ade Lee, Fernando Diaz (IBM), Dave McCowan (Cisco Systems), Douglas Mendizabal (Rackspace), Kaitlin Farr (Johns Hopkins University)

Bridging the gap between deploying OpenStack as a cloud application and as a traditional application
James Slagle

Real time KVM and how it works
Eric Lajoie

Wednesday sessions

Project update - Sahara
Telles Nobrega and Elise Gafford

Project update - Mistral
Ryan Brady

Bite off more than you can chew, then chew it: OpenStack consumption models
Tyler Britten, Walter Bentley, and Jonathan Kelly (Metacloud/Cisco)

Hybrid messaging solutions for large scale OpenStack deployments
Kenneth Giusti and Andrew Smith

Project update - Nova
Dan Smith, Jay Pipes (Mirantis), and Matt Riedermann (Huawei)

Hands-on to configure your cloud to be able to charge your users using official OpenStack components
Julien Danjou, Christophe Sautheir (Objectif Libre), and Maxime Cottret (Objectif Libre)

To OpenStack or not OpenStack; that is the question
Frank Wu

Distributed monitoring and analysis for telecom requirements
Tomofumi Hayashi, Yuki Kasuya (KDDI Research), and Ryota Mibu (NEC)

OVN support for multiple gateways and IPv6
Russell Bryant and Numan Siddique

Kuryr-Kubernetes: the seamless path to adding pods to your datacenter networking
Antoni Segura Puimedon, Irena Berezovsky (Huawei), and Ilya Chukhnakov (Mirantis)

Unlocking the performance secrets of Ceph object storage
Karan Singh, Kyle Bader, and Brent Compton

OVN hands-on tutorial part 1: introduction
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

Kuberneterize your baremetal nodes in OpenStack!
Ken Savich and Darin Sorrentino

OVN hands-on tutorial part 2: advanced
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

The Amazon effect on open source cloud business models
Flavio Percoco, Monty Taylor, Nati Shalom (GigaSpaces), and Yaron Haviv (Iguazio)

Neutron port binding and impact of unbound ports on DVR routers with floatingIP
Brian Haley and Swaminathan Vasudevan (HPE)

Upstream contribution - give up or double down?
Assaf Muller

Hyper cool infrastructure
Randy Robbins

Strategic distributed and multisite OpenStack for business continuity and scalability use cases
Rob Young

Per API role-based access control
Adam Young and Kristi Nikolla (Massachusetts Open Cloud)

Logging work group BoF
Erno Kuvaja, Rochelle Grober, Hector Gonzalez Mendoza (Intel), Hieu LE (Fujitsu), and Andrew Ukasick (AT&T)

Performance and scale analysis of OpenStack using Browbeat
Alex Krzos, Sai Sindhur Malleni, and Joe Talerico

Scaling Nova: how CellsV2 affects your deployment
Dan Smith

Ambassador community report
Erwan Gallen, Lisa-Marie Namphy (OpenStack Ambassador), Akihiro Hasegawa (Equinix), Marton Kiss (Aptira), and Akira Yoshiyama (NEC)

Thursday sessions

Examining different ways to get involved: a look at open source
Rob Wilmoth

CephFS backed NFS share service for multi-tenant clouds
Victoria Martinez de la Cruz, Ramana Raja, and Tom Barron

Create your VM in an (almost) deterministic way - a hands-on lab
Sudhir Kethamakka and Geetika Batra

RDO's continuous packaging platform
Matthieu Huin, Fabien Boucher, and Haikel Guemar (CentOS)

OpenDaylight Network Virtualization solution (NetVirt) with FD.io VPP data plane
Andre Fredette, Srikanth Vavilapalli (Ericsson), and Prem Sankar Gopanna (Ericsson)

Ceph snapshots for fun & profit
Gregory Farnum

Gnocchi and collectd for faster fault detection and maintenance
Julien Danjou and Emma Foley

Project update - TripleO
Emillien Macchi, Flavio Percoco, and Steven Hardy

Project update - Telemetry
Julien Danjou, Mehdi Abaakouk, and Gordon Chung (Huawei)

Turned up to 11: low latency Ceph block storage
Jason Dillaman, Yuan Zhou (Intel), and Tushar Gohad (Intel)

Who reads books anymore? Or writes them?
Michael Solberg and Ben Silverman (OnX Enterprise Solutions)

Pushing the boundaries of OpenStack - wait, what are they again?
Walter Bentley

Multi-site OpenStack - deployment option and challenges for a telco
Azhar Sayeed

Ceph project update
Sage Weil

 
Source: RedHat Stack

How do you build 12-factor apps using Kubernetes?

It's said that there are 12 factors that define a cloud-native application. It's also said that Kubernetes is designed for cloud native computing. So how do you create a 12-factor application using Kubernetes? Let's take a look at exactly what twelve-factor apps are and how they relate to Kubernetes.
What is a 12-factor application?
The Twelve-Factor App is a manifesto on architectures for Software as a Service, created by Heroku. The idea is that in order to be really suited to SaaS and avoid problems with software erosion (where, over time, an application that's not updated falls out of sync with the latest operating systems, security patches, and so on), an app should follow these 12 principles:

Codebase
One codebase tracked in revision control, many deploys
Dependencies
Explicitly declare and isolate dependencies
Config
Store config in the environment
Backing services
Treat backing services as attached resources
Build, release, run
Strictly separate build and run stages
Processes
Execute the app as one or more stateless processes
Port binding
Export services via port binding
Concurrency
Scale out via the process model
Disposability
Maximize robustness with fast startup and graceful shutdown
Dev/prod parity
Keep development, staging, and production as similar as possible
Logs
Treat logs as event streams
Admin processes
Run admin/management tasks as one-off processes

Let's look at what all of this means in terms of Kubernetes applications.
Principle I. Codebase
Principle 1 of a 12 Factor App is "One codebase tracked in revision control, many deploys".
For Kubernetes applications, this principle is actually embedded in the nature of container orchestration itself. Typically, you create your code in a source control repository such as a git repo, then store specific versions of your images in a registry such as Docker Hub. When you define the containers to be orchestrated as part of a Kubernetes Pod, Deployment, or DaemonSet, you also specify a particular version of the image, as in:

spec:
  containers:
    - name: acct-app
      image: acct-app:v3

In this way, you might have multiple versions of your application running in different deployments.
Applications can also behave differently depending on the configuration information with which they run.
Principle II. Dependencies
Principle 2 of a 12 Factor App is "Explicitly declare and isolate dependencies".
Making sure that an application's dependencies are satisfied is something that is practically assumed. For a 12-factor app, that includes not just making sure that the application-specific libraries are available, but also not counting on, say, shelling out to the operating system and assuming system libraries such as curl will be there. A 12-factor app must be self-contained.
That includes making sure that the application is isolated enough that it's not affected by conflicting libraries that might be installed on the host machine.
Fortunately, both of these requirements are handily satisfied by containers: the container includes all of the dependencies on which the application relies, even specific or unusual system requirements, and it also provides a reasonably isolated environment in which the application runs. (Contrary to popular belief, container environments are not completely isolated, but for most situations, they are good enough.)
For applications that are modularized and depend on other components, such as an HTTP service and a log fetcher, Kubernetes provides a way to combine all of these pieces into a single Pod, for an environment that encapsulates those pieces appropriately.
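As a minimal sketch of that HTTP-service-plus-log-fetcher pairing, the two containers below share one Pod and one log directory; the names, images, and the shared log path are illustrative, not from the original article:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-fetcher      # hypothetical example name
spec:
  containers:
    - name: http-service
      image: nginx:1.11           # each container carries its own dependencies
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-fetcher
      image: fluent/fluentd:v0.12 # sidecar that reads the same log directory
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}                # pod-scoped scratch volume shared by both containers

Both containers are scheduled, started, and stopped together, which keeps the dependency boundary of the whole unit explicit.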
Principle III. Config
Principle 3 of a 12 Factor App is "Store config in the environment".
The idea behind this principle is that an application should be completely independent from its configuration. In other words, you should be able to move it to another environment without having to touch the source code.
Some developers achieve this goal by creating configuration files of some sort, specifying details such as directories, hostnames, and database credentials. This is an improvement, but it does carry the risk that someone will check a config file into the source control repository.
Instead, 12-factor apps store their configurations as environment variables; these are, as the manifesto says, "unlikely to be checked into the repository by accident", and they're operating system independent.
Kubernetes enables you to specify environment variables in manifests via the Downward API, but as these manifests themselves do get checked into source control, that's not a complete solution.
Instead, you can specify that environment variables should be populated by the contents of Kubernetes ConfigMaps or Secrets, which can be kept separate from the application.  For example, you might define a Pod as:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
        - name: CONFIG_VERSION
          valueFrom:
            configMapKeyRef:
              name: redis-app-config
              key: version.id
As you can see, this Pod receives three environment variables, SECRET_USERNAME, SECRET_PASSWORD, and CONFIG_VERSION: the first two from referenced Kubernetes Secrets, and the third from a Kubernetes ConfigMap. This enables you to keep them out of configuration files.
Of course, there's still a risk of someone mishandling the files used to create these objects, but it's easier to keep those files together and institute secure handling policies than it is to weed out dozens of config files scattered around a deployment.
What's more, there are those in the community who point out that even environment variables are not necessarily safe. For example, if an app crashes, it may save all of the environment variables to a log or even transmit them to another service. Diogo Mónica points to a tool called Keywhiz that you can use with Kubernetes to create secure secret storage.
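For reference, the mysecret and redis-app-config objects consumed above might be defined as follows; the values are placeholders (base64-encoded in the Secret), and as discussed, these definition files themselves need careful handling:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=        # base64("admin") - placeholder only
  password: cGFzc3dvcmQ=    # base64("password") - placeholder only
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-app-config
data:
  version.id: "1.0.3"       # placeholder version string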
Principle IV. Backing services
Principle 4 of the 12 Factor App is "Treat backing services as attached resources".
In a 12-factor app, any services that are not part of the core application, such as databases, external storage, or message queues, should be accessed as a service (via an HTTP or similar request) and specified in the configuration, so that the source of the service can be changed without affecting the core code of the application.
For example, if your application uses a message queuing system, you should be able to change from RabbitMQ to ZeroMQ (or ActiveMQ or even something else) without having to change anything but configuration information.
This requirement has two implications for a Kubernetes-based application.
First, it means that you must think about how your applications take in (and give out) information. For example, if you have a backing database, you wouldn't want to have a local MySQL instance, even if you're replicating it to other instances. Instead, you would want to have a separate container that handles database operations, and make those operations callable via an API. This way, if you needed to change to, say, PostgreSQL or a remotely hosted MySQL server, you could create a new container image, update the Pod definition, and restart the Pod (or, more likely, the Deployment or StatefulSet managing it).
Similarly, if you're storing credentials or address information in environment variables backed by a ConfigMap, you can change that information and replace the Pod.
Note that both of these examples assume that while you're not making any changes to the source code (or even the container image for the main application), you will still need to replace the Pod; the ability to do this is actually another principle of a 12-factor app.
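One Kubernetes-native sketch of this pattern is a Service of type ExternalName: the application always connects to a stable in-cluster name, and pointing it at a different backing database is a one-field change. The service name and hostname below are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: app-database          # the stable name the application code uses
spec:
  type: ExternalName
  # Swap the backing service by editing only this field;
  # no application code or image changes are required.
  externalName: mysql.example-hosting.com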
Principle V. Build, release, run
Principle 5 of the 12 Factor App is "Strictly separate build and run stages".
These days it's hard to imagine a situation where this is not true, but a twelve-factor app must have a separate build stage. In other words, you should be able to build or compile the code, then combine that with specific configuration information to create a specific release, then deliberately run the release.
Releases should be identifiable. You should be able to say, "This deployment is running Release 1.14 of this application" or something similar, the same way we say we're running "the OpenStack Ocata release" or "Kubernetes 1.6". They should also be immutable; any changes should lead to a new release. If this sounds daunting, remember that when we say "application" we're no longer talking about large, monolithic releases. Instead, we're talking about very specific microservices, each of which has its own release, and which can bump releases without causing errors in consuming services.
All of this is so that when the app is running, the "run" process can be completely automated. Twelve-factor apps need to be capable of running in an automated fashion because they need to be capable of restarting should there be a problem.
Translating this to the Kubernetes realm, we've already said that the application needs to be stored in source control, then built with all of its dependencies. That's your build process. We talked about separating out the configuration information, so that's what needs to be combined with the build to make a release. And the ability to automatically run the application (or multiple copies of it) is precisely what Kubernetes constructs like Deployments, ReplicaSets, and DaemonSets provide.
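As a sketch, an identifiable, immutable release might look like this in a Deployment, reusing the acct-app:v3 image from the Principle I snippet; the names, labels, and replica count are illustrative:

apiVersion: extensions/v1beta1   # Deployment API group as of Kubernetes 1.5/1.6
kind: Deployment
metadata:
  name: acct-app
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: acct-app
        release: "1.14"          # an identifiable release label
    spec:
      containers:
        - name: acct-app
          image: acct-app:v3     # immutable tag; a new release means a new tag

Rolling to a new release means submitting a template with a new tag and label, never mutating the running containers.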
Principle VI. Processes
Principle 6 of the 12 Factor App is "Execute the app as one or more stateless processes".
Stateless processes are a core idea behind cloud native applications. Every twelve-factor application needs to run in individual, share-nothing processes. That means that any time you need to persist information, it needs to be stored in a backing service such as a database.
If you're new to cloud application programming, this might seem deceptively simple; many developers are used to "sticky" sessions, storing information in the session with the confidence that the next request will come to the same server. In a cloud application, however, you must never make that assumption.
Instead, if you're running a Kubernetes-based application that hosts multiple copies of a particular pod, you must assume that subsequent requests can go anywhere. To solve this problem, you will want to use some sort of backing volume or database for persistence.
One caveat to this principle is that Kubernetes StatefulSets can enable you to create Pods with stable network identities, so that you can, theoretically, direct requests to a particular pod. Technically speaking, if the process doesn't actually store state, and the pod can be deleted and recreated and still function properly, it satisfies this requirement; but that's probably pushing it a bit.
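For completeness, here is a minimal sketch of such a StatefulSet, which assumes a headless Service named web already exists; each replica gets a stable identity (web-0, web-1) that survives rescheduling:

apiVersion: apps/v1beta1    # StatefulSet API group as of Kubernetes 1.5
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web          # headless Service providing the stable network identities
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.11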
Principle VII. Port binding
Principle 7 of the 12 Factor App is "Export services via port binding".
In an environment where we're assuming that different functionalities are handled by different processes, it's easy to make the connection that these functionalities should be available via a protocol such as HTTP, so it's common for applications to be run behind web servers such as Apache or Tomcat. Twelve-factor apps, however, should not depend on an additional application in that way; remember, every function should be in its own process, isolated from everything else. Instead, the 12 Factor App manifesto recommends adding a web server library or something similar to the app itself, so that the app can await requests on a defined port, whether it's using HTTP or another protocol.
In a Kubernetes-based application, this is done partly through the architecture of the application itself, and partly by making sure that the application has all of its dependencies as part of the creation of the base containers on which the application is built.
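A sketch of what that looks like: the container declares the port its embedded server binds, and a Service maps a stable cluster-facing port onto it. The port numbers and names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: self-serving-app
  labels:
    app: self-serving-app
spec:
  containers:
    - name: app
      image: acct-app:v3       # the app embeds its own server library
      ports:
        - containerPort: 8080  # the port the process itself binds and exports
---
apiVersion: v1
kind: Service
metadata:
  name: self-serving-app
spec:
  selector:
    app: self-serving-app
  ports:
    - port: 80          # stable cluster-facing port
      targetPort: 8080  # forwarded to the port the app exports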
Principle VIII. Concurrency
Principle 8 of the 12 Factor App is to "Scale out via the process model".
When you're writing a twelve-factor app, make sure that you're designing it to be scaled out, rather than scaled up. That means that in order to add more capacity, you should be able to add more instances rather than more memory or CPU to the machine on which the app is running. Note that this specifically means being able to start additional processes on additional machines, which is, fortunately, a key capability of Kubernetes.
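Scaling out can even be automated. A sketch, assuming the acct-app Deployment from earlier and a metrics source such as Heapster running in the cluster:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: acct-app
spec:
  scaleTargetRef:
    kind: Deployment
    name: acct-app
  minReplicas: 3
  maxReplicas: 10                      # add instances, not memory or CPU
  targetCPUUtilizationPercentage: 80   # scale out when average CPU passes 80%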
Principle IX. Disposability
Principle 9 of the 12 Factor App is to "Maximize robustness with fast startup and graceful shutdown".
It seems as though this principle was tailor-made for containers and Kubernetes-based applications. The idea that processes should be disposable means that at any time, an application can die and the user won't be affected, either because there are others to take its place, because it'll start right up again, or both.
Containers are built on this principle, of course, and Kubernetes structures that manage multiple instances and maintain a certain level of availability even in the face of problems, such as ReplicaSets, complete the picture.
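At the Pod level, disposability translates into fast-start and graceful-shutdown hooks. A sketch; the health endpoint and drain command here are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: disposable-app
spec:
  terminationGracePeriodSeconds: 30    # time allowed for a graceful shutdown
  containers:
    - name: app
      image: acct-app:v3
      readinessProbe:                  # receive traffic only once actually ready
        httpGet:
          path: /healthz               # hypothetical health-check endpoint
          port: 8080
        initialDelaySeconds: 2
      lifecycle:
        preStop:                       # drain in-flight work before SIGTERM
          exec:
            command: ["/bin/sh", "-c", "sleep 5"]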
Principle X. Dev/prod parity
Principle 10 of the 12 Factor App is "Keep development, staging, and production as similar as possible".
This is another principle that seems like it should be obvious, but is deeper than most people think. On the surface level, it does mean that you should have identical development, staging, and production environments, inasmuch as that is possible. One way to accomplish this is through the use of Kubernetes namespaces, enabling you to (theoretically) run code on the same actual cluster against the same actual systems while still keeping environments separate. In some situations, you can also use tools such as Minikube or kubeadm-dind-cluster to create near-clones of production systems.
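A minimal sketch of that namespace-per-environment approach (the names are illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: staging    # e.g. one namespace each for dev, staging, and production

Creating the same objects with kubectl create --namespace=dev -f app.yaml and kubectl create --namespace=staging -f app.yaml then exercises the same cluster and the same manifests while keeping the environments separate.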
At a deeper level, though, as the Twelve-Factor App manifesto puts it, it's about three different types of "gaps":

The time gap: A developer may work on code that takes days, weeks, or even months to go into production.

The personnel gap: Developers write code, ops engineers deploy it.

The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.

The goal here is to create a Continuous Integration/Continuous Deployment situation in which changes can go into production virtually immediately (after testing, of course!), deployed by the developers who wrote them so they can actually see them in production, using the same tools on which the code was actually written, in order to minimize the possibility of compatibility errors between environments.
Some of these factors are outside of the realm of Kubernetes, of course; the personnel gap is a cultural issue, for example. The time and tools gaps, however, can be helped in two ways.
For the time gap, Kubernetes-based applications are, of course, based on containers, which themselves are based on images that are stored in version-control systems, so they lend themselves to CI/CD. They can also be updated via rolling updates that can be rolled back in case of problems, so they're well-suited to this kind of environment.
As far as the tools gap is concerned, the architecture of Kubernetes-based applications makes it easier to manage, both by making local dependencies simple to include in the various images, and by modularizing the application in such a way that external backing services can be standardized.
Principle XI. Logs
Principle 11 of the 12 Factor App is to "Treat logs as event streams".
While most traditional applications store log information in a file, the Twelve-Factor App directs it, instead, to stdout as a stream of events; it's the execution environment that's responsible for collecting those events. That might be as simple as redirecting stdout to a file, but in most cases it involves using a log router such as Fluentd and saving the logs to Hadoop or a service such as Splunk.
In Kubernetes, you have at least two choices for automatic logging capture: Stackdriver Logging if you're using Google Cloud, and Elasticsearch if you're not. You can find more information on setting Kubernetes logging destinations here.
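The application's side of the contract is simply to write to stdout. A sketch of a Pod that does nothing but emit an event stream, which the node-level collector then picks up:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
    - name: count
      image: busybox
      # Write events to stdout; the execution environment (e.g. a Fluentd
      # agent on the node) is responsible for collecting and routing them.
      args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

Running kubectl logs counter then shows the stream directly.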
Principle XII. Admin processes
Principle 12 of the 12 Factor App is "Run admin/management tasks as one-off processes".
This principle involves separating admin tasks, such as migrating a database or inspecting records, from the rest of the application. Even though they're separate, however, they must still be run in the same environment and against the same base code and configuration as the application, and their code must be shipped alongside the application to prevent drift.
You can implement this in a number of different ways in Kubernetes-based applications, depending on the size and scale of the application itself. For example, for small tasks, you might use kubectl exec to operate on a specific container, or you can use a Kubernetes Job to run a self-contained application. For more complicated tasks that involve orchestration of changes, however, you can also use Kubernetes Helm charts.
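As a sketch of the Job approach, a hypothetical one-off database migration might ship in the same image as the application (so the admin code can't drift from the app code) and run to completion exactly once; the task name and command are illustrative:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    metadata:
      name: db-migrate
    spec:
      containers:
        - name: migrate
          image: acct-app:v3     # same image and config as the application itself
          command: ["python", "manage.py", "migrate"]  # hypothetical admin task
      restartPolicy: Never       # run once to completion; don't restart on success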
How many of these factors did you hit?
Unless you're still building desktop applications, chances are you can feel good about hitting at least a few of these essential principles of twelve-factor apps. But chances are you also found at least one or two you can probably work a little harder on.
So we want to know: which of these factors are the biggest struggle for you? Where do you think you need to work harder, and what would make it easier for you?  Let us know in the comments.
Thanks to Jedrzej Nowak, Piotr Shamruk, Ivan Shvedunov, Tomasz Napierala and Lukasz Oles for reviewing this article!
Check out more Kubernetes resources.
Source: Mirantis

Comcast Business offers up IBM Cloud dedicated links

Comcast Business has partnered with IBM to offer its customers a direct, dedicated network link to IBM Cloud and its global network of 50 data centers in 19 countries.
Using those direct links, customers will be able to access the network at speeds of up to 10 gigabits per second.
The partnership gives enterprise customers "more choices for connectivity so they can store data, optimize their workloads and execute mission-critical applications in the cloud, whether it be on-premise, off-premise or a combination of the two," said Jeff Lewis, vice president of data services at Comcast Business.
Enterprises can also gain greater speed, reliability and security with dedicated links than with a standard, open internet connection. Services will be backed by a service-level agreement.
"Enterprises definitely need help with cloud implementations and anything that can make it easier for them is a good thing," Jack Gold, an analyst at J. Gold Associates, told ComputerWorld.
Read more in ComputerWorld's full article.
Source: Thoughts on Cloud

Watson identifies the best shots at the Masters

Golf fans know great shots when they see them. And now, Watson does, too.
For this year's Masters Tournament, IBM, which has a long history with the Masters as a technology partner, is making use of Watson's Cognitive Highlights capability to find those memorable moments and spotlight them at Augusta National Golf Club and in cloud-based streaming video apps. It's a first for sporting events.
"This year, they really wanted to take the Masters' digital projects to a new level, so we began thinking about how we can have an immersive video experience and what would make that space even more impressive," said John Kent, program manager for the IBM worldwide sports and entertainment partnership group. "That's how Watson became involved."
The Watson Cognitive Highlights technology uses factors including player gestures and crowd noise to pinpoint shots worthy of replay.
For more, check out ZDNet's full article.
Source: Thoughts on Cloud

Cognitive computing and analytics come to mobile solutions for employees

The Drum caught up with Gareth Mackown, partner and European mobile leader at IBM Global Business Services, at Mobile World Congress this week in Barcelona to ask him how mobile solutions are becoming more vital not only for an enterprise's customers, but also for its employees.
"Today, organizations are really being defined by the experiences they create," Mackown said in an interview. "Often, you think of that in terms of customers, but more and more we're seeing employee experience being a really defining factor."
IBM partnered with Apple to transform employee experiences through mobility, he said, and it's just getting started. Internet of Things (IoT) technology, cognitive computing and analytics will make those mobile solutions "even more critical" for people working in all kinds of different fields.
Mackown pointed to the new IBM partnership with Santander, announced at Mobile World Congress. "We're helping them design and develop a suite of business apps to help them transform the employee experience they have for their business customers."
The video below includes the interview with Mackown, along with mobile business leaders from several other large companies.

Find out more in The Drum's full article.
Source: Thoughts on Cloud

53 new things to look for in OpenStack Ocata

With a shortened development cycle, you'd think we'd have trouble finding 53 new features of interest in OpenStack Ocata, but with so many projects (more than 60!) under the Big Tent, we actually had a little bit of trouble narrowing things down. We did a live webinar talking about 157 new features, but here's our standard 53. (Thanks to the PTLs who helped us out with weeding it down from the full release notes!)
Nova (OpenStack Compute Service)

VM placement changes: The Nova filter scheduler will now use the Placement API to filter compute nodes based on CPU/RAM/Disk capacity.
High availability: Nova now uses Cells v2 for all deployments; currently implemented as single cells, the next release, Pike, will support multi-cell clouds.
Neutron is now the default networking option.
Upgrade capabilities: Use the new 'nova-status upgrade check' CLI command to see what's required to upgrade to Ocata.

Keystone (OpenStack Identity Service)

Per-user Multi-Factor-Auth rules (MFA rules): You can now specify multiple forms of authentication before Keystone will issue a token.  For example, some users might just need a password, while others might have to provide a time-based one time password and an additional form of authentication.
Auto-provisioning for federated identity: When a user logs into a federated system, Keystone will dynamically create a role for that user; previously, the user had to log into that system independently, which was confusing to users.
Validate an expired token: Finally, no more failures due to long-running operations such as uploading a snapshot. Each project can specify whether it will accept expired tokens, and just HOW expired those tokens can be.

Swift (OpenStack Object Storage)

Improved compatibility: Byteorder information is now included in Ring files to support machines with different endianness.
More flexibility: You can now configure the URL base for static web. You can also set the "filename" parameter in TempURLs and validate those TempURLs against a common prefix.
More data: If you're dealing with large objects, you can now use multi-range GETs and HTTP 416 responses.

Cinder (OpenStack Block Storage)

Active/Active HA: Cinder can now run in Active/Active clustered mode, preventing concurrent operation conflicts. Cinder will also handle mid-processing service failures better than in past releases.
New attach/detach APIs: If you've been confused about how to attach and detach volumes to and from VMs, you're not alone. The Ocata release saw the Cinder team refactor these APIs in preparation for adding the ability to attach a single volume to multiple VMs, expected in an upcoming release.

Glance (OpenStack Image Service)

Image visibility: Users can now create "community" images, making them available for everyone else to use. You can also mark an image as "shared" so that only certain users have access.

Neutron (OpenStack Networking Service)

Support for Routed Provider Networks in Neutron: You can now use the Nova GRP (Generic Resource Pools) API to publish networks in IPv4 inventory. Also, the Nova scheduler uses this inventory as a hint to place instances based on IPv4 address availability in routed network segments.
Resource tag mechanism: You can now create tags for subnet, port, subnet pool and router resources, making it possible to do things like map different networks in different OpenStack clouds in one logical network, or tag provider networks (e.g. high-speed, high-bandwidth, dial-up).

Heat (OpenStack Orchestration Service)

Notification and application workflow: Use the new OS::Zaqar::Notification to subscribe to Zaqar queues for notifications, or the OS::Zaqar::MistralTrigger for just Mistral notifications.

Horizon (OpenStack Dashboard)

Easier profiling and debugging: The new Profiler Panel uses the os-profiler library to provide profiling of requests through Horizon to the OpenStack APIs so you can see what's going on inside your cloud.
Easier Federation configuration: If Keystone is configured with Keystone to Keystone (K2K) federation and has service providers, you can now choose Keystone providers from a dropdown menu.

Telemetry (Ceilometer)

Better instance discovery:  Ceilometer now uses libvirt directly by default, rather than nova-api.

Telemetry (Gnocchi)

Dynamically resample measures through a new API.
New collectd plugin: Store metrics generated by collectd.
Store data on Amazon S3 with new storage driver.

Dragonflow (Distributed SDN Controller)

Better support for modern networking: Dragonflow now supports IPv6 and distributed sNAT.
Live migration: Dragonflow now supports live migration of VMs.

Kuryr (Container Networking)

Neutron support: Neutron networking is now available to containers running inside a VM.  For example, you can now assign one Neutron port per container.
More flexibility with driver-based support: Kuryr-libnetwork now allows you to choose between ipvlan, macvlan or Neutron vlan trunk ports or even create your own driver. Also, Kuryr-kubernetes has support for ovs hybrid, ovs native and Dragonflow.
Container Networking Interface (CNI):  You can now use the Kubernetes CNI with Kuryr-kubernetes.
More platforms: The controller now handles Pods on bare metal, handles Pods in VMs by providing them Neutron subports, and provides services with LBaaSv2.

Vitrage (Root Cause Analysis Service)

A new collectd datasource: Use this fast system statistics collection daemon, with plugins that collect different metrics. From Ifat Afek: "We tested the DPDK plugin, that can trigger alarms such as interface failure or noisy neighbors. Based on these alarms, Vitrage can deduce the existence of problems in the host, instances and applications, and provide the RCA (Root Cause Analysis) for these problems."
New "post event" API: This general-purpose API allows easy integration of new monitors into Vitrage.
Multi-tenancy support: A user will only see alarms and resources which belong to that user's tenant.

Ironic (Bare Metal Service)

Easier, more powerful management: Thanks to a revamp of how drivers are composed, "dynamic drivers" enable users to select a "hardware type" for a machine rather than working through a matrix of hardware types. Users can independently change the deploy method, console manager, RAID management, power control interface and so on. Ocata also brings the ability to do soft power off and soft reboot, and to send non-maskable interrupts through both the Ironic and Nova APIs.

TripleO (Deployment Service)

Easier per-service upgrades: Perform step-by-step tasks as batched/rolling upgrades or in parallel. All roles, including custom roles, can be upgraded this way.
Composable High-Availability architecture: Services managed by Pacemaker, such as galera, redis, VIPs, haproxy, cinder-volume, rabbitmq, cinder-backup, and manila-share, can now be deployed in multiple clusters, making it possible to scale out the number of nodes running these services.

OpenStackAnsible (Ansible Playbooks and Roles for Deployment)

Additional support: OpenStack-Ansible now supports CentOS 7, as well as integration with Ceph.

Puppet OpenStack (Puppet Modules for Deployment)

New modules and functionality: The Ocata release includes new modules for puppet-ec2api, puppet-octavia, puppet-panko and puppet-watcher. Also, existing modules support configuring the [DEFAULT]/transport_url configuration option. This change makes it possible to support AMQP providers other than rabbitmq, such as zeromq.

Barbican (Key Manager Service)

Testing:  Barbican now includes a new Tempest test framework.

Congress (Governance Service)

Network address operations: The policy language has been enhanced to enable users to specify network policy use cases.
Quick start: Congress now includes a default policy library so that it's useful out of the box.

Monasca (Monitoring)

Completion of Logging-as-a-Service: Kibana support and integration is now complete, enabling you to push/publish logs to the Monasca Log API. Logs are authenticated and authorized using Keystone and stored scoped to a tenant/project, so users can only see information from their own logs.
Container support:  Monasca now supports monitoring of Docker containers, and is adding support for the Prometheus monitoring solution. Upcoming releases will also see auto-discovery and monitoring of applications launched in a Kubernetes cluster.

Trove (Database as a Service)

Multi-region deployments: Database clusters can now be deployed across multiple OpenStack regions.

Mistral (Taskflow as a Service)

Multi-node mode: You can now deploy the Mistral engine in multi-node mode, providing the ability to scale out.

Rally (Benchmarking as a Service)

Expanded verification options:  Whereas previous versions enabled you to use only Tempest to verify your cluster, the newest version of Rally enables you to use other forms of verification, which means that Rally can actually be used for the non-OpenStack portions of your application and infrastructure. (You can find the full release notes here.)

Zaqar (Message Service)

Storage replication:  You can now use Swift as a storage option, providing built-in replication capabilities.

Octavia (Load Balancer Service)

More flexibility for Load Balancer as a Service: You may now use Neutron host routes and custom MTU configurations when configuring LBaaS.

Solum (Platform as a Service)

Responsive deployment: You may now configure deployments based on GitHub triggers, which means that you can implement CI/CD by specifying that your application should redeploy when there are changes.

Tricircle (Networking Automation Across Neutron Service)

DVR support in local Neutron: The East-West and North-South bridging networks have been combined into a single North-South bridging network, making it possible to support DVR in local Neutron.

Kolla (Container Based Deployment)

Dynamic volume provisioning: Kolla-Kubernetes by default uses Ceph for stateful storage, and with Kubernetes 1.5, support was added for Ceph and dynamic volume provisioning as requested by claims made against the API server.
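On the Kubernetes side, that dynamic provisioning flow looks roughly like the sketch below: a StorageClass describes the Ceph RBD backend, and a claim against the API server triggers volume creation. The monitor address, pool, and secret names are placeholders; a real Kolla-Kubernetes deployment supplies its own values:

apiVersion: storage.k8s.io/v1beta1     # StorageClass API group as of Kubernetes 1.5
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd         # in-tree Ceph RBD provisioner
parameters:
  monitors: 10.0.0.1:6789              # placeholder Ceph monitor address
  pool: kube                           # placeholder RBD pool
  adminId: admin
  adminSecretName: ceph-admin-secret   # hypothetical Secret holding the Ceph admin key
  userId: kube
  userSecretName: ceph-user-secret     # hypothetical per-namespace user Secret
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-data                   # illustrative claim for a stateful service
  annotations:
    volume.beta.kubernetes.io/storage-class: ceph-rbd   # pre-1.6 way to request a class
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi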

Freezer (Backup, Restore, and Disaster Recovery Service)

Block incremental backups:  Ocata now includes the Rsync engine, enabling these incremental backups.

Senlin (Clustering Service)

Generic Event/Notification support: In addition to its usual capability of logging events to a database, Senlin now enables you to add the sending of events to a message queue and to a log file, enabling dynamic monitoring.

Watcher (Infrastructure Optimization Service)

Multiple-backend support: Watcher now supports metrics collection from multiple backends.

Cloudkitty (Rating Service)

Easier management: CloudKitty now includes a Horizon wizard and hints on the CLI to determine the available metrics. Also, CloudKitty is now part of the unified OpenStack client.

Source: Mirantis

IBM Machine Learning comes to private cloud

Billions of transactions in banking, transportation, retail, insurance and other industries take place in the private cloud every day. For many enterprises, the z System mainframe is the home for all that data.
For data scientists, it can be hard to keep up with all that activity and those vast swaths of data. So IBM has taken its core Watson machine learning technology and applied it to the z System, enabling data scientists to automate the creation, training and deployment of analytic models to understand their data more completely.
IBM Machine Learning supports any language, any popular machine learning framework and any transactional data type without the cost, latency and risk that comes with moving data off premises. It also includes cognitive automation to help data scientists choose the right algorithms by which to analyze and process their organization's specific data stores.
One company that is evaluating the IBM Machine Learning technology is Argus Health, which hopes to help healthcare providers and patients navigate the increasingly complex healthcare landscape.
"Helping our health plan clients achieve the best clinical and financial outcomes by getting the best care delivered at the best price in the most appropriate place is the mission of Argus while focused on the vision of becoming preeminent in providing pharmacy and healthcare solutions," said Marc Palmer, president of Argus Health.
For more, check out CIO Today's full article.
Source: Thoughts on Cloud