Interviews at OpenStack PTG Denver

I’m attending PTG this week to conduct project interviews. These interviews have several purposes. Please consider all of the following when thinking about what you might want to say in your interview:

Tell the users/customers/press what you’ve been working on in Rocky
Give them some idea of what’s (what might be?) coming in Stein
Put a human face on the OpenStack project and encourage new participants to join us
You’re welcome to promote your company’s involvement in OpenStack but we ask that you avoid any kind of product pitches or job recruitment

In the interview I’ll ask some leading questions and it’ll go easier if you’ve given some thought to them ahead of time:

Who are you? (Your name, your employer, and the project(s) on which you are active.)
What did you accomplish in Rocky? (Focus on the 2-3 things that will be most interesting to cloud operators)
What do you expect to be the focus in Stein? (At the time of your interview, it’s likely that the meetings will not yet have decided anything firm. That’s ok.)
Anything further about the project(s) you work on or the OpenStack community in general.

Finally, note that there are only 40 interview slots available, so please consider coordinating with your project to designate the people you want to represent the project, so that we don’t end up with 12 interviews about Neutron, or whatever.
I mean, I LOVE me some Neutron, but let’s give some other projects love, too.
It’s fine to have multiple people in one interview – maximum 3, probably.
Interview slots are 30 minutes, in which time we hope to capture somewhere between 10 and 20 minutes of content. It’s fine to run shorter, but 15 minutes is probably an ideal length.
Source: RDO

Stein PTG Summary for Documentation and i18n

Ian Y. Choi and I already shared a summary of docs and i18n updates from the Stein Project Teams Gathering with the openstack-dev mailing list, but I also wanted to post the updates here for wider distribution. So, here is what I found most interesting from the docs- and i18n-related meetings and discussions we had in Denver from 10 through 14 September.
The overall schedule for all our sessions with additional comments and meeting minutes can be found in OpenStack Etherpad.
First things first: the following is our obligatory team picture (with quite a few members missing); picture courtesy of the OpenStack Foundation folks:

Operators documentation
We met with the Ops community to discuss the future of Ops docs. The plan is for the Ops group to take ownership of the operations-guide (done), ha-guide (in progress), and the arch-design guide (to do).
These three documents are being moved from the openstack-manuals repository to their own repos, owned by the newly formed Operations Documentation SIG.
See also ops-meetup-ptg-denver-2018-operations-guide for more notes.
Documentation site and design
We discussed improving the docs.openstack.org site navigation, guide summaries (in particular, install-guide), adding a new index page for project team contrib guides, and more. We met with the OpenStack Foundation staff to discuss the possibility of getting assistance with site design work.
We are also looking into accepting contributions from the Strategic Focus Areas folks to make parts of the docs toolchain like openstackdocstheme more easily reusable outside of the official OpenStack infrastructure. Support for some of the external project docs has already landed in git.
We got feedback on our front page template for project team docs, with Ironic being the pilot for us.
We got input on restructuring and reworking the specs site to make it easier for users to understand that specs are neither feature descriptions nor project docs, and to make the way project teams publish their specs more consistent. This will need to be further discussed with the folks owning the specs site infra.
Support status badges showing at the top of docs.openstack.org pages may not work well for projects following the cycle-with-intermediary release model, such as swift. We need to rethink how we configure and present the badges.
There are also some UX bugs present in badges (for instance, bug 1788389).
Translations
We met with the infra team to discuss progress on translating project team docs and, related to that, generating PDFs.
With the Foundation staff, we discussed translating Edge and Container whitepapers and similar material.
More details in Ian’s notes.
Reference, REST API docs and Release Notes
With the QA team, we discussed the scope and purpose of the /doc/source/reference documentation area in project docs. Because the scope of /reference might be unclear and it can be used inconsistently by project teams, the suggestion is to continue with the original migration plan and migrate REST API and possibly Release Notes under /doc/source, as documented in doc-contrib-guide.
Contributor Guide
The OpenStack Contributor Guide was discussed in a separate session, see FC_SIG_ptg_stein for notes.
Thanks!
Finally, I’d like to thank everybody who attended the sessions, and a special thanks goes to all the PTG organizers and the OpenStack community in general for all their work!
Source: RDO

Introducing Networking-Ansible

During the OpenStack Rocky release cycle a new OpenStack ML2 driver project was established: networking-ansible. This project integrates OpenStack with the Ansible Networking project. Ansible Networking is the part of the Ansible project that focuses on providing an Ansible interface for network operators to manage network switch configuration. By consuming Ansible Networking as the backend interface for network switch configuration, we push the details of communicating with the switching hardware down to the Ansible layer. This creates the opportunity to support multiple switching platforms in a single ML2 driver, which reduces the maintenance overhead for OpenStack operators integrating a heterogeneous network environment with baremetal guest deployments: only a single ML2 driver needs to be configured.
The networking-ansible team had two general goals in the Rocky release cycle. First, to establish the project. A significant amount of work was completed in the Rocky release cycle to establish OpenStack repositories and tracking tools, RDO packaging, upstream testing, and integration with Neutron, Ansible Networking, and TripleO. We completed, and in some ways exceeded, our goals here. A big thank you to the RDO and OpenStack community members who contributed to this project’s successful establishment. Second, we intended to support a single initial use case: one basic feature focused on one switch platform. We accomplished and exceeded this goal as well. The Ironic project needs the ability to modify the switch port a baremetal guest is connected to, so that the node can be placed on the Ironic provisioning network for provisioning and then moved to the Neutron-assigned tenant network for guest tenant network traffic. This use case assumes a single network interface on the guest, attached to a switch port in access mode. Using networking-ansible, Neutron can swap the access port’s VLAN between the Ironic provisioning network and the Neutron-assigned tenant network VLAN using Ansible Networking as its backend. We ended up testing on OVS and a Juniper QFX this cycle. Untested code exists for EOS and NXOS.
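The access-port VLAN swap described above can be sketched in a few lines. Everything in this snippet (function names, the task layout, host and VLAN values) is hypothetical and only illustrates the idea of abstracting switch configuration into Ansible-style tasks; it is not the driver's actual code:

```python
# Illustrative sketch only: names and task shape are hypothetical, not the
# actual networking-ansible implementation.

PROVISIONING_VLAN = 100  # Ironic provisioning network (example value)

def access_vlan_task(switch_host, port_name, vlan_id):
    """Build an Ansible-Networking-style task that puts an access port
    on the given VLAN."""
    return {
        "hosts": switch_host,
        "tasks": [{
            "name": f"Assign {port_name} to VLAN {vlan_id}",
            "vars": {"port": port_name, "vlan": vlan_id},
        }],
    }

def swap_to_tenant(switch_host, port_name, tenant_vlan):
    """Move a port from the provisioning VLAN to the Neutron-assigned
    tenant VLAN (the Ironic use case described above)."""
    return access_vlan_task(switch_host, port_name, tenant_vlan)

# Example: hand the guest's access port over to tenant VLAN 200
task = swap_to_tenant("leaf1.example.com", "xe-0/0/7", 200)
```

Because the task is just data handed to Ansible Networking, the same call works regardless of whether the switch speaks Junos, EOS, or NX-OS; that is the single-driver, multi-platform point made above.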
Looking towards the future, we have planned a set of goals for the OpenStack Stein release cycle. First, support for more platforms: there are a handful of switch platforms that we have gained access to, and we plan to add support for them to the code base and work through as much testing as possible. Second, improved security and trunk port support: we are in the process of adopting Ansible Vault to store switch credentials, and working on the ability to configure a baremetal guest’s port in trunk mode so it can connect to multiple networks. Finally, exposing a Python API: the underlying code that interfaces with Ansible Networking does not need any hard dependencies on OpenStack. An API will be exposed and documented that is isolated from OpenStack dependencies. This API will be useful for use cases that want the abstracted interface to networking hardware via Ansible Networking, but have different management needs than what OpenStack offers.
My congratulations go out to the team and supporting community members who worked on this project for a very successful release cycle. My thanks again to the OpenStack and RDO communities for the support offered as we established this project. I look forward to adding the new features being worked on, and I hope we’ll be just as successful in completing our new goals six months from now.
Source: RDO

RDO Rocky Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Rocky for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Rocky is the 18th release from the OpenStack project, which is the work of more than 1400 contributors from around the world.
The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/7/cloud/x86_64/openstack-rocky/
The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.
All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.
Photo via Good Free Photos
New and Improved
Interesting things in the Rocky release include:

The new Neutron ML2 driver networking-ansible has been included in RDO. This module delegates management of, and interaction with, switching hardware to Ansible Networking.
Swift3 has been moved into the swift package as the “s3api” middleware.
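For operators, the practical effect of the s3api change is a proxy pipeline update: the separately packaged swift3 middleware is replaced by the swift-bundled s3api. A minimal illustrative fragment follows; the pipeline here is abbreviated for clarity, not a complete configuration:

```ini
# proxy-server.conf (fragment; illustrative, not a complete pipeline)
[pipeline:main]
# "swift3" in the old pipeline becomes "s3api", now shipped with swift itself
pipeline = catch_errors cache s3api tempauth proxy-server

[filter:s3api]
use = egg:swift#s3api
```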

Other improvements include:

Metalsmith is now included in RDO. This is a simple tool to provision bare metal machines using ironic, glance and neutron.

Contributors
During the Rocky cycle, we saw the following new contributors:

Bob Fournier
Bogdan Dobrelya
Carlos Camacho
Carlos Goncalves
Cédric Jeanneret
Charles Short
Dan Smith
Dustin Schoenbrun
Florian Fuchs
Goutham Pacha Ravi
Ilya Etingof
Konrad Mosoń
Luka Peschke
mandreou
Nate Johnston
Sandhya Dasu
Sergii Golovatiuk
Tobias Urdin
Tony Breeds
Victoria Martinez de la Cruz
Yaakov Selkowitz

Welcome to all of you and Thank You So Much for participating!
But we wouldn’t want to overlook anyone. A super massive Thank You to all SIXTY-NINE contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories:

Ade Lee
Alan Bishop
Alan Pevec
Alex Schultz
Alfredo Moralejo
Bob Fournier
Bogdan Dobrelya
Brad P. Crochet
Carlos Camacho
Carlos Goncalves
Cédric Jeanneret
Chandan Kumar
Charles Short
Christian Schwede
Daniel Alvarez
Daniel Mellado
Dansmith
Dmitry Tantsur
Dougal Matthews
Dustin Schoenbrun
Emilien Macchi
Eric Harney
Florian Fuchs
Goutham Pacha Ravi
Haikel Guemar
Honza Pokorny
Ilya Etingof
James Slagle
Jason Joyce
Javier Peña
Jistr
Jlibosva
Jon Schlueter
Juan Antonio Osorio Robles
karthik s
Kashyap Chamarthy
Kevin Tibi
Konrad Mosoń
Lon
Luigi Toscano
Luka Peschke
marios
Martin André
Matthew Booth
Matthias Runge
Mehdi Abaakouk
Nate Johnston
Nmagnezi
Oliver Walsh
Pete Zaitcev
Pradeep Kilambi
rabi
Radomir Dopieralski
Ricardo Noriega
Sandhya Dasu
Sergii Golovatiuk
shrjoshi
Steve Baker
Thierry Vignaud
Tobias Urdin
Tom Barron
Tony Breeds
Tristan de Cacqueray
Victoria Martinez de la Cruz
Yaakov Selkowitz
yatin

The Next Release Cycle
At the end of one release, focus shifts immediately to the next. Stein has a slightly longer release cycle due to the PTG/Summit co-location next year, with an estimated GA the week of 08-12 April 2019. The full schedule is available at https://releases.openstack.org/stein/schedule.html.
Twice during each release cycle, RDO hosts official Test Days shortly after the first and third milestones; therefore, the upcoming test days are 01-02 November 2018 for Milestone One and 14-15 March 2019 for Milestone Three.
Get Started
There are three ways to get started with RDO.
To spin up a proof-of-concept cloud quickly on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.
For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.
Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.
Get Help
The RDO Project participates in a Q&A service at https://ask.openstack.org. We also have our users@lists.rdoproject.org mailing list (https://lists.rdoproject.org/mailman/listinfo/users) for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.
The #rdo channel on Freenode IRC is also an excellent place to find and give help.
We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience within the RDO venues.
Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.
Join us in #rdo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook, Google+ and YouTube.
Source: RDO

Community Blog Round Up 05 December 2018

Adam Young discusses OpenStack’s access policy and then dives deep into creating a self trust in Keystone; Lars Kellogg-Stedman helps us manage USB gadgets using systemd and integrate a password management service with Ansible; and Pablo Iranzo Gómez shows how OpenStack contributions are peer reviewed.

Scoped and Unscoped access policy in OpenStack by Adam Young

Ozz did a fantastic job laying out the rules around policy. This article assumes you’ve read that. I’ll wait. I’d like to dig a little deeper into how policy rules should be laid out, and a bit about the realities of how OpenStack policy has evolved. OpenStack uses the policy mechanisms described there to limit access to various APIs. In order to make sensible decisions, the policy engine needs to know some information about the request and the user making it.
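The idea can be illustrated with a toy rule evaluator in the spirit of (but far simpler than) oslo.policy. The rule syntax shown and the helper names are simplified for illustration, not the real library's API:

```python
# Toy policy check in the spirit of oslo.policy rules such as
# "role:admin or project_id:%(project_id)s" (greatly simplified).

def check(rule, creds, target):
    """Evaluate a single 'key:value' policy atom against the request
    credentials and the target object."""
    key, _, value = rule.partition(":")
    if key == "role":
        return value in creds.get("roles", [])
    if key == "project_id":
        # "%(project_id)s" substitutes a field from the target object
        expected = value % target if "%" in value else value
        return creds.get("project_id") == expected
    return False

def enforce(rule, creds, target):
    """'or' of atoms: access is granted if any atom matches."""
    return any(check(atom.strip(), creds, target)
               for atom in rule.split(" or "))

# A plain project member is allowed because the target belongs to
# their project, even though they lack the admin role.
creds = {"roles": ["member"], "project_id": "p1"}
allowed = enforce("role:admin or project_id:%(project_id)s",
                  creds, {"project_id": "p1"})
```

This also shows why the engine needs both the credentials and the target: the same rule gives different answers for a request against someone else's project.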

Read more at https://adam.younglogic.com/2018/11/scoped-and-unscoped-access-policy-in-openstack/
Systemd unit for managing USB gadgets by Lars Kellogg-Stedman

The Pi Zero (and Zero W) have support for acting as a USB gadget: that means that they can be configured to act as a USB device — like a serial port, an ethernet interface, a mass storage device, etc. There are two different ways of configuring this support. The first only allows you to configure a single type of gadget at a time, and boils down to: Enable the dwc2 overlay in /boot/config.txt, Reboot, modprobe g_serial.
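The single-gadget recipe above amounts to two small pieces of configuration; the following sketch is illustrative only (the unit file is a guess at the shape, not the actual unit from Lars's article):

```ini
# /boot/config.txt -- enable the dwc2 USB controller overlay, then reboot
dtoverlay=dwc2

# /etc/systemd/system/usb-gadget-serial.service -- load the serial gadget
# module at boot (illustrative unit, not the one from the article)
[Unit]
Description=USB serial gadget

[Service]
Type=oneshot
ExecStart=/sbin/modprobe g_serial
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```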

Read more at https://blog.oddbit.com/2018/10/19/systemd-unit-for-managing-usb-/
Integrating Bitwarden with Ansible by Lars Kellogg-Stedman

Bitwarden is a password management service (like LastPass or 1Password). It’s unique in that it is built entirely on open source software. In addition to the web UI and mobile apps that you would expect, Bitwarden also provides a command-line tool for interacting with your password store.

Read more at https://blog.oddbit.com/2018/10/19/integrating-bitwarden-with-ans/
Creating a Self Trust In Keystone by Adam Young

Let’s say you are an administrator of an OpenStack cloud. This means you are pretty much all-powerful in the deployment. Now you need to perform some operation, but you don’t want to use full admin privileges for it. Why not? Well, do you work as root on your Linux box? I hope not. Here’s how to set up a self trust for a reduced set of roles on your token.

Read more at https://adam.younglogic.com/2018/10/creating-a-self-trust-in-keystone/
Contributing to OSP upstream a.k.a. Peer Review by Pablo Iranzo Gómez

In the article “Contributing to OpenStack” we covered how to prepare accounts and prepare your changes for submission upstream (and even how to find low-hanging fruit to start contributing). Here, we’ll cover what happens behind the scenes to get a change published.

Read more at https://iranzo.github.io/blog/2018/10/16/osp-upstream-contribution/
Source: RDO

3 key ideas to help drive compliance in the cloud

Deploying critical data and workloads in a cloud environment can drive numerous benefits, such as reduced costs and faster time to market for products and services.
When designing a strategy for regulatory compliance in cloud deployments, however, IT leaders must first make some big decisions.
For example, the choice of public, private or hybrid cloud may depend on whether your business is risk tolerant of sharing at the hypervisor level or if it requires dedicated physical servers. Also, how will your compliance strategy affect recovery and business continuity in the event of a disaster?
The CIA triad
To help navigate these decisions, start with the basics. This simple diagram illustrates the three key components to creating an effective strategy for information security.
I call it the “CIA triad.” CIA stands for:

Confidentiality through preventing access by unauthorized users.
Integrity from validating that your data is trustworthy and accurate.
Availability by ensuring data is available when needed.

Technology, procedures and auditing
I recommend a three-pronged approach to designing a compliance strategy that addresses each area of the triad.
The first prong is technology. An effective cloud infrastructure should include controls that enable you to manage user access to the environment, using software-defined architecture such as virtual or host-based firewalls to isolate, segment and protect data. The infrastructure should also help meet availability targets for critical data with service-level agreements (SLAs) that go up to the application layer.
The second prong consists of procedures and processes for successfully implementing this technology. This includes the use of operational plans and metrics to achieve the strategic and organizational goals set forth by management. These procedures should define the roles of each team member and outline security policies to help ensure the confidentiality of the data.
Once your infrastructure and procedures are in place, it’s a good idea to work with a third party who can audit your environment and policies. This auditing process should help determine what control framework will be used and an approach to validating successful implementation. A qualified auditor can also identify compliance practices that align with the core business. For example, if e-retail is a core business function, then Payment Card Industry (PCI) standards should be considered.
Compliance on IBM Cloud session at Think 2018
At Think 2018, I will host a Think Tank session to dive deeper into these topics, discussing how IBM Cloud can help businesses meet industry and regulatory compliance requirements such as PCI, FedRAMP and HIPAA.
Along with Barbara Davis, offering manager for managed hosting and application services, I will highlight ways to deploy SAP data and applications more efficiently in a managed cloud environment. To join our conversation, go to the Think 2018 website to register for the event and enroll in the session.
Learn more about Cloud Managed Application Services.
The post 3 key ideas to help drive compliance in the cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

KQueen: The Open Source Multi-cloud Kubernetes Cluster Manager

The post KQueen: The Open Source Multi-cloud Kubernetes Cluster Manager appeared first on Mirantis | Pure Play Open Cloud.
Kubernetes got a lot of traction in 2017, and I’m one of those people who believes that in 2 years the Kubernetes API could become the standard interface for consuming infrastructure. In other words, the same way that today we get an IP address and ssh key for our virtual machine on AWS, we might get a Kubernetes API endpoint and kubeconfig from our cluster. With the recent AWS announcement about EKS bringing this reality even closer, let me give you my perspective on k8s installation and trends that we see at Mirantis.
Existing tools
Recently I did some personal research, and I discovered the following numbers around the Kubernetes community:

~22 k8s distribution support providers
~10 k8s community deployment tools
~20 CaaS vendors

There are lot of companies that provide Kubernetes installation and management, including Stackpoint.io, Kubermatic (Loodse), AppsCode, Giant Swarm, Huawei Container Engine, CloudStack Container Service, Eldarion Cloud (Eldarion), Google Container Engine,  Hasura Platform, Hypernetes (HyperHQ), KCluster, VMware Photon Platform, OpenShift (Red Hat), Platform9 Managed Kubernetes, and so on.
All of those vendor solutions focus more or less on “their way” of k8s cluster deployment, which usually means a specific deployment procedure with defined packages, binaries, and artifacts. Moreover, while some of these installers are available as open source packages, they’re not intended to be modified, and when delivered by a vendor, there’s often no opportunity for customer-centric modification.
There are reasons, however, why this approach is not enough for the enterprise customer use case. Let me go through them.
Customization: Our real Kubernetes deployments and operations have demonstrated that we cannot just deploy a cluster on a custom OS with binaries. Enterprise customers have various security policies, configuration management databases, and specific tools, all of which are required to be installed on the OS. A very good example of this situation is a customer from the financial sector: the first time they started their golden OS image at AWS, it took 45 minutes to boot. This makes it impossible for some customers to run the native managed k8s offerings at public cloud providers.
Multi-cloud: Most existing vendors don’t solve the question of how to manage clusters in multiple regions, let alone at multiple providers. Enterprise customers want to run distributed workloads in private and public clouds. Even in the case of on-premise baremetal deployment, people don’t build a single huge cluster for the whole company. Instead, they separate resources into QA/testing/production, application-specific, or team-specific clusters, which often causes complications with existing solutions. For example, OpenShift manages a single Kubernetes cluster instance. One of our customers wound up with an official design in which they planned to run 5 independent OpenShift instances without central visibility or any way to manage deployment. Another good example is CoreOS Tectonic, which provides a great UI for RBAC management and cluster workloads, but has the same problem: it only manages a single cluster, and as I said, nobody stays with a single cluster.
“My k8s cluster is better than yours” syndrome: In the OpenStack world, where we originally came from, we’re used to complexity. OpenStack was very complex, and Mirantis was very successful because we could install it most quickly, easily, and correctly. Contrast this with the current Kubernetes world: with multiple vendors, it is very difficult to differentiate on k8s installation. My view is that k8s provisioning is a commodity with very low added value, which makes k8s installation more of a vendor checkbox feature than a decision-making point or unique capability. At the moment, however, let me borrow my favourite statement from a Kubernetes community leader: “Yes, there are a lot of k8s installers, but very few deploy k8s 100% correctly.”
Moreover, all public cloud providers will eventually offer their own managed k8s offering, which will put various k8s SaaS providers out of business. After all, there is no point paying for managed k8s on AWS to a third-party company if AWS provides EKS.
Visibility & Audit: Lastly, but most importantly, deployment is just the beginning. Users need visibility, with information on what runs where and in what setup. It’s not just about central monitoring, logging and alerting; it’s also about audit. Users need audit features such as “all docker images used in all k8s clusters” or “versions of all k8s binaries”. Today, if you do find such a tool, it usually has gaps at the multi-cluster level, providing information only for a single cluster.
To summarize, I don’t currently see any existing Kubernetes tool that provides all of those features.
KQueen as Open Cluster Manager
Based on all of these points, we at Mirantis decided to build a provisioner-agnostic Kubernetes cluster manager to deploy, manage and operate various Kubernetes clusters on various public/private cloud providers. Internally, we have called the project KQueen, and it follows several design principles:

Kubernetes as a Service environment deployment: Provide a multi-tenant self-service portal for k8s cluster provisioning.
Operations: Focus on the audit, visibility, and security of Kubernetes clusters, in addition to actual operations.
Update and Upgrade: Automate updating and upgrading of clusters through specific provisioners.

Multi-Cloud Orchestration: Support the same abstraction layer for any public, private, or bare metal provider.
Platform Agnostic Deployment (of any Kubernetes cluster): Enable provisioning of a Kubernetes cluster by various community installers/provisioners, including those with customizations, rather than a black box with a strict installation procedure.
Open, Zero Lock-in Deployment: Provide a pure-play open source solution without any closed source.

 

Easy integration: Provide a documented REST API for managing Kubernetes clusters and integrating this management interface into existing systems.

We have one central backend service called queen. This service listens for user requests (via the API) and can orchestrate and operate clusters.
KQueen supplies the backend API for provider-agnostic cluster management. It enables access from the UI, CLI, or API, and manages provisioning of Kubernetes clusters. It uses the following workflow:

Trigger deployment on the provisioner, enabling KQueen to use various provisioners (AKS, GKE, Jenkins) for Kubernetes clusters. For example, you can use the Jenkins provisioner to trigger installation of Kubernetes based on a particular job.
The provisioner installs the Kubernetes cluster using the specific provider.
The provisioner returns the Kubernetes kubeconfig and API endpoint. This config is stored in the KQueen backend (etcd).
KQueen manages, operates, monitors, and audits the Kubernetes clusters. It reads all information from the API and displays it as a simple overview visualization. KQueen can also be extended by adding other audit components.
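The workflow above can be sketched as a provisioner-agnostic interface. Everything here (class and method names, the in-memory dict standing in for the etcd backend) is hypothetical and only illustrates the shape of the workflow, not KQueen's actual code:

```python
# Hypothetical sketch of a provisioner-agnostic cluster manager in the
# spirit of the KQueen workflow above; not the project's actual API.

class Provisioner:
    """Anything that can create a cluster and hand back its access info."""
    def provision(self, name):
        raise NotImplementedError

class JenkinsProvisioner(Provisioner):
    """Step 1-2: trigger a deployment job for the cluster, then return
    its API endpoint and kubeconfig (faked here)."""
    def provision(self, name):
        return {"endpoint": f"https://{name}.example.com:6443",
                "kubeconfig": f"kubeconfig-for-{name}"}

class ClusterManager:
    def __init__(self):
        self.store = {}  # stand-in for the etcd backend

    def create(self, name, provisioner):
        info = provisioner.provision(name)   # steps 1-2: deploy the cluster
        self.store[name] = info              # step 3: persist the config
        return info

    def overview(self):
        # Step 4: read everything back for audit/visibility
        return {name: info["endpoint"] for name, info in self.store.items()}

mgr = ClusterManager()
mgr.create("qa", JenkinsProvisioner())
```

Because the manager only depends on the `provision()` contract, swapping Jenkins for an AKS or GKE provisioner changes nothing in the manager itself, which is the point of the abstraction layer described above.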

KQueen in action
The KQueen project can help define enterprise-scale Kubernetes offerings across departments and give them freedom for specific customizations. If you’d like to see it in action, you can watch a generic KQueen demo showing the architecture design and management of clusters from a single place, as well as a demo based on Azure AKS. In addition, watch this space for a tutorial on how to set up and use KQueen for yourself. We’d love your feedback!
Source: Mirantis

[Podcast] PodCTL #21 – Effective RBAC for Kubernetes

One of the strongest signals we heard coming out of KubeCon was the breadth of “Enterprise” companies deploying Kubernetes into production. As more containerized applications are placed in secure, often regulated, environments, having proper authorization is a critical element of providing defense-in-depth. In this week’s show, we looked at the “Effective RBAC” talk from KubeCon […]
Source: OpenShift

The Intelligent Delivery Manifesto

The post The Intelligent Delivery Manifesto appeared first on Mirantis | Pure Play Open Cloud.


It sat there in his inbox, staring at him.
Carl Delacour looked at the email from BigCo’s public cloud provider, Ganges Web Services. He knew he’d have to open it sooner or later.
It wasn’t as if there would be any surprises in it — or at least, he hoped not. For the last several months he’d been watching BigCo’s monthly cloud bills rising, seemingly with no end in sight. He’d only gotten through 2017 by re-adjusting budget priorities, and he knew he couldn’t spend another year like this.
He opened Slack and pinged Adam Pantera. “Got a sec?”
A moment later a notification popped up on his screen.  “For you, boss?  Always.”
“What’s it going to take,” Carl typed, “for us to bring our cloud workloads back on premise?”
There was a pause.
A long pause.
Such a long pause, in fact, that Carl wondered if Adam had wandered away from the keyboard.  “YT?”
“Yeah, I’m here,” he saw.  “I’m just … I don’t think we can do that the way everything is structured.  We built all of our automation on the provider API. It’d take months, at best, maybe a year.”
Carl felt a cold lump in the center of his chest as the reality of the situation sank in. It wasn’t just the GWS bill that was adding up in his head; the new year would bring new regulatory constraints as well.   It was his job to deal with this sort of thing, and he didn’t seem to have any options. These workloads were critical to BigCo’s daily business. He couldn’t just turn them off, but he couldn’t let things go on as they were, either, without serious consequences.  “Isn’t this stuff supposed to be cloud native?” he asked.
“It IS cloud native,” Adam replied. “But it’s all built for our current cloud provider. If you want us to be able to move between clouds, we’ll have to restructure for a multi-cloud environment.”
Carl’s mouse hovered over the monthly cloud bill, his finger suddenly stabbing the button and opening the document.
“DO IT,” he told Adam.

Carl wasn’t being unreasonable. He should be able to move workloads between clouds. He should also be able to make changes to the overall infrastructure. And he should be able to do it all without causing a blip in the reliability of the system.
Fortunately, it can be done. We’re calling it Intelligent Delivery, and it’s time to talk about what that’s going to take.
Intelligent Delivery is a way to combine technologies that already exist into an architecture that gives you the freedom to move workloads around without fear of lock-in, confidence in the stability of your applications and infrastructure, and ultimate control over all of your resources and cost structures.
It’s the next step beyond Continuous Delivery, but applied to both applications and the infrastructure they run on.
How do we get to Intelligent Delivery?
Providing someone like Carl with the flexibility he needs involves two steps: 1) making software deployment smarter and applying those same smarts to the infrastructure itself, and 2) building in monitoring that ensures nothing relevant escapes your notice.
Making software deployment as intelligent as possible
It’s true that software deployment is much more efficient than it used to be, from CI/CD environments to container orchestration platforms such as Kubernetes. But we still have a long way to go to make it as efficient as it could be. We are just beginning to move into the multi-cloud age; we need to get to the point where the actual cloud on which the software is deployed is irrelevant not only to us, but also to the application.
The deployment process should be able to choose the best of all possible environments based on performance, location, cost, or other factors. And who chooses those factors? Sometimes it will be the developer, sometimes the user. Intelligent Delivery needs to be flexible enough to make either option possible.
For now, applications can run on public or private clouds. In the future, the choices may include spare capacity literally anywhere, from servers or virtual machines in your datacenter to wearable devices halfway around the world; you should be able to decide how to implement this scalability.
We already have built-in schedulers that make rudimentary choices in orchestrators such as Kubernetes, but there’s nothing stopping us from building applications and clouds that use complex artificial intelligence or machine-learning routines to take advantage of patterns we can’t see.
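To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of placement decision: candidate clouds are scored on weighted factors (cost, latency, reliability) and the workload goes to the highest scorer. Every name, number, and weight here is a made-up assumption, not a real scheduler's API.

```python
# Hypothetical placement scorer. All field names and figures are
# illustrative assumptions, not any real scheduler's interface.

def score(cloud, weights):
    # Lower cost and latency are better, so invert them; higher
    # uptime is better as-is.
    return (weights["cost"] * (1 / cloud["cost_per_hour"])
            + weights["latency"] * (1 / cloud["latency_ms"])
            + weights["reliability"] * cloud["uptime"])

def pick_target(clouds, weights):
    # Choose the candidate with the highest weighted score.
    return max(clouds, key=lambda c: score(c, weights))

clouds = [
    {"name": "public-a", "cost_per_hour": 0.40, "latency_ms": 20, "uptime": 0.999},
    {"name": "on-prem",  "cost_per_hour": 0.15, "latency_ms": 5,  "uptime": 0.995},
]

# A cost-conscious operator weights cost heavily; a latency-sensitive
# application would shift the weights instead.
best = pick_target(clouds, {"cost": 1.0, "latency": 0.1, "reliability": 1.0})
print(best["name"])  # → on-prem
```

The point is not the arithmetic but who supplies the weights: sometimes the developer, sometimes the user, and eventually a learned model.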
Taking Infrastructure as Code to its logical conclusion

Adam got up and headed to the break room for some chocolate, pinching the bridge of his nose. Truth be told, Carl's command wasn't a surprise. He'd been worried that this day would come since they'd begun building their products on the public cloud. But they had complex orchestration requirements, and it had been only natural to play to the strengths of the GWS API.
Now Adam had to find a way to try and shunt some of those workloads back to their on-premises systems. But could those systems handle it? Only one way to find out.
He took a deep breath and headed for Bernice Gordon’s desk, rounding the corner into her domain. Bernie sat, as she usually did, in a rolling chair, dancing between monitors as she checked logs and tweaked systems, catching tickets as they came in.
“What?” she said, as he entered her space.
“And hello to you, too,” Adam said, smiling.
Bernie didn’t look up.  “Cory is out sick and Dan is on paternity leave, so I’m a little busy.  What do you need, and why haven’t you filed a ticket?”
“I have a question.  Carl wants to repatriate some of our workloads from the cloud.”
Bernie stopped cold and spun around to face him. He could have sworn her glare burned right through his forehead. “And how are we supposed to do that with our current load?”
“That’s why I’m here,” he said. “Can we do it?”
She was quiet for a moment. “You know what?” She turned back to her screens, clicking furiously at a network schema until a red box filled half the screen. “You want to add additional workloads, you’ve got to fix this VNF I’ve been nagging you about to get rid of that memory leak.”
He grimaced.  The fact was that he’d fixed it weeks ago. “I did, I just haven’t been able to get it certified. Ticket IT-48829, requesting a staging environment.”
Her fingers flew over the keyboard for a moment. “And it’s in progress.  But there are three certifications ahead of you.” She checked another screen.  “I’m going to bump you up the list. We can get you in a week from tomorrow.”

So far we’ve been talking about orchestrating workloads, but there’s one piece of the puzzle that has, until now, been missing: with Infrastructure as Code, the infrastructure IS a workload; all of the intelligence we apply to deploying applications applies to the infrastructure itself.
We have long since passed the point where one person like Bernie, or even a team of operators, could manually deploy servers and keep track of what's going on within an enterprise infrastructure environment. That's why we have Infrastructure as Code, where traditional hardware configuration such as servers and networking is handled not by a person typing commands at a command line, but by configuration management tools such as Puppet, Chef, and Salt.
That means that when someone like Bernie is tasked with certifying a new piece of software, instead of scrambling, she can create a test environment that's not just similar to the production environment but absolutely identical, so she knows that once the software is promoted to production, it'll behave as it did in the testing phase.
Unfortunately, while organizations use these capabilities in the ways you'd expect, enabling version control and even creating DevOps environments where developers can take some of the load off operators, for the most part these are fairly static deployments.
By treating infrastructure definitions more like actual software and adding more intelligence, we can get far more out of the infrastructure environment, from predicting bad deployments to improving efficiency to enabling self-healing.
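The heart of that "infrastructure as a workload" idea is a reconciliation loop: compare the state you declared in version control with the state actually running, and compute the actions needed to converge them. The sketch below is a toy illustration of that pattern under invented data; it is not the API of Puppet, Chef, Salt, or any real tool.

```python
# Toy reconciliation sketch. The resource names and counts are
# invented; real tools track far richer state than instance counts.

desired = {"web": 3, "db": 2}               # declared in version-controlled config
actual = {"web": 2, "db": 2, "cache": 1}    # observed in the running environment

def plan(desired, actual):
    """Compute the create/destroy actions that converge actual toward desired."""
    actions = []
    for name, count in desired.items():
        diff = count - actual.get(name, 0)
        if diff > 0:
            actions.append(("create", name, diff))
        elif diff < 0:
            actions.append(("destroy", name, -diff))
    for name in actual:
        if name not in desired:
            # Anything running but not declared gets removed.
            actions.append(("destroy", name, actual[name]))
    return actions

print(plan(desired, actual))  # → [('create', 'web', 1), ('destroy', 'cache', 1)]
```

Because the plan is computed rather than typed, the same declaration can stamp out Bernie's identical staging environment on demand and tear it down afterward.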
Coherent and comprehensive monitoring

Bernie Gordon quietly closed her bedroom door; regression and performance testing on the new version of Adam's VNF had gone well, but had taken much longer than expected. Now it was after midnight as she got ready for bed, and something was still bothering her about the cutover to production. Nothing she could put her finger on, but she was worried.
Her husband snored quietly and she gave him a gentle kiss before turning out the light.
Then the text came in. She grabbed her phone and pushed the first button her fingers found to cut off the sound so it wouldn’t wake Frank, but she already knew what the text would tell her.
The production system was failing.
Before she could even get her laptop out of her bag to check on it, her phone rang.  Carl’s avatar stared up at her from the screen.
Frank shot upright. “Who died?” he asked, heart racing and eyes wide.
“Nobody,” she said. “Yet. Go back to sleep.”  She answered the call.  “I got the text and I’m on my way back in,” she said without waiting.

With Intelligent Delivery, nobody should be getting woken up in the middle of the night, because with sufficient monitoring and analysis of that monitoring, the system should be able to predict most issues before they turn into problems.
Knowing how fast a disk is filling up is easy.  Knowing whether a particular traffic pattern shows a cyberattack is more complicated. In both cases, though, an Intelligent Delivery system should be able to either recommend actions to prevent problems, or even take action autonomously.
What’s more, monitoring is about more than just preventing problems; it can provide the intelligence you need to optimize workload placement, and can even feed back into your business to provide you with insights you didn’t know you were missing.
Intelligent Delivery requires comprehensive, coherent monitoring in order to provide a complete picture.
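Take the "disk filling up" case from above: even naive linear extrapolation of a usage trend can estimate when a volume will be full, so an alert, or an automated remediation, fires days before the outage. This is a minimal sketch with made-up numbers, standing in for what a real monitoring stack would do with much richer data.

```python
# Minimal predictive-monitoring sketch: extrapolate disk usage
# linearly to estimate hours until the volume is full. Figures are
# illustrative assumptions.

def hours_until_full(samples, capacity_gb):
    """samples: list of (hour, used_gb) pairs, assumed roughly linear."""
    (t0, u0), (t1, u1) = samples[0], samples[-1]
    rate = (u1 - u0) / (t1 - t0)   # GB per hour
    if rate <= 0:
        return None                # usage flat or shrinking; nothing to predict
    return (capacity_gb - u1) / rate

# Usage grew from 700 GB to 760 GB over 12 hours on a 1 TB volume:
eta = hours_until_full([(0, 700), (12, 760)], 1000)
print(round(eta))  # → 48
```

Detecting a cyberattack in a traffic pattern needs far more sophistication than a straight line, but the principle is the same: continuous measurement feeding a model that acts before the failure.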
Of course, Intelligent Delivery isn’t something we can do overnight. The benefits are substantial, but so are the requirements.
What does Intelligent Delivery involve?
Intelligent Delivery, when done right, has the following advantages and requirements:

Defined architecture: You must always be able to analyze and duplicate your infrastructure at a moment’s notice. You can accomplish this using declarative infrastructure and Infrastructure as Code.
Flexible but controllable infrastructure: By defining your infrastructure, you get the ability to define how and where your workloads run. This makes it possible for you to opportunistically consume resources, moving your workloads to the most appropriate hardware — or the most cost-effective — at a moment’s notice.
Intelligent oversight: It’s impossible to keep up with everything that affects an infrastructure, from operational issues to changing costs to cyberattacks. Your infrastructure must be intelligent enough to adapt to changing conditions while still providing visibility and control.
Secure footing: Finally, Intelligent Delivery means that infrastructure stays secure using a combination of these capabilities:

Defined architecture enables you to constantly consume the most up-to-date operating system and application images without losing control of updates.
Flexible but controllable infrastructure enables you to immediately move workloads out of problem areas.
Intelligent oversight enables you to detect threats before they become problems.

What technologies do we need for Intelligent Delivery?
All of the technologies we need for Intelligent Delivery already exist; we just need to start putting them together in such a way that they do what we need.  Let’s take a good hard look at the technologies involved:

Virtualization and containers:

Of course the first step in cloud is some sort of virtualization, whether that consists of virtual machines provided by OpenStack or VMware, or containers and orchestration provided by Docker and/or Kubernetes.

Multi-cloud:

Intelligent Delivery requires the ability to move workloads between clouds, not just to prevent vendor lock-in but also to increase robustness. These clouds will typically consist of OpenStack or Kubernetes nodes, usually with federation, which enables multiple clusters to appear as one to an application.

Infrastructure as Code:

In order for Intelligent Delivery to be feasible, you must deploy servers, networks, and other infrastructure using a repeatable process. Infrastructure as Code makes it possible to not only audit the system but also to reliably, repeatedly perform the necessary deployment actions so you can duplicate your environment when necessary.

Continuous Delivery tools:

CI/CD is not a new concept; Jenkins pipelines are well understood, and now software such as the Spinnaker project is making it more accessible, as well as more powerful.

Monitoring:

In order for a system to be intelligent, it needs to know what's going on in the environment, and the only way for that to happen is to have extensive monitoring, using tools such as Prometheus for metrics collection and Grafana for visualization, that can feed data into the algorithms used to determine scheduling and predict issues.

Microservices:

To truly take advantage of a cloud-native environment, applications should use a microservices architecture, which decomposes functions into individual units you can deploy in different locations and call over the network.

Service orchestration:

A number of technologies are emerging to handle the orchestration of services and service requests. These range from service mesh capabilities in projects such as Istio, to the Open Service Broker API for brokering requests, to the Open Policy Agent for helping determine where a request should, or even can, go. Some projects, such as Grafeas, are trying to standardize this process.

Serverless:

Even as containers seemingly trounce virtual machines (though that's up for debate), so-called serverless technology lets an application make a request without knowing or caring where the service actually resides. As infrastructure becomes increasingly provider-agnostic, this will become a more important technology.

Network Functions Virtualization:

Although NFV is today confined mostly to telcos and other communication service providers, it can provide the kind of control and flexibility the Intelligent Delivery environment requires.

IoT:

As software gets broken down into smaller and smaller pieces, physical components can take on a larger role; for example, rather than having a sensor take readings and send them to a server that then feeds them to a service, the device can become an integral part of the application infrastructure, communicating directly where possible.

Machine Learning and AI:

Eventually we will build the infrastructure to the point where we’ve made it as efficient as we can, and we can start to add additional intelligence by applying machine learning. For example, machine learning and other AI techniques can predict hardware or software failures based on event streams so they can be prevented, or they can choose optimal workload placement based on traffic patterns.
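As a toy illustration of that failure-prediction idea: even something as simple as an exponentially weighted moving average over an error-rate stream can flag a degrading service before hard failures start. The thresholds and data below are invented; a real system would use trained models over far richer event streams.

```python
# Toy anomaly flag over an event stream: smooth the error rate with an
# exponentially weighted moving average (EWMA) and alert when it
# crosses a threshold. Alpha, threshold, and data are assumptions.

def ewma_alerts(error_rates, alpha=0.5, threshold=0.2):
    """Return the indices of samples where the smoothed rate exceeds threshold."""
    avg, alerts = 0.0, []
    for i, rate in enumerate(error_rates):
        avg = alpha * rate + (1 - alpha) * avg   # EWMA update
        if avg > threshold:
            alerts.append(i)
    return alerts

# An error rate creeping upward across sampling intervals:
print(ewma_alerts([0.01, 0.02, 0.05, 0.15, 0.30, 0.60]))  # → [5]
```

The smoothing trades a little detection latency for immunity to one-off spikes; an ML model plays the same role with many more signals.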

Carl glanced at the collection of public cloud bills in his inbox. Taken together, he knew, they were a fraction of what BigCo had been paying when it had been locked into GWS. More than that, though, he knew he had options, and he liked that.
He looked through the glass wall of his office. Off in the corner he could see Bernie. She was still a bundle of activity — you couldn’t slow her down — but she seemed more relaxed these days, and happier as she worked on new plans for what their infrastructure could do going forward instead of just keeping on top of tickets all day.
On the other side of the floor, Adam and his team stared intently at a single monitor. They held that pose for a moment, then cheered.
A Slack notification popped up on his monitor. “The new service is certified, live, and ready for customers,” Adam told him, “and one day before GoldCo even announces theirs.”
Carl smiled. “Good job,” he replied, and started on plans for next quarter.

The post The Intelligent Delivery Manifesto appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis