KQueen: The Open Source Multi-cloud Kubernetes Cluster Manager

Kubernetes got a lot of traction in 2017, and I’m one of those people who believe that within two years the Kubernetes API could become the standard interface for consuming infrastructure. In other words, the same way that today we get an IP address and ssh key for our virtual machine on AWS, we might get a Kubernetes API endpoint and kubeconfig for our cluster. With the recent AWS announcement of EKS bringing this reality even closer, let me give you my perspective on k8s installation and the trends we see at Mirantis.
Existing tools
Recently I did some personal research, and I discovered the following numbers around the Kubernetes community:

~22 k8s distribution support providers
~10 k8s community deployment tools
~20 CaaS vendors

There are a lot of companies that provide Kubernetes installation and management, including Stackpoint.io, Kubermatic (Loodse), AppsCode, Giant Swarm, Huawei Container Engine, CloudStack Container Service, Eldarion Cloud (Eldarion), Google Container Engine, Hasura Platform, Hypernetes (HyperHQ), KCluster, VMware Photon Platform, OpenShift (Red Hat), Platform9 Managed Kubernetes, and so on.
All of those vendor solutions focus more or less on “their way” of k8s cluster deployment, which usually means a specific deployment procedure of defined packages, binaries, and artifacts. Moreover, while some of these installers are available as open source packages, they’re not intended to be modified, and when delivered by a vendor, there’s often no opportunity for customer-centric modification.
There are reasons, however, why this approach is not enough for the enterprise customer use case. Let me go through them.
Customization: Our real Kubernetes deployments and operations have demonstrated that we cannot just deploy a cluster on a custom OS with binaries. Enterprise customers have various security policies, configuration management databases, and specific tools, all of which are required to be installed on the OS. A very good example of this situation comes from a customer in the financial sector. The first time they started their golden OS image at AWS, it took 45 minutes to boot. This makes it impossible for some customers to run the native managed k8s offerings of public cloud providers.
Multi-cloud: Most existing vendors don’t solve the question of how to manage clusters in multiple regions, let alone at multiple providers. Enterprise customers want to run distributed workloads in private and public clouds. Even in the case of on-premise baremetal deployment, people don’t build a single huge cluster for the whole company. Instead, they separate resources into QA/testing/production, application-specific, or team-specific clusters, which often causes complications with existing solutions. For example, OpenShift manages a single Kubernetes cluster instance. One of our customers wound up with an official design that called for running 5 independent OpenShift instances without central visibility or any way to manage deployment. Another good example is CoreOS Tectonic, which provides a great UI for RBAC management and cluster workloads, but has the same problem: it only manages a single cluster, and as I said, nobody stays with a single cluster.
Most existing vendors do not solve the question of how to manage clusters in multiple locations.
“My k8s cluster is better than yours” syndrome: In the OpenStack world, where we originally came from, we’re used to complexity. OpenStack was very complex, and Mirantis was very successful because we could install it the most quickly, easily, and correctly. Contrast this with the current Kubernetes world: with multiple vendors, it is very difficult to differentiate on k8s installation. My view is that k8s provisioning is a commodity with very low added value, which makes k8s installation more of a vendor checkbox feature than a decision-making point or unique capability. On that note, let me borrow my favourite statement from a Kubernetes community leader: “Yes, there are lot of k8s installers, but very few deploy k8s 100% correctly.”
Moreover, all public cloud providers will eventually offer their own managed k8s offering, which will put various k8s SaaS providers out of business. After all, there is no point in paying a third-party company for managed k8s on AWS if AWS provides EKS.
K8s provisioning is a commodity, with very low added value.
Visibility & Audit: Lastly, but most importantly, deployment is just the beginning. Users need to have visibility, with information on what runs where and in what setup. It’s not just about central monitoring, logging, and alerting; it’s also about audit. Users need audit features such as “all docker images used in all k8s clusters” or “versions of all k8s binaries”. Today, if you do find such a tool, it usually has gaps at the multi-cluster level, providing information only for single clusters.
To summarize, I don’t currently see any existing Kubernetes tool that provides all of those features.
KQueen as Open Cluster Manager
Based on all of these points, we at Mirantis decided to build a provisioner-agnostic Kubernetes cluster manager to deploy, manage, and operate various Kubernetes clusters on various public/private cloud providers. Internally, we have called the project KQueen, and it follows several design principles:

Kubernetes as a Service environment deployment: Provide a multi-tenant self-service portal for k8s cluster provisioning.
Operations: Focus on the audit, visibility, and security of Kubernetes clusters, in addition to actual operations.
Update and Upgrade: Automate updating and upgrading of clusters through specific provisioners.
Multi-Cloud Orchestration: Support the same abstraction layer for any public, private, or bare metal provider.
Platform Agnostic Deployment (of any Kubernetes cluster): Enable provisioning of a Kubernetes cluster by various community installers/provisioners, including those with customizations, rather than a black box with a strict installation procedure.
Open, Zero Lock-in Deployment: Provide a pure-play open source solution without any closed source.
Easy integration: Provide a documented REST API for managing Kubernetes clusters and integrating this management interface into existing systems.

We have one central backend service called queen. This service listens for user requests (via the API) and can orchestrate and operate clusters.
KQueen supplies the backend API for provider-agnostic cluster management. It enables access from the UI, CLI, or API, and manages provisioning of Kubernetes clusters. It uses the following workflow:

Trigger deployment on the provisioner, enabling KQueen to use various provisioners (AKS, GKE, Jenkins) for Kubernetes clusters. For example, you can use the Jenkins provisioner to trigger installation of Kubernetes based on a particular job.
The provisioner installs the Kubernetes cluster using the specific provider.
The provisioner returns the Kubernetes kubeconfig and API endpoint. This config is stored in the KQueen backend (etcd).
KQueen manages, operates, monitors, and audits the Kubernetes clusters. It reads all information from the API and displays it as a simple overview visualization. KQueen can also be extended by adding other audit components.
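Since everything goes through the documented REST API, driving this workflow from an existing system can be a plain HTTP call. Below is a minimal Ruby sketch of what triggering a cluster deployment might look like; the host, port, endpoint path, and payload fields are illustrative assumptions, not taken from the KQueen API reference.

require 'net/http'
require 'json'
require 'uri'

# Hypothetical KQueen endpoint; the real path and payload fields may differ.
uri = URI('http://kqueen.example.com:5000/api/v1/clusters')

request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request.body = {
  name:        'demo-cluster',
  provisioner: 'jenkins'  # or 'aks', 'gke', per the workflow above
}.to_json

response = Net::HTTP.start(uri.hostname, uri.port) do |http|
  http.request(request)
end

# On success, KQueen would track the cluster and store its kubeconfig in etcd.
puts response.code
puts response.body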

KQueen in action
The KQueen project can help define enterprise-scale Kubernetes offerings across departments and give them freedom in specific customizations. If you’d like to see it in action, you can see a generic KQueen demo showing the architecture design and managing a cluster from a single place, as well as a demo based on Azure AKS. In addition, watch this space for a tutorial on how to set up and use KQueen for yourself. We’d love your feedback!
The post KQueen: The Open Source Multi-cloud Kubernetes Cluster Manager appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

[Podcast] PodCTL #21 – Effective RBAC for Kubernetes

One of the strongest signals we heard coming out of KubeCon was the breadth of “Enterprise” companies deploying Kubernetes into production. As more containerized applications are placed in secure, often regulated, environments, having proper authorization is a critical element of providing defense-in-depth. In this week’s show, we looked at the “Effective RBAC” talk from KubeCon […]
Source: OpenShift

The Intelligent Delivery Manifesto



It sat there in his inbox, staring at him.
Carl Delacour looked at the email from BigCo’s public cloud provider, Ganges Web Services. He knew he’d have to open it sooner or later.
It wasn’t as if there would be any surprises in it — or at least, he hoped not. For the last several months he’d been watching BigCo’s monthly cloud bills rising, seemingly with no end in sight. He’d only gotten through 2017 by re-adjusting budget priorities, and he knew he couldn’t spend another year like this.
He opened Slack and pinged Adam Pantera. “Got a sec?”
A moment later a notification popped up on his screen.  “For you, boss?  Always.”
“What’s it going to take,” Carl typed, “for us to bring our cloud workloads back on premise?”
There was a pause.
A long pause.
Such a long pause, in fact, that Carl wondered if Adam had wandered away from the keyboard.  “YT?”
“Yeah, I’m here,” he saw.  “I’m just … I don’t think we can do that the way everything is structured.  We built all of our automation on the provider API. It’d take months, at best, maybe a year.”
Carl felt a cold lump in the center of his chest as the reality of the situation sank in. It wasn’t just the GWS bill that was adding up in his head; the new year would bring new regulatory constraints as well.   It was his job to deal with this sort of thing, and he didn’t seem to have any options. These workloads were critical to BigCo’s daily business. He couldn’t just turn them off, but he couldn’t let things go on as they were, either, without serious consequences.  “Isn’t this stuff supposed to be cloud native?” he asked.
“It IS cloud native,” Adam replied. “But it’s all built for our current cloud provider. If you want us to be able to move between clouds, we’ll have to restructure for a multi-cloud environment.”
Carl’s mouse hovered over the monthly cloud bill, his finger suddenly stabbing the button and opening the document.
“DO IT,” he told Adam.

Carl wasn’t being unreasonable. He should be able to move workloads between clouds. He should also be able to make changes to the overall infrastructure. And he should be able to do it all without causing a blip in the reliability of the system.
Fortunately, it can be done. We’re calling it Intelligent Delivery, and it’s time to talk about what that’s going to take.
Intelligent Delivery is a way to combine technologies that already exist into an architecture that gives you the freedom to move workloads around without fear of lock-in, the confidence that stability of your applications and infrastructure isn’t in doubt, and ultimate control over all of your resources and cost structures.
It’s the next step beyond Continuous Delivery, but applied to both applications and the infrastructure they run on.
How do we get to Intelligent Delivery?
Providing someone like Carl with the flexibility he needs involves two steps: 1) making software deployment smarter and using those smarts to help the actual infrastructure, and 2) building in monitoring that ensures nothing relevant escapes your notice.
Making software deployment as intelligent as possible
It’s true that software deployment is much more efficient than it used to be, from CI/CD environments to container orchestration platforms such as Kubernetes. But we still have a long way to go to make it as efficient as it could be. We are just beginning to move into the multi-cloud age; we need to get to the point where the actual cloud on which the software is deployed is irrelevant not only to us, but also to the application.
The deployment process should be able to choose the best of all possible environments based on performance, location, cost, or other factors. And who chooses those factors? Sometimes it will be the developer, sometimes the user. Intelligent Delivery needs to be flexible enough to make either option possible.
For now, applications can run on public or private clouds. In the future, these choices may include spare capacity literally anywhere, from servers or virtual machines in your datacenter to wearable devices with spare capacity halfway around the world — you should be able to decide how to implement this scalability.
We already have built-in schedulers that make rudimentary choices in orchestrators such as Kubernetes, but there’s nothing stopping us from building applications and clouds that use complex artificial intelligence or machine-learning routines to take advantage of patterns we can’t see.
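As a toy illustration only, a placement decision of this kind might reduce to scoring candidate environments against weighted factors. The clouds, prices, latencies, and weights in this Ruby sketch are invented, and a real scheduler weighs far more signals.

# Toy placement scorer: pick the "best" environment from weighted factors.
# All values here are invented for illustration.
Cloud = Struct.new(:name, :cost_per_hour, :latency_ms)

candidates = [
  Cloud.new('public-east', 0.42, 18),
  Cloud.new('on-prem',     0.15, 35),
  Cloud.new('public-west', 0.38, 60)
]

# Lower score is better. The weights are exactly the knobs that the
# developer, or the user, would tune depending on who gets to choose.
cost_weight, latency_weight = 0.7, 0.3

best = candidates.min_by do |c|
  cost_weight * (c.cost_per_hour * 100) + latency_weight * c.latency_ms
end

puts "Schedule the workload on #{best.name}"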
Taking Infrastructure as Code to its logical conclusion

Adam got up and headed to the break room for some chocolate, pinching the bridge of his nose. Truth be told, Carl’s command wasn’t a surprise. He’d been worried that this day would come since they’d begun building their products on the public cloud. But they had complex orchestration requirements, and it had been only natural for them to play to the strengths of the GWS API.
Now Adam had to find a way to try and shunt some of those workloads back to their on-premises systems. But could those systems handle it? Only one way to find out.
He took a deep breath and headed for Bernice Gordon’s desk, rounding the corner into her domain. Bernie sat, as she usually did, in a rolling chair, dancing between monitors as she checked logs and tweaked systems, catching tickets as they came in.
“What?” she said, as he broached her space.
“And hello to you, too,” Adam said, smiling.
Bernie didn’t look up.  “Cory is out sick and Dan is on paternity leave, so I’m a little busy.  What do you need, and why haven’t you filed a ticket?”
“I have a question.  Carl wants to repatriate some of our workloads from the cloud.”
Bernie stopped cold and spun around to face him. He could have sworn her glare burned right through his forehead. “And how are we supposed to do that with our current load?”
“That’s why I’m here,” he said. “Can we do it?”
She was quiet for a moment. “You know what?” She turned back to her screens, clicking furiously at a network schema until a red box filled half the screen. “You want to add additional workloads, you’ve got to fix this VNF I’ve been nagging you about to get rid of that memory leak.”
He grimaced.  The fact was that he’d fixed it weeks ago. “I did, I just haven’t been able to get it certified. Ticket IT-48829, requesting a staging environment.”
Her fingers flew over the keyboard for a moment. “And it’s in progress.  But there are three certifications ahead of you.” She checked another screen.  “I’m going to bump you up the list. We can get you in a week from tomorrow.”

So far we’ve been talking about orchestrating workloads, but there’s one piece of the puzzle that has, until now, been missing: with Infrastructure as Code, the infrastructure IS a workload; all of the intelligence we apply to deploying applications applies to the infrastructure itself.
We have long since passed the point where one person like Bernie, or even a team of operators, could manually deploy servers and keep track of what’s going on within an enterprise infrastructure environment. That’s why we have Infrastructure as Code, where traditional hardware configuration such as servers and networking is handled not by a person typing commands at a command line, but by configuration management tooling such as Puppet, Chef, and Salt.
That means that when someone like Bernie is tasked with certifying a new piece of software, instead of scrambling, she can create a test environment that’s not just similar to the production environment, it’s absolutely identical, so she knows that once the software is promoted to production, it’ll behave as it did in the testing phase.
Unfortunately, while organizations use these capabilities in the ways you’d expect, enabling version control and even creating devops environments where developers can take some of the load off operators, these are for the most part fairly static deployments.
By treating these deployments more like actual software and adding more intelligence, however, we can get a much more intelligent infrastructure environment, from predicting bad deployments to getting better efficiency to enabling self-healing.
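As a minimal sketch of the baseline idea, assuming Chef as the configuration management tool (Puppet or Salt would express the same thing in their own syntax), the recipe below defines a web tier entirely in code. Pointing the same recipe at a staging node is what gives someone like Bernie an environment identical to production.

# Minimal Chef recipe sketch: the environment is defined entirely in code,
# so production and staging are built from the identical definition.
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  owner  'root'
  mode   '0644'
  # Re-render the config whenever node attributes change, then reload.
  notifies :reload, 'service[nginx]', :delayed
end

service 'nginx' do
  action [:enable, :start]
end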
Coherent and comprehensive monitoring

Bernie Gordon quietly closed her bedroom door; regression and performance testing on the new version of Adam’s VNF had gone well, but had taken much longer than expected. Now it was after midnight as she got ready for bed, and there was something that was still bothering her about the cutover to production. Nothing she could put her finger on, but she was worried.
Her husband snored quietly and she gave him a gentle kiss before turning out the light.
Then the text came in. She grabbed her phone and pushed the first button her fingers found to cut off the sound so it wouldn’t wake Frank, but she already knew what the text would tell her.
The production system was failing.
Before she could even get her laptop out of her bag to check on it, her phone rang.  Carl’s avatar stared up at her from the screen.
Frank shot upright. “Who died?” he asked, heart racing and eyes wide.
“Nobody,” she said. “Yet. Go back to sleep.”  She answered the call.  “I got the text and I’m on my way back in,” she said without waiting.

With Intelligent Delivery, nobody should be getting woken up in the middle of the night, because with sufficient monitoring and analysis of that monitoring, the system should be able to predict most issues before they turn into problems.
Knowing how fast a disk is filling up is easy.  Knowing whether a particular traffic pattern shows a cyberattack is more complicated. In both cases, though, an Intelligent Delivery system should be able to either recommend actions to prevent problems, or even take action autonomously.
What’s more, monitoring is about more than just preventing problems; it can provide the intelligence you need to optimize workload placement, and can even feed back into your business to provide you with insights you didn’t know you were missing.
Intelligent Delivery requires comprehensive, coherent monitoring in order to provide a complete picture.
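To make the “easy” disk case above concrete, here is a toy Ruby sketch of predictive monitoring. The samples and the 24-hour threshold are invented; a real system would pull this data from its monitoring pipeline rather than a hard-coded array.

# Toy predictive check: extrapolate disk usage linearly and raise the
# alarm before the disk fills, rather than paging someone at midnight.
# Samples are [seconds_elapsed, percent_used]; values are invented.
samples = [[0, 70.0], [3600, 71.2], [7200, 72.5]]

(t0, u0), (t1, u1) = samples.first, samples.last
rate = (u1 - u0) / (t1 - t0)  # percent per second

hours_left =
  rate.positive? ? ((100.0 - u1) / rate) / 3600.0 : Float::INFINITY

puts format('Projected disk-full in ~%.1f hours', hours_left)
puts 'Recommended action: expand the volume or prune data' if hours_left < 24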
Of course, Intelligent Delivery isn’t something we can do overnight. The benefits are substantial, but so are the requirements.
What does Intelligent Delivery involve?
Intelligent Delivery, when done right, has the following advantages and requirements:

Defined architecture: You must always be able to analyze and duplicate your infrastructure at a moment’s notice. You can accomplish this using declarative infrastructure and Infrastructure as Code.
Flexible but controllable infrastructure: By defining your infrastructure, you get the ability to define how and where your workloads run. This makes it possible for you to opportunistically consume resources, moving your workloads to the most appropriate hardware — or the most cost-effective — at a moment’s notice.
Intelligent oversight: It’s impossible to keep up with everything that affects an infrastructure, from operational issues to changing costs to cyberattacks. Your infrastructure must be intelligent enough to adapt to changing conditions while still providing visibility and control.
Secure footing: Finally, Intelligent Delivery means that infrastructure stays secure using a combination of these capabilities:

Defined architecture enables you to constantly consume the most up-to-date operating system and application images without losing control of updates.
Flexible but controllable infrastructure enables you to immediately move workloads out of problem areas.
Intelligent oversight enables you to detect threats before they become problems.

What technologies do we need for Intelligent Delivery?
All of the technologies we need for Intelligent Delivery already exist; we just need to start putting them together in such a way that they do what we need.  Let’s take a good hard look at the technologies involved:

Virtualization and containers: Of course the first step in cloud is some sort of virtualization, whether that consists of virtual machines provided by OpenStack or VMware, or containers and orchestration provided by Docker and/or Kubernetes.

Multi-cloud: Intelligent Delivery requires the ability to move workloads between clouds, not just to prevent vendor lock-in but also to increase robustness. These clouds will typically consist of either OpenStack or Kubernetes nodes, usually with federation, which enables multiple clusters to appear as one to an application.

Infrastructure as Code: In order for Intelligent Delivery to be feasible, you must deploy servers, networks, and other infrastructure using a repeatable process. Infrastructure as Code makes it possible not only to audit the system but also to reliably, repeatedly perform the necessary deployment actions so you can duplicate your environment when necessary.

Continuous Delivery tools: CI/CD is not a new concept; Jenkins pipelines are well understood, and now software such as the Spinnaker project is making it more accessible, as well as more powerful.

Monitoring: In order for a system to be intelligent, it needs to know what’s going on in the environment, and the only way for that to happen is to have extensive monitoring systems such as Grafana, which can feed data into the algorithms used to determine scheduling and predict issues.

Microservices: To truly take advantage of a cloud-native environment, applications should use a microservices architecture, which decomposes functions into individual units you can deploy in different locations and call over the network.

Service orchestration: A number of technologies are emerging to handle the orchestration of services and service requests. These range from service mesh capabilities in projects such as Istio, to the Open Service Broker project to broker requests, to the Open Policy Agent project to help determine where a request should, or even can, go. Some projects, such as Grafeas, are trying to standardize this process.

Serverless: Even as containers seemingly trounce virtual machines (though that’s up for debate), so-called serverless technology makes it possible to make a request without knowing or caring where the service actually resides. As infrastructure becomes increasingly “provider agnostic” this will become a more important technology.

Network Functions Virtualization: While today NFV is confined mostly to telcos and other Communication Service Providers, it can provide the kind of control and flexibility required for the Intelligent Delivery environment.

IoT: As software gets broken down into smaller and smaller pieces, physical components can take on a larger role; for example, rather than having a sensor take readings and send them to a server that then feeds them to a service, the device can become an integral part of the application infrastructure, communicating directly where possible.

Machine Learning and AI: Eventually we will build the infrastructure to the point where we’ve made it as efficient as we can, and we can start to add additional intelligence by applying machine learning. For example, machine learning and other AI techniques can predict hardware or software failures based on event streams so they can be prevented, or they can choose optimal workload placement based on traffic patterns.

Carl glanced at the collection of public cloud bills in his inbox. Altogether, he knew, they were a fraction of what BigCo had been paying when they’d been locked into GWS. More than that, though, he knew he had options, and he liked that.
He looked through the glass wall of his office. Off in the corner he could see Bernie. She was still a bundle of activity — you couldn’t slow her down — but she seemed more relaxed these days, and happier as she worked on new plans for what their infrastructure could do going forward instead of just keeping on top of tickets all day.
On the other side of the floor, Adam and his team stared intently at a single monitor. They held that pose for a moment, then cheered.
A Slack notification popped up on his monitor.  “The new service is certified, live, and ready for customers,” Adam told him, “and one day before GoldCo even announces theirs.”
Carl smiled. “Good job,” he replied, and started on plans for next quarter.

The post The Intelligent Delivery Manifesto appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Does your data deserve a private cloud?

Your business and your data are unique. For that reason, your enterprise architecture must also be tailored to fit the exact needs of your business. When data is involved, you want choices, not tradeoffs. More importantly, you want your solutions to build upon and complement one another.
For most companies, the variety of data sources and where that data should be stored are top priorities. You can’t afford to keep the data in silos or leave certain data untouched just because of its type or where it happens to sit. You need the flexibility to access all your data and place it in the optimal location.
Typically, the solution to this problem has been the public cloud. But what if you also have sensitive data that, due to company mandates or external regulations, needs significant levels of protection? You don’t want to trust the safety of that data to anyone else, preferring instead to take control and choose the level of security for yourself. This is where on-premises solutions have often come into play.
Ultimately, you want choice in flexibility and security without making sacrifices. You want to benefit from the best of both options as well as exceptional performance. At the intersection of these needs is the private cloud. Private clouds offer flexibility, just like public clouds, but they sit behind your firewall, giving you more control. Instead of forcing a tradeoff, they deliver flexibility and security in unison, giving you more choice when it comes to your data.
IBM has embraced the concept of private clouds because we recognize the uniqueness of each data-related business opportunity. IBM Cloud Private delivers the features that many have come to expect in public cloud architectures: shared-resource efficiencies, utility computing, and flexible scalability that provide better total cost of ownership (TCO) and simplicity of deployment. Private cloud also gives companies more options in security, compliance, and infrastructure customization. In other words, IBM Cloud Private helps provide the flexibility and security you desire in concert with our other data-focused solutions.
For example, running IBM Db2 on IBM Cloud Private offers choice in terms of deployment flexibility and security without sacrifices. At its core, Db2 still maintains the performance that enterprise users have come to expect: it’s fast, always available, secure, and flexible. The built-in IBM BLU Acceleration MPP architecture supports in-memory speed to get insights to those that need it faster. Its compression technology increases performance while simultaneously reducing storage requirements and giving you the opportunity to reduce storage costs. None of those key features go away when Db2 is running on IBM Cloud Private, but enhancements are made in two areas.
Data management flexibility complemented by container technology
Db2 is extremely flexible on its own. Thanks to the common SQL engine it shares with the entire IBM family of hybrid data management offerings, you can use data of various types sitting in a multitude of on-premises and on-cloud locations. The Db2 deployment is also flexible thanks to its ability to be deployed within a container. This is where IBM Cloud Private’s additive effect comes into play. IBM Cloud Private is built with two of the most popular container technologies at its base: Kubernetes and Cloud Foundry. Deploying Db2 using these technologies opens up the ability to maximize performance and efficiency by more closely aligning usage with company needs.
IBM Cloud Private also opens up the possibility of optimizing your infrastructure costs by offering the right mix of transactional (IBM Db2) and data warehousing solutions (IBM Db2 Warehouse) that adhere to the software-defined architecture. This provides the flexibility to manage your infrastructure needs via simplified deployment and efficient ongoing management.
Built-in security bolstered by your own firewall
The built-in security features of Db2 help you deliver the protection that your customers, industry regulations, and other stakeholders demand. It begins with strong encryption capabilities included, so you can address compliance concerns. Then it goes further to provide centralized key management to heighten security and ease of use. IBM Cloud Private enhances this level of security by providing more control. Since IBM Cloud Private sits behind your firewall, it is protected by the security features you have built and integrated over the life of your company. Perhaps most importantly, you get to decide what level of security you need and adjust as necessary. The choice is in your hands.
IBM Db2 running on IBM Cloud Private delivers on the idea of choice without tradeoffs, cloud flexibility with on-premises security. Using IBM Cloud Private takes the extraordinary performance, flexibility, and security customers are accustomed to with Db2 and improves them.
To learn more about IBM Cloud Private and all the ways in which Db2 can benefit from running on that platform, read our solution brief. You can also register to attend our webinar and hear from the experts.
The post Does your data deserve a private cloud? appeared first on Cloud computing news.
Source: Thoughts on Cloud

My First Contribution to ManageIQ

In this blog post, I am going to share my experience of making my first contribution to ManageIQ, the upstream open source project for Red Hat CloudForms. The post explains how I encountered and investigated an issue, and finally fixed it, sending my first pull request to the ManageIQ repository.
Issue
When an infrastructure provider like VMware is added to CloudForms/ManageIQ, a user or admin has the option to put hosts into maintenance mode. The “Enter Maintenance mode” option is available in a dropdown list when the “Power” button is clicked on the host summary page, as shown in the image below.

[Screenshot: the “Enter Maintenance mode” option under the Power button on the host summary page]
The following image shows a host in maintenance mode in Red Hat CloudForms. The host goes into maintenance mode but never “exits” the mode when “Exit Maintenance Mode” is selected.

[Screenshot: a host in maintenance mode in Red Hat CloudForms]
As seen below, the request to exit maintenance mode was successfully initiated from the CloudForms user interface.

[Screenshot: the “Exit Maintenance Mode” request initiated from the CloudForms UI]
However, the host still remains in maintenance mode, and we can validate this state from the VMware vSphere client.

[Screenshot: the host still in maintenance mode in the VMware vSphere client]
Now that we have identified the issue, we can look at its possible causes by troubleshooting Red Hat CloudForms.
Debugging an issue
A good place to start troubleshooting is the standard log files under /var/www/miq/vmdb/ on the CloudForms appliance. Below is a short description of a few important log files:

production.log: All user interface activity, from the Operations UI as well as the Service UI, is logged here.
automation.log: As the name suggests, all automation logs are collected in this file.
policy.log: This is a good place to look for logs related to events and policies.
evm.log: This file covers automation logs as well as everything else. It can be large, and it is probably the first log file to check for error and warning messages.

As you can see below, the evm.log file contains a warning message every time an “Exit Maintenance Mode” request is initiated:
[----] W, [2017-12-20T16:32:02.557678 #2197:1090af0]  WARN -- : MIQ(ManageIQ::Providers::Vmware::InfraManager::HostEsx#exit_maint_mode) Cannot exit maintenance mode because <The Host is not powered 'on'>
The log message clearly shows that the host attempts to exit maintenance mode but fails because it is not powered on. At this point, we can ask ourselves: why is the task failing with this warning? Isn’t the host supposed to be in maintenance mode? We suspect something is not right with the logic behind this action. To dig deeper, we can look into the host.rb file available in the ManageIQ GitHub repository.

[Code excerpt: host.rb]
Looking at the logic in the host.rb file, the method enter_maint_mode() is triggered when an “Enter Maintenance Mode” request is made. This in turn validates the request using the method validate_enter_maint_mode(), which checks the power state of the host using validate_esx_host_connected_to_vc_with_power_state(). The argument passed to this method is either ‘on’ or ‘maintenance’.

[Code excerpt: enter_maint_mode() and its validation methods in host.rb]
Similar logic should be applied to the exit_maint_mode() method. However, the method calls validate_enter_maint_mode() instead of validate_exit_maint_mode(), which causes the issue. The validation fails because the host is in the ‘maintenance’ state and not the ‘on’ state, as we can see below.

[Code excerpt: exit_maint_mode() incorrectly calling validate_enter_maint_mode()]
A simple fix is to call validate_exit_maint_mode() instead of validate_enter_maint_mode() each time an “Exit Maintenance Mode” request is made. This fix should allow the host to exit maintenance mode successfully.
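Here is a simplified sketch of the change, with the method bodies reduced to comments; the actual code in ManageIQ’s app/models/host.rb is more involved, but the one-line swap of the validation call is the essence of the fix.

# Simplified sketch of app/models/host.rb; bodies are illustrative only.
def enter_maint_mode
  validate_enter_maint_mode  # host must be connected and powered 'on'
  # ... ask the provider to put the host into maintenance mode ...
end

def exit_maint_mode
  # The bug: this previously called validate_enter_maint_mode, which
  # expects the 'on' state and therefore always fails for a host that
  # is (correctly) in the 'maintenance' state.
  validate_exit_maint_mode
  # ... ask the provider to take the host out of maintenance mode ...
end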
Test
To verify our analysis, we can replace the validation method call from validate_enter_maint_mode() to validate_exit_maint_mode() and restart evmserverd on the appliance using:
systemctl restart evmserverd
This time, the host successfully exits maintenance mode.
CloudForms User Interface:

[Screenshot: the host out of maintenance mode in CloudForms]
VMware User Interface:

[Screenshot: the host out of maintenance mode in the vSphere client]
Creating a Pull Request
A “Pull Request” is a way to propose a change in code on GitHub. If you don’t have a GitHub account, you can create one at https://github.com/join. Once the account is created, we have to fork the repository by clicking the “Fork” button as shown below.

[Screenshot: the Fork button on the ManageIQ repository page]
The next step is to clone the repository to our local machine so that changes can be made. Click the “Clone or download” button to copy the HTTPS URL.

[Screenshot: the “Clone or download” button]
We can clone the repository by using the command:
git clone https://github.com/imaanpreet/manageiq.git
Once our clone is complete, we can create a new branch using:
git checkout -b validate_exit_maint_mode
Make the required changes and commit them using:
git add app/models/host.rb

git commit

Once the changes are committed, it is time to send them back as a pull request. This can be done by pushing the changes to the newly created branch:
git push origin validate_exit_maint_mode
The process to create a pull request is documented here.
Conclusion
The pull request has been merged into the manageiq repository, and the bug is being addressed. This was a great experience, and I enjoyed the process of debugging, investigating, and fixing a bug in ManageIQ. I hope sharing this experience will be useful for other readers and will encourage them to submit more pull requests.
Source: CloudForms