Mirantis Announces Latest Mirantis Cloud Platform at KubeCon Bringing Kubernetes on Premises

The release includes a large number of enhancements and new features for running Kubernetes on enterprise on-premises infrastructure

SUNNYVALE, CA – December 10, 2018 – Mirantis announced today from KubeCon North America the release of Mirantis Cloud Platform (MCP). The new release offers a number of enhancements and new features including the ability to deploy Kubernetes on premises, as well as much improved and more rigorous quality assurance in all areas of the product.

This latest version of MCP follows on the heels of the recently announced MCP Edge, which offers operators a complete software solution designed for edge cloud use cases. Highlights of the latest MCP release include OpenStack Queens, Kubernetes 1.11 and OpenContrail 4.0. It also includes upgrades and updates to the DriveTrain component upgrade pipeline, the granular OpenStack Ocata to Pike upgrade pipeline and the Kubernetes upgrade pipeline, as well as security improvements in all areas and an improved documentation experience.

“With Kubernetes becoming the de-facto infrastructure API and standard for building new applications, the need for virtual machines is devolving into a security layer for containers,” said Adrian Ionel, Mirantis co-founder and CEO. “Longer term, we believe customers will run Kubernetes on bare metal and we are looking to enable this with subsequent MCP releases.”

Over the course of the last three years, the world’s top brands have been partnering with Mirantis to build and support their Kubernetes infrastructure, including Volkswagen, Reliance Jio and AT&T.

If you are interested in receiving a live demo of the latest MCP or MCP Edge, stop by the Mirantis booth P5 at KubeCon.

About Mirantis

Mirantis is the flexible infrastructure company harnessing open source to free application owners from operations concerns. The company employs a unique build-operate-transfer approach to deliver two distinct products:

Mirantis Cloud Platform, which is based on Kubernetes and OpenStack and helps service providers and enterprises run highly tunable private clouds powered by infrastructure-as-code and based on open standards.
Mirantis Application Platform, which is based on Spinnaker and helps enterprises adopt cloud native continuous delivery to realize cloud ROI at scale.

To date, Mirantis has helped more than 200 enterprises and service providers build and operate some of the largest open clouds in the world. Its customers include iconic brands such as Adobe, AT&T, Comcast, Reliance Jio, State Farm, STC, Vodafone, Volkswagen, and Wells Fargo. Learn more at www.mirantis.com.

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com

The post Mirantis Announces Latest Mirantis Cloud Platform at KubeCon Bringing Kubernetes on Premises appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

OpenShift & Kubernetes: Where We’ve Been and Where We’re Going Part 1

As we approach the end of another year for Red Hat OpenShift and Kubernetes, and another KubeCon, which I believe will be even bigger than the last, it’s a great time to reflect on both where we’ve been and where we’re going. In this blog I will look back over the past 4+ years since […]
The post OpenShift & Kubernetes: Where We’ve Been and Where We’re Going Part 1 appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Technology company improves agility and growth with IBM Cloud

As artificial intelligence (AI), augmented reality and the Internet of Things (IoT) increasingly influence our everyday lives, the demand for sophisticated chips to support these innovations is on the rise. Nanometer-scale chip fabrication is extremely complex and requires the highest purity to eliminate even minuscule levels of contamination.
Entegris is a leader in specialty chemicals and advanced materials solutions for the microelectronics industry and other sectors that are driving these megatrends. The growing demand for our company’s services means that Entegris is expanding rapidly. We were looking to scale business systems securely, while cutting costs and boosting margins.
Entegris relies on SAP applications for core business processes. As transaction volumes increased, our reporting and analytics workload was growing – yet batch processes to prepare month-end reports required transaction systems to be halted, and backups required offline periods, causing unwelcome interruptions.
The shift to SAP HANA
To help the company continue to grow, Entegris evaluated switching to SAP HANA because of its high-performance, in-memory database technology. But the move would require a large investment in new infrastructure, and absorb internal resources for planning, deployment and management, a cost we wanted to defer or reduce.
We upgraded to SAP Business Suite powered by SAP HANA and planned to migrate the existing business processes without changing them significantly. Before making the migration, though, we decided to review our infrastructure approach. We investigated the advantages of switching to fully cloud-enabled operations and found that selecting a cloud approach could provide the scalability we needed at much lower operational costs, while still providing a secure, robust application environment.
Transitioning with help from IBM
Entegris selected IBM Cloud for SAP Applications combined with IBM Cloud Managed Services. The fully managed IBM services include operating system support, monitoring, and network management. They also provide the expert advice and support needed for enterprise-level work, while also offering the advantages of scalability, reliability and cost efficiency.
To help the transition to IBM Cloud, IBM created the virtual machines and configured the applications and databases to aid our deployment of the SAP HANA solutions. The rollout was completed in phases, starting with smaller solutions and moving on to the larger ones.
The transformation
Moving to IBM Cloud immediately improved Entegris’ business agility in three key areas:

Improved flexibility. We can now create new SAP environments for testing simply by adding cloud capacity instead of having to wait for hardware arrival and set up. We’re also better able to manage system refreshes. Before the migration, we couldn’t refresh more than one SAP instance at a time due to the limited capacity of our fixed server. By using IBM Cloud, we can quickly add capacity to complete multiple system refreshes, and then scale it back once that capacity is no longer needed.
Increased app availability. Since we can easily and quickly refresh SAP environments, Entegris can now keep applications available during month-end reporting, backups and system refreshes. As a result, we’ve seen a boost in productivity for Entegris’ around-the-clock work schedule.
Faster response times. We can also run reports using near-real-time information, because IBM Cloud enables faster SAP data load times. For example, SAP application dialog responses are now 50 percent faster. As our workload grows, we can scale up compute capacity to meet rising demand without any delay.

We chose IBM Cloud Managed Services because IBM offered a higher-performing product at a lower cost. It also offered executive commitment to the migration process, so we could take on less risk, as IBM was aligned with our business objectives.
The IBM Managed Services approach gives us the scalability, flexibility and capacity we need to make the most of the SAP Business Suite. We can now better tune our manufacturing to help us optimize the business. We’re reducing capital requirements, cutting waste and improving fulfillment, and IBM and SAP are central to our success.
Read the case study for more details.
The post Technology company improves agility and growth with IBM Cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

[Podcast] PodCTL – Kube Security, Kube 1.13 and KubeCon

Heading into the week of KubeCon, we wanted to make sure that listeners had some basics to prepare them for a week of learning and announcements. We discussed the severe Kubernetes bug (Kubernetes Privilege Escalation Flaw) and available patches, all of the new features in Kubernetes 1.13, as some previews of things to expect from […]
The post [Podcast] PodCTL – Kube Security, Kube 1.13 and KubeCon appeared first on Red Hat OpenShift Blog.
Source: OpenShift

UK software firm chooses IBM Cloud for cloud-native banking platform

Thought Machine, a software developer based in the United Kingdom, has teamed with IBM to accelerate its new cloud-native banking platform.
The platform, called Vault, “allows banks to fully realize the benefits of IBM Cloud and has been developed to be highly flexible, giving banks the ability to quickly add new products, accommodate shifts in a bank’s strategy or react to external changes in the market”, reports CloudPro.
IBM and Thought Machine are opening a global practice headquartered in London that will be staffed by existing and new consultants with banking transformation and implementation expertise.
“Vault offers the first core banking platform built in the cloud from the ground up, massively scalable and with the flexibility to create and launch new products and services in days versus months, at unprecedented levels of cost and speed,” Jesus Mantas, managing partner of IBM Global Business Services, said.
For more details, read the full story at CloudPro.
The post UK software firm chooses IBM Cloud for cloud-native banking platform appeared first on Cloud computing news.
Source: Thoughts on Cloud

Playing with REST API

Overview
In this article, I will describe how REST API works natively in Red Hat CloudForms.
REST stands for Representational State Transfer. REST is a web-standards-based architecture that uses the HTTP protocol for data communication. It revolves around resources: every component is a resource, accessed through a common interface using standard HTTP methods.
Red Hat CloudForms provides APIs to integrate external systems and initiate provisioning via CloudForms. In CloudForms, the REST API can be accessed by appending "/api" to the appliance URL:

https://<IP or hostname of appliance>/api/

How to play
In order to work with the REST API, there are various REST API client tools:

Internet browser: put a REST API call into the browser address bar.

curl: a command-line HTTP client, for example:

curl -k -u username:password -X GET -H "Accept: application/json" https://<IP or hostname of appliance>/api/

Insomnia: a powerful REST API client with cookie management, code generation and authentication support, available for Linux, Mac and Windows.

HTTP Methods
The Red Hat CloudForms API uses JSON (JavaScript Object Notation) as its data exchange format. JSON is a commonly used format for exchanging and storing data. The most commonly used HTTP verbs are GET, POST, PUT, PATCH, OPTIONS, HEAD and DELETE; GET, POST, PUT/PATCH and DELETE correspond to the read, create, update and delete (CRUD) operations respectively.
 

GET
Return a specific resource
curl -k -u user:password -X GET -H "Accept: application/json" https://<IP>/api/providers/

POST
Perform an action on the resource
curl -k --user user:password -i -X POST -H "Accept: application/json" -d '{ "type" : "ManageIQ::Providers::Redhat::InfraManager", "name" : "RHEVM Provider", "hostname" : "<hostname of provider>", "ipaddress" : "<IP>", "credentials" : { "userid" : "username", "password" : "*****" } }' https://<IP>/api/providers

PUT
Update or replace a resource
curl -k --user username:password -i -X PUT -H "Accept: application/json" -d '{ "name" : "updated service name" }' https://<IP>/api/services/<service_id>

DELETE
Delete a resource
curl -k --user user:password -i -X DELETE -H "Accept: application/json" https://<IP>/api/providers/<provider_id>

OPTIONS
Get the metadata
curl -k --user username:password -X OPTIONS -H "Accept: application/json" https://<IP>/api/providers/

HEAD
Same as GET, but transfers the status line and header section only; the server MUST NOT return a message body in the response. This method is often used for testing hypertext links for validity, accessibility and recent modification.

PATCH
Update or modify a resource
curl -k --user username:password -i -X PATCH -H "Accept: application/json" -d '[{ "action": "edit", "path": "name", "value": "A new Service name" }, { "action": "add", "path": "description", "value": "A Description for the new Service" }, { "action": "remove", "path": "display" }]' https://<IP>/api/services/<service_id>
 
Updating resources
As shown in the table above, there are two ways to update attributes of a resource: the PUT and PATCH methods. So when should you use PUT, and when PATCH?

When a client needs to replace an existing resource entirely, it can use PUT; when doing a partial update, it can use HTTP PATCH. For instance, when updating a single field of a resource, sending the complete resource representation is cumbersome and uses a lot of unnecessary bandwidth. In such cases, the semantics of PATCH make much more sense.
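To make the distinction concrete, here is a minimal Python sketch contrasting the two payload shapes. The helper functions are illustrative, not part of the CloudForms API; the PATCH body mirrors the action/path/value format shown in the method table above.

```python
# Contrast a full replace (PUT) with a partial, action-based update (PATCH).
# Hedged sketch: put_body/patch_body are hypothetical helpers, not API calls.

def put_body(resource):
    """PUT sends the complete replacement representation of the resource."""
    return resource

def patch_body(edits=None, additions=None, removals=None):
    """PATCH sends only the changes, as a list of action objects."""
    ops = [{"action": "edit", "path": p, "value": v} for p, v in (edits or {}).items()]
    ops += [{"action": "add", "path": p, "value": v} for p, v in (additions or {}).items()]
    ops += [{"action": "remove", "path": p} for p in (removals or [])]
    return ops

# Renaming a service: PUT needs the whole resource, PATCH only the delta.
full = put_body({"name": "new name", "description": "unchanged", "display": True})
delta = patch_body(edits={"name": "new name"})
print(delta)  # [{'action': 'edit', 'path': 'name', 'value': 'new name'}]
```

For a one-field change the PATCH payload stays a single small action object, regardless of how large the resource is.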
 
How to authenticate REST APIs
REST API authentication can be done in two ways:
 

Basic authentication: the simplest way to deal with authentication is HTTP basic authentication, in which the username and password credentials are passed with each HTTP request.

Token-based authentication: for multiple API calls to the appliance, this approach is recommended. The client first requests a token using the username and password; the token is then used instead of the credentials for each API call.

 
Acquiring a Token :
Request:
curl -k -u user:password -X GET -H "Accept: application/json" https://<IP>/api/auth
Response:
{"auth_token":"4cb1fb32508350796caf32c12808fee2","token_ttl":600,"expires_on":"2017-12-01T11:25:06Z"}
 
Query with Token
curl -k -i -X GET -H "Accept: application/json" -H "X-Auth-Token: <token>" https://<IP>/api/hosts

Delete a Token
curl -k -i -X DELETE -H "Accept: application/json" -H "X-Auth-Token: 21fe54dd14dc89c219d62f651497a54" https://<IP>/api/auth

Moreover, the default token lifetime is 600 seconds (10 minutes); you can change it from the CloudForms operational portal by navigating to Configuration -> Server -> Advanced -> api: -> token_ttl
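The same token flow can be sketched in Python using only the standard library. This is a hedged illustration: the helper names and structure are my own; only the /api/auth endpoint and the X-Auth-Token header come from the curl examples above.

```python
import base64
import json
import ssl
import urllib.request

def basic_auth_header(user, password):
    """Headers for the initial /api/auth request (HTTP basic auth)."""
    cred = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Accept": "application/json", "Authorization": f"Basic {cred}"}

def token_headers(token):
    """Headers for subsequent calls, using the acquired token."""
    return {"Accept": "application/json", "X-Auth-Token": token}

def get_token(base_url, user, password):
    """GET /api/auth and return auth_token (network call, equivalent of the curl above)."""
    ctx = ssl._create_unverified_context()  # mirrors curl -k; don't do this in production
    req = urllib.request.Request(f"{base_url}/api/auth",
                                 headers=basic_auth_header(user, password))
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["auth_token"]

# After acquiring a token, every call just swaps the headers:
print(token_headers("4cb1fb32508350796caf32c12808fee2"))
```

The token is requested once and reused until it expires (token_ttl), which avoids sending the credentials with every request.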

Query Specification
 
Query specification identifies the controls available when querying collections. While querying, we can specify control attributes in the GET URL as name/value pairs. There are three main techniques under query specification. Let’s take a look at them.
 
Paging: two control attributes are available, offset and limit. Offset is the first item to return and limit is the number of items to return.
Sorting: results can be sorted by specifying "sort_by=attr1,attr2" and "sort_order=asc or desc".
Filtering: helps the user filter the data according to the use case. The syntax for filters is:
filter[]=attribute op value
where op is an operator
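As an illustration, the three controls can be combined into a single GET query string. A small Python sketch (the build_query helper is my own, not part of the API; the parameter names match the controls described above):

```python
from urllib.parse import urlencode

def build_query(offset=None, limit=None, sort_by=None, sort_order=None, filters=()):
    """Assemble paging, sorting and filtering controls into a query string."""
    params = []
    if offset is not None:
        params.append(("offset", offset))
    if limit is not None:
        params.append(("limit", limit))
    if sort_by:
        params.append(("sort_by", sort_by))
    if sort_order:
        params.append(("sort_order", sort_order))
    # each filter[] entry is one "attribute op value" expression
    params.extend(("filter[]", f) for f in filters)
    return urlencode(params)

# e.g. second page of 20 services, sorted by name, only retired ones
qs = build_query(offset=20, limit=20, sort_by="name", sort_order="asc",
                 filters=["retired=true"])
print(f"https://<IP>/api/services?{qs}")
```

Note that urlencode percent-escapes the brackets and the operator, which is what the server expects on the wire.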
 
Return Codes
 
Success :  200 : OK, 201 : Created, 202 : Accepted, 204 : No content
 
Client Errors: 400 : Bad Request, 401 : Unauthorized, 403 : Forbidden, 404 : Not Found, 415 : Unsupported Media Type
 
Server Errors: 500 : Internal Server Error
 
Troubleshooting
A good place to start troubleshooting is the standard log files under /var/www/miq/vmdb/log on the CloudForms appliance. All API-related logs are recorded in /var/www/miq/vmdb/log/api.log. In order to dig deeper, raising the log level is recommended. You can change the log level by navigating to Configuration → Server → Advanced → :level_api: debug.

Conclusion
I hope after reading this article you have a basic understanding of how CloudForms can be managed via the REST API. You can find the full REST API documentation here.
Source: CloudForms

Calling an Embedded Ansible Playbook from the VM Provision State Machine

CloudForms 4.6 provided the ability to run embedded Ansible playbooks as methods, and it can be useful to include such a playbook in an existing workflow such as the VM Provision state machine.

In this example an Ansible playbook method is used at the AcquireIPAddress state to insert an IP address, netmask and gateway into the VM provisioning workflow. A cloud-init script is then used at first boot to set the values in the new VM using nmcli.
 
Creating the Instance and Method
 
A new acquire_ip_address instance and method are defined in the usual manner. The method is of Type: playbook and is defined to run on Hosts: localhost
 
 

 
The input parameters for the playbook method are dynamic. Two parameters miq_provision_request_id (the request ID) and miq_provision_id (the task ID), are defined as follows:

 
The new instance is added to the AcquireIPAddress state of the VM Provision state machine:

 
Inserting the IP Details into the VM Provision Workflow
 
The playbook can write the acquired IP details back into the provision task’s options hash in either of two ways: using the RESTful API, or using an Ansible role.
 
Calling the CloudForms RESTful API
 
The first example playbook uses the CloudForms RESTful API to write the retrieved IP details back into the provision task’s options hash. To simplify the example the IP address, netmask and gateway are defined as static vars; in reality these would be retrieved from a corporate IPAM solution such as Infoblox.
 

- name: Acquire and Set an IP Address
  hosts: all
  gather_facts: no
  vars:
  - ip_addr: 192.168.1.66
  - netmask: 24
  - gateway: 192.168.1.254

  tasks:
  - debug: var=miq_provision_id
  - debug: var=miq_provision_request_id

  - name: Update Task with New IP and Hostname Information
    uri:
      url: "{{ manageiq.api_url }}/api/provision_requests/{{ miq_provision_request_id }}/request_tasks/{{ miq_provision_id }}"
      method: POST
      body_format: json
      body:
        action: edit
        resource:
          options:
            addr_mode: ["static", "Static"]
            ip_addr: "{{ ip_addr }}"
            subnet_mask: "{{ netmask }}"
            gateway: "{{ gateway }}"
      validate_certs: no
      headers:
        X-Auth-Token: "{{ manageiq.api_token }}"
      status_code: 200

 
Using the manageiq-vmdb Ansible Role
 
The second example playbook uses the manageiq-vmdb Ansible role (https://github.com/syncrou/manageiq-vmdb) to write the retrieved IP details back into the provision task’s options hash. Once again the IP address, netmask and gateway are defined as static vars for simplicity of illustration.
 

- name: Acquire and Set an IP Address
  hosts: all
  gather_facts: no
  vars:
  - ip_addr: 192.168.1.66
  - netmask: 24
  - gateway: 192.168.1.254
  - auto_commit: true
  - manageiq_validate_certs: false

  roles:
    - syncrou.manageiq-vmdb

  tasks:
  - debug: var=miq_provision_id
  - debug: var=miq_provision_request_id

  - name: Get the task vmdb object
    manageiq_vmdb:
      href: "provision_requests/{{ miq_provision_request_id }}/request_tasks/{{ miq_provision_id }}"
    register: task_object

  - name: Update Task with new IP and Hostname Information
    manageiq_vmdb:
      vmdb: "{{ task_object }}"
      action: edit
      data:
        options:
          addr_mode: ["static", "Static"]
          ip_addr: "{{ ip_addr }}"
          subnet_mask: "{{ netmask }}"
          gateway: "{{ gateway }}"

 
In these example playbooks the netmask variable is defined in CIDR format rather than as octets, to be compatible with nmcli.
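If an IPAM source instead returns the netmask as dotted octets, Python's standard ipaddress module can derive the CIDR prefix that nmcli expects. A small hedged sketch (the helper name is hypothetical, not part of the playbooks above):

```python
import ipaddress

def netmask_to_prefix(netmask):
    """Convert a dotted-quad netmask (e.g. 255.255.255.0) to a CIDR prefix length."""
    # IPv4Network accepts a netmask in place of a prefix and validates it.
    return ipaddress.IPv4Network(f"0.0.0.0/{netmask}").prefixlen

print(netmask_to_prefix("255.255.255.0"))  # 24
```

This also rejects non-contiguous masks with a ValueError, which is a useful sanity check before the value reaches the provisioning workflow.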
 
Configuring the IP Address at First Boot
 
Configuring a NIC with IP address details is a guest operating system operation, and so must be performed when the VM or instance first boots. For this example a template cloud-init script is defined in Compute -> Infrastructure -> PXE -> Customization Templates in the WebUI, as follows:
 
<%
   root_password = MiqPassword.decrypt(evm[:root_password])
   hostname = evm[:hostname]
   ip_addr = evm[:ip_addr]
   subnet_mask = evm[:subnet_mask]
   gateway = evm[:gateway]
   dns_servers = evm[:dns_servers]
   dns_suffixes = evm[:dns_suffixes]
%>
#cloud-config
ssh_pwauth: true
disable_root: false
users:
  - default
  - name: ansible-remote
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2E…

chpasswd:
  list: |
    root:<%= root_password %>
  expire: false
runcmd:
  ## Setup motd
  - echo Welcome to VM <%= hostname %>, provisioned by Red Hat CloudForms on $(date) > /etc/motd
  - rm -f /root/*
  - nmcli --fields UUID con show | awk '!/UUID/ {print}' | while read line; do nmcli con delete uuid $line; done
  - nmcli con add con-name eth0 ifname eth0 type ethernet
    ip4 "<%= ip_addr %>/<%= subnet_mask %>"
    gw4 "<%= gateway %>"
  - nmcli con mod eth0
    ipv4.dns "<%= dns_servers %>"
    ipv4.dns-search "<%= dns_suffixes %>"
    connection.autoconnect yes
  - nmcli con up eth0
  - hostnamectl set-hostname <%= hostname %>
  - systemctl mask cloud-init-local cloud-init cloud-config cloud-final
 
If the cloud-init script is selected from the Customize tab of the provisioning dialog, CloudForms will make the variable substitutions at run-time and inject the resultant script into the VM or instance to be run at first boot.
Source: CloudForms

Mastering Automation Addendum for CloudForms 4.6

We could not be more excited! Peter just finished work on an addendum to the ‘Mastering Automation’ book, bringing it up to date with some of the great new automate features in CloudForms 4.5 & 4.6.

The book is here: https://manageiq.gitbook.io/mastering-cloudforms-automation-addendum/preface

Please let us know your thoughts on this!
Source: CloudForms

Infrastructure Tour Italy Part 3

Introduction
 
Red Hat held an event covering the infrastructure part of our portfolio in Milan and Rome on April 17th and 19th, 2018. Part of the demos presented focused on automation, managed with Red Hat Ansible and Red Hat Ansible Tower:
 
The event information and agenda is available at:

https://www.redhat.com/en/events/infrastructure-tour-milan-2018

https://www.redhat.com/en/events/infrastructure-tour-rome-2018

 
This is the third part of the series of articles written by my colleague Rinaldo Bergamini; you can find the previous parts here and here.
In this part, I would like to show you how you can Automate “everything” with Red Hat Ansible and Ansible Tower.

At that time the Ansible Tower demo was configured literally in a manual way. That means I had to:

Choose a cloud provider  
Define IAM users
Define Networks and Storage details
Create Instances for Tower and servers for my use cases
Install and Configure Tower

 
We showed the audience several use cases:

Application deployment [PROVISIONING]
Application configuration [CONFIGURATION MANAGEMENT]
Infrastructure Day 2 Operations [ORCHESTRATION]
Proactive & Automatic Analysis w/ Insights [SECURITY]
Security Content/Vulnerability Assessment & Remediation [SECURITY]

 
After the event, as you can imagine, I used the same demo to show our customers how Ansible can help them thanks to three of its core values: it’s SIMPLE, it’s POWERFUL, it’s AGENTLESS.
A few days later a new idea came to mind: my demo needed to be fully automated!
In the past I have done something similar in an OpenStack/CloudForms environment using Heat.
I did the same here, but using a public cloud provider and the power of Ansible.
What do I mean? I wanted to build the whole environment AUTOMATICALLY from scratch in order to:
 

Show an end-to-end deployment of multiple servers/services
Quickly reproduce the demo if needed
Track changes → Ansible playbooks are YAML-based files, so we can track changes in git
Rebuild old demo environments using the latest version (the demo was based on Tower 3.2.3; now we are at 3.2.6)
Avoid re-inventing the wheel every time we need a demo environment
Use this effort as a baseline for new use cases
Write this follow-up post for those who joined us during the event

 
I think we can call this approach Automation³, or Cubed Automation:

We want to "automate" the setup of the "automation" environment by "automating" several tasks.
Let’s start by understanding what the main folder contains and how the playbook was designed…
Folder structure

In the main dir there are:
 

setup.ini file [1], where we configure some basics for the environment. There are three sections: [tower], [rhsm] and [gce]. In the [tower] section we set the Tower version we want to install, the Tower admin password (tower_password variable), etc.

We also need to declare our Red Hat Customer Portal user under the [rhsm] section in order to register our instances with the Red Hat Portal. The password is not stored in clear text; instead, the playbook uses Ansible Vault. You can refer to this document to correctly encrypt your password and embed it in the playbook YAML file. As prerequisites on the GCE side, we have to:

create two service accounts:

the first, called service_account_instance_creation@xxxxx, which will be used to create instances on GCE. You also need to download it in JSON format and use it as credentials (parameter service_account_instance_creation_credentials)

the second, called tower-service-account, which needs to be downloaded as .p12

you then need to extract the private key from the .p12 file with the command: cat xxxxxxx.p12 | openssl pkcs12 -nodes -nocerts -passin pass:notasecret | openssl rsa > privateKey.pem
This key file will be used in the future (a playbook enhancement) by Tower to authenticate to GCP and use the dynamic inventory feature (not available right now)

[1] setup.ini file

 

Inventory file, where you list the hosts that are part of the inventory you’ll manage with Ansible. Those hosts will be created by our playbook
gce_createinstances, the main playbook file that executes the tasks and roles
License, the Tower license file that will be loaded into Tower using a POST to its API
README, with basic prerequisites and guidelines
Roles, folders used by the main playbook to organize playbooks and tasks by scope

 
I don’t want to explain all the playbooks/roles/tasks in detail.
DISCLAIMER: At the time of writing the git repo is private. As soon as possible I’ll release it as open source, of course, and then feel free to contribute with pull requests!

Now let’s watch this short video, where you can see how we can set up the whole environment (Tower included) in less than 30 minutes.
 

 
At the end of the entire playbook run the whole environment is up and running in 28 minutes! [2]
 
[2] ansible output

Now let’s log in to Ansible Tower to quickly highlight the configuration performed.
 
The tower role has configured our Tower environment by executing the setup, loading the license file and creating the admin user, using a mix of API calls and Ansible Tower modules.
Then the tower_uc_setup role has created the skeleton for our automation, building 4 projects, 4 inventories, 7 hosts, some groups and a lot of pre-configured job templates and workflows, identified by an id and a prefix inside the template name.
 
Tower homepage

Configured hosts

First of all, we want to use a preconfigured job template called “UC-1 [Provisioning] – WebServers + Haproxy + Nagios “  in order to install our web servers (httpd1/httpd2), a load balancer (haproxy) and a monitoring system (nagios)
 
“UC-1 [Provisioning] – WebServers + Haproxy + Nagios “ Job Template

Executing it will configure everything in 11 minutes.

Index.html showed calling httpd1 server

Nagios configured with hostgroups and services

In addition, the Haproxy server will balance the two web servers in a round robin fashion.
Of course, during the demo setup we could also execute the template at runtime using the available tower_job_launch module, but in this case I wanted to show, by running it manually from the Tower UI, how quickly this template saves you from spending time on repetitive and boring tasks.
 
I have also configured other templates able to:

Exclude a web server from a load balancer
Execute a custom command on a server
Re-include web server from Load Balancer
Unmonitor the web server from Nagios
Re-monitor the web server from Nagios
Rolling updates for all the project servers

 
All the playbooks used in this demo as job templates are available here
 

https://github.com/MikeNald/ansible-tower-examples

 
After the first run of the gce_createinstances playbook they will be available inside Tower

The setup will also configure an entire workflow that will:

exclude the web server from the LB → on success → temporarily disable monitoring → on success → run a command on the server → always → re-enable monitoring → always → re-include the web server in the LB

The workflow also includes a survey asking the end user for a target host and the command to be executed.

This is just an example of what Tower can do. Here is the full list of use cases addressed by this demo, with several kinds of playbooks, workflows, surveys, etc.

For instance, I have configured an OpenSCAP scan of a RHEL system, created a remediation profile for it, built a report and fixed the findings using job templates 16, 17 and 18.
Then I configured a workflow to execute those playbooks in a consistent way.
 
The result is a RHEL server where the Standard System Security Profile (ssg-rhel7-ds.xml) was used as a baseline and applied to the system.
The report is uploaded to the git repo automatically by the playbook “UC-8 [Security] – Openscap Security Scan” and can be viewed here
 
Another example is the fully automated Insights integration, which proactively resolves possible misconfigurations or security issues on systems by executing pre-configured Ansible playbooks made available from Insights.
 
Resources:
 

https://github.com/MikeNald/ansible-tower-examples

https://docs.ansible.com/ansible-tower/

https://docs.ansible.com/ansible/latest/modules/list_of_all_modules.html

https://docs.ansible.com/ansible-tower/latest/html/userguide/insights.html

 
Conclusion:
 
This post aimed to show you how powerful and simple Ansible is, and how well it integrates with a broad ecosystem.
More than 1,600 modules are available, and an entire galaxy (https://galaxy.ansible.com/) of re-usable roles is ready to use, with no need to install plugins or agents on remote systems. This results in quick adoption of the solution, with no overhead on your systems and no increase in attack surface!
What are you waiting for? Now you can perform your own AUTOMATION.
Source: CloudForms