Building IT Transformation Architecture with Red Hat OpenShift

In the era of mobile applications, business challenges to enterprise IT organizations are more dynamic than ever. Many enterprises have difficulty responding in time because of the inherent complexity and risk of integrating emerging technologies into existing IT architectures. In this article, I will share my experience on how to utilize Red Hat OpenShift […]
The post Building IT Transformation Architecture with Red Hat OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Red Hat Takes Home a Trio of CODiE Awards

It was a big awards night for Red Hat recently, as three of our products won best-in-category business technology awards at the 2019 SIIA CODiE Awards. The CODiE Awards have been presented for over 30 years and are the only peer-recognized program in the business and ed tech industries. In the words of the awards body, […]
The post Red Hat Takes Home a Trio of CODiE Awards appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Deploying a User Provisioned Infrastructure environment for OpenShift 4.1 on vSphere

Deploying a UPI environment for OpenShift 4.1 on vSphere. NOTE: This process is not supported by Red Hat and is provided “as-is”. It uses Terraform to deploy the infrastructure used by the OpenShift Installer. Please install the appropriate version for your operating system: https://www.terraform.io/downloads.html. This article was created using Terraform v0.11.12. Download […]
The post Deploying a User Provisioned Infrastructure environment for OpenShift 4.1 on vSphere appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Using Kubernetes Operators to Manage Let’s Encrypt SSL/TLS Certificates for Red Hat OpenShift Dedicated

Overview Red Hat OpenShift Dedicated is an enterprise Kubernetes application platform hosted on public cloud providers and managed by Red Hat Site Reliability Engineering (SRE). OpenShift Dedicated enables companies to implement a flexible, hybrid cloud IT strategy by connecting to their datacenter with minimal infrastructure and operating expenses. Valid SSL certificates are part of the […]
The post Using Kubernetes Operators to Manage Let’s Encrypt SSL/TLS Certificates for Red Hat OpenShift Dedicated appeared first on Red Hat OpenShift Blog.
Source: OpenShift

No Downtime Upgrade for Red Hat Data Grid on OpenShift

In a post on the Red Hat Developer’s Blog, I wrote about the multiple layers of security available when deploying Red Hat Data Grid on Red Hat OpenShift. Another challenging problem I see for customers is performing a no-downtime upgrade for Red Hat Data Grid images (published in the Red Hat Container Catalog). […]
The post No Downtime Upgrade for Red Hat Data Grid on OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Kubernetes Operators Best Practices

Introduction Kubernetes Operators are processes that connect to the master API and watch for events, typically on a limited number of resource types. When a relevant event occurs, the operator reacts and performs a specific action. This may be limited to interacting with the master API only, but will often involve performing some action on some […]
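To make the watch-and-react pattern concrete, here is a minimal sketch using the Kubernetes Python client. The watched resource type (ConfigMaps in a "demo" namespace) and the reaction (printing a message) are illustrative assumptions, not part of the original post.

```python
# Minimal sketch of the operator pattern: watch one resource type and react to events.
from kubernetes import client, config, watch

def main():
    config.load_kube_config()          # or config.load_incluster_config() when running in a pod
    v1 = client.CoreV1Api()
    w = watch.Watch()

    # Stream events for a limited resource type and handle each one as it arrives.
    for event in w.stream(v1.list_namespaced_config_map, namespace="demo"):
        obj = event["object"]
        if event["type"] in ("ADDED", "MODIFIED"):
            # A real operator would reconcile desired vs. actual state here,
            # usually by calling the master API again.
            print(f"Reconciling ConfigMap {obj.metadata.name}")

if __name__ == "__main__":
    main()
```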
The post Kubernetes Operators Best Practices appeared first on Red Hat OpenShift Blog.
Source: OpenShift

What are IBM Cloud Paks?

It’s been more than a decade since commercial cloud first transformed business, but even now only about 20 percent of workloads have moved to the cloud. Why? Factors such as skills gaps, integration issues, difficulties with established code bases and vendor lock-in may be preventing most teams from fully modernizing their IT operations.
Business leaders have the difficult task of keeping pace with innovation without sacrificing security, compliance or the value of existing investments. Organizations must move past the basic cloud model and open the next chapter of cloud transformation to help successfully balance these needs.
Containers and the path to enterprise-grade and modular cloud solutions
Organizations focused on transformation can modernize traditional software to help improve operational efficiency, integrate clouds from multiple vendors and build a more unified cloud strategy.
As a major catalyst driving this transformation, containers make integration and modernization far easier by isolating pieces of software so they can run independently. Additionally, Kubernetes provides a powerful solution for orchestrating and managing containers.
That is why IBM has embraced containers and built its multicloud solutions around the Kubernetes open source project. Still, teams may need more than Kubernetes alone: enterprises typically need to transform at scale, which includes orchestrating their production topology, offering a ready-to-go development model based on open standards and providing management, security and governance of applications.

 
Moving beyond Kubernetes with IBM Cloud Paks
IBM is addressing transformation needs by introducing IBM Cloud Paks, enterprise-grade container software that is designed to offer a faster, more reliable way to build, move and manage on the cloud. IBM Cloud Paks are lightweight, enterprise-grade, modular cloud solutions, integrating a container platform, containerized IBM middleware and open source components, and common software services for development and management. These solutions have reduced development time by up to 84 percent and operational expenses by up to 75 percent.
IBM Cloud Paks help enterprises do more. Below are some key advantages of the new set of offerings.

Run anywhere. IBM Cloud Paks are portable. They can run on-premises, on public clouds or in an integrated system.
Open and secure. IBM Cloud Paks have been certified by IBM with up-to-date software to provide full stack support, from hardware to applications.
Consumable. IBM Cloud Paks are pre-integrated to deliver use cases (such as application deployment and process automation). They are priced so that companies pay for what they use.

Introducing the five IBM Cloud Paks
IBM Cloud Paks are designed to accelerate transformation projects. The five Cloud Paks are the following:

IBM Cloud Pak for Applications. Helps accelerate the modernization and building of applications by using built-in developer tools and processes. This includes support for analyzing existing applications and guiding the application owner through the modernization journey. In addition, it supports cloud-native development, microservices functions and serverless computing. Customers can quickly build cloud apps, while existing IBM middleware clients gain the most straightforward path to modernization.
IBM Cloud Pak for Automation. Helps deploy on clouds where Kubernetes is supported, with low-code tools for business users and near real-time performance visibility for business managers. Customers can migrate their automation runtimes without application changes or data migration, and automate at scale without vendor lock-in.
IBM Cloud Pak for Data. Helps unify and simplify the collection, organization and analysis of data. Enterprises can turn data into insights through an integrated cloud-native architecture. IBM Cloud Pak for Data is extensible and can be customized to a client’s unique data and AI landscapes through an integrated catalog of IBM, open source and third-party microservices add-ons.
IBM Cloud Pak for Integration. Helps support the speed, flexibility, security and scale required for integration and digital transformation initiatives. It also comes with a pre-integrated set of capabilities which include API lifecycle, application and data integration, messaging and events, high speed transfer and integration security.
IBM Cloud Pak for Multicloud Management. Helps provide consistent visibility, automation and governance across a range of hybrid, multicloud management capabilities such as event management, infrastructure management, multicluster management, edge management and integration with existing tools and processes.

Two new deployment options, IBM Cloud Pak System for application workloads and IBM Cloud Pak System for Data for data and AI workloads, enable a pay-as-you-go capacity model and dynamic scaling for computing, storage and network resources.
Each of the IBM Cloud Paks will harness the combined power of container technology and IBM enterprise expertise to help organizations solve their most pressing challenges.
The move to cloud is a journey. IBM Cloud Paks help to meet companies wherever they are in that journey and help drive business innovation through cloud adoption.
Learn more about IBM Cloud Paks by visiting www.ibm.com/cloud/paks.
The post What are IBM Cloud Paks? appeared first on Cloud computing news.
Source: Thoughts on Cloud

Pod Evictions based on Taints/Tolerations

Red Hat OpenShift 4 is making an important and powerful change to the way pod evictions work. OpenShift has transitioned from using node conditions to using a Taint/Toleration based eviction process, which provides individual pods more control over how they are evicted. This new capability was added in Kubernetes 1.12 and enabled in OpenShift 4.1 […]
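As an illustration of the underlying Kubernetes mechanism, the sketch below builds a pod specification with a toleration for the node.kubernetes.io/not-ready taint using the Kubernetes Python client. The pod name, image and 300-second window are example values and are not taken from the post.

```python
# Sketch: a pod spec that tolerates the "not ready" node taint for 300 seconds
# before becoming a candidate for eviction. Names and durations are examples;
# submitting the pod (e.g. via CoreV1Api().create_namespaced_pod) is omitted.
from kubernetes import client

toleration = client.V1Toleration(
    key="node.kubernetes.io/not-ready",
    operator="Exists",
    effect="NoExecute",
    toleration_seconds=300,   # how long the pod may stay on a not-ready node
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="example-app"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="app", image="registry.example.com/app:latest")],
        tolerations=[toleration],
    ),
)
```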
The post Pod Evictions based on Taints/Tolerations appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Red Hat OpenShift 4 is Now Available

As of today, Red Hat OpenShift 4 is generally available to Red Hat customers. This rearchitecting of how we install, upgrade and manage the platform also brings with it the power of Kubernetes Operators, Red Hat Enterprise Linux CoreOS, and the Istio-based OpenShift Service Mesh. As transformational as our open hybrid cloud platform can […]
The post Red Hat OpenShift 4 is Now Available appeared first on Red Hat OpenShift Blog.
Source: OpenShift

CloudForms 4.7 – What’s new with Ansible?

Ansible continues to grow and is the strategic automation engine for Red Hat’s business. Having a solid and constantly improving integration with Ansible is key for CloudForms’ future success.
 
Ansible Tower Workflows are widely used in the industry to orchestrate and govern interactions between different playbooks. CloudForms has been able to run Ansible Tower Jobs since its 4.1 release. Starting with CloudForms 4.7, this support is expanded and Workflows can be used from the Service Catalog.
Setup
CloudForms uses the concept of Providers to integrate with other systems. Each Provider integration takes care of building and maintaining an up-to-date inventory, executing operational tasks and listening to events, and some also support additional features such as metric collection.
 
The existing Ansible Tower Provider has been extended to include Workflows in the inventory. If an Ansible Tower Provider was already configured, CloudForms will automatically add the Workflows to its inventory once the upgrade to 4.7 has been rolled out. Instructions on how to upgrade CloudForms to 4.7 can be found in the Migrating to Red Hat CloudForms 4.7 guide.
 
Adding a new Ansible Tower Provider is very simple. Navigate to “Automation”, “Ansible Tower”, “Explorer” and click on “Configuration”, “Add a new Provider”.

Workflows were introduced with Ansible Tower 3.1. Instructions on how to create and use Workflows can be found in the Ansible Tower Documentation.
Ansible Tower Workflows
Once the inventory has been updated in CloudForms, Ansible Tower Workflows can be found under “Automation”, “Ansible Tower”, “Explorer” by clicking on “Templates”.

The new “Type” column helps separate Workflows from regular Jobs. Clicking on a Workflow opens a detail page with additional information.

Service Dialogs and Catalogs
From this page, a Service Dialog for the currently selected Workflow can be generated automatically. The Service Dialog is populated with all extra_vars and survey fields. To verify the result or to customize the Service Dialog and make it more user-friendly, navigate to “Automation”, “Automate”, “Customization” and “Service Dialogs” in the accordion on the left.

NOTE: Ansible Tower Workflows do not support the “limit” parameter. Since different Jobs in a Workflow can potentially point to different inventories, a “limit” might break a Workflow and is therefore currently not supported.
 
When creating a Service Catalog item, select the Ansible Tower Provider and then the appropriate Workflow from the list of “Ansible Tower Templates”. Note the new “Workflow Templates” section at the end of the drop-down list.

Instructions on how to build a Service Catalog, along with some examples to get you started, have already been provided in the post Service Catalogs and the User Self-Service Portal.
Job Output
With the embedded Ansible capabilities of CloudForms, it was already possible to see the output of a Job in the “My Services”, “Jobs” tab. Starting with CloudForms 4.7, this also works for Jobs running on Ansible Tower.
 
After ordering a Service Catalog Item that uses an Ansible Tower Job Template, a new “My Service” object is created (navigate to “Services”, “My Services” to find it). Click on the newly created object to see some metadata about the Job, then click on the “Jobs” tab for more details, including start time, runtime and the Job output.

The Job Output can be found at the bottom of the Jobs page.
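The same output can also be pulled directly from the Ansible Tower REST API, for example when archiving logs outside CloudForms. This is a minimal sketch assuming OAuth2 token authentication; the host, token and job id are placeholders.

```python
# Sketch: fetch the stdout of a finished Ansible Tower job over the REST API.
# TOWER_HOST, TOKEN and JOB_ID are placeholders for illustration.
import requests

TOWER_HOST = "https://tower.example.com"
TOKEN = "<oauth2-token>"
JOB_ID = 42

resp = requests.get(
    f"{TOWER_HOST}/api/v2/jobs/{JOB_ID}/stdout/",
    params={"format": "txt"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.text)   # plain-text job output, as shown on the CloudForms Jobs tab
```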

Workflows, variables and limits
Since Workflows are a feature CloudForms inherits from Ansible Tower, there are some concepts which have to be kept in mind as well.
Extra Variables and Surveys
Ansible allows the use of variables, which can be defined in many different ways, including in the Playbook itself, in a role, in the environment or in Ansible Tower. Ansible Tower can use Surveys to build a form asking for those variable values when running a Job. When using CloudForms’ Service Catalog features combined with Ansible Tower, a Service Dialog can be created to provide an intuitive and user-friendly interface.
 
When running an Ansible Tower Workflow, all Extra Variables are sent to all subsequent Jobs. For example, if a Workflow has three Jobs (job1, job2, job3) and there are three variables (var1, var2, var3), all three variables are sent to all jobs. It is possible to set a variable to be used in a specific job only.
 
While this is not necessarily a problem, it is something to keep in mind, for example by avoiding duplicate variable names across multiple Jobs.
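To illustrate how extra variables flow into every job of a workflow, here is a minimal sketch that launches a Workflow Job Template through the Ansible Tower REST API. It assumes OAuth2 token authentication and a workflow template that accepts variables at launch (for example via a survey); the host, token, template id and variable values are placeholders.

```python
# Sketch: launch an Ansible Tower Workflow and pass extra variables.
# Unless a job overrides them, all three variables below reach every job
# in the workflow. Host, token and template id are placeholders.
import requests

TOWER_HOST = "https://tower.example.com"
TOKEN = "<oauth2-token>"
WORKFLOW_TEMPLATE_ID = 7

payload = {
    "extra_vars": {
        "var1": "value-mainly-for-job1",
        "var2": "value-mainly-for-job2",
        "var3": "value-mainly-for-job3",
    }
}

resp = requests.post(
    f"{TOWER_HOST}/api/v2/workflow_job_templates/{WORKFLOW_TEMPLATE_ID}/launch/",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print("Workflow job id:", resp.json().get("id"))
```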
Limits
To run an Ansible Playbook, an inventory has to be used to tell Ansible which endpoints to use and how to access them. Sometimes an Ansible Playbook only has to run on a subset of the inventory. Limits allow the user to keep the same inventory but restrict the run to a subgroup of systems.
 
CloudForms uses this feature when running an Ansible Playbook from a button assigned to a Virtual Machine.
 
Ansible Tower Workflows, however, do not support the “limit” option. A Workflow can potentially have many Jobs with different inventories (e.g. a sub-job on the storage, a sub-job on the network and a sub-job on some servers, each using a different inventory). A limit parameter could break such a Workflow (e.g. if the limit were a specific Virtual Machine, the network and storage jobs would fail).
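To make the contrast concrete, the following sketch launches a regular Job Template with a limit, which mirrors what CloudForms effectively does when a Playbook button runs against a single Virtual Machine. It assumes the Job Template prompts for the limit at launch; host, token, template id and hostname are placeholders, and the same call against a Workflow Job Template would not restrict its jobs.

```python
# Sketch: launch a regular Ansible Tower Job Template limited to a single host.
# Placeholders throughout; the template must have "prompt on launch" enabled
# for the limit field.
import requests

TOWER_HOST = "https://tower.example.com"
TOKEN = "<oauth2-token>"
JOB_TEMPLATE_ID = 12

resp = requests.post(
    f"{TOWER_HOST}/api/v2/job_templates/{JOB_TEMPLATE_ID}/launch/",
    json={"limit": "vm0042.example.com"},   # run only against this inventory host
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print("Job id:", resp.json().get("id"))
```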
 
Source: CloudForms