Primerica modernizes applications with hybrid cloud and container technologies

The Primerica business really started at the kitchen tables of Middle American families. Those personal, sometimes difficult conversations about finances and life insurance built trust. That trust allowed us to help families get the financial services they needed to protect themselves and future generations. Our reputation and our business grew into the Primerica we are today.
Updating knowledge, skills and technologies
In recent years, long-time Primerica employees were beginning to retire, taking with them decades of institutional knowledge, including knowledge about maintaining our heritage IBM WebSphere technology. So, we wanted to update our technology and refresh our employees’ skills. Of course, growth often requires change, which can be difficult.
We reached out to the IBM Garage and said, “How do we get off of IBM technologies and onto IBM technologies? Oh, and while you’re at it, can you teach us how to properly manage and maintain this new technology?”
Migrating to the hybrid cloud
We worked with the IBM Garage team in Austin and migrated several existing applications to a hybrid cloud environment. Given the sensitivity and privacy requirements of financial services industry data, we decided to keep our customer data on premises on IBM Cloud Private and containerize many customer-centric applications on the public cloud.
This application modernization was the digital transformation we needed, but we also discovered something completely unexpected — a cultural shift. During our time working with the IBM Garage, we developed agile practices and a user-first approach to business. By implementing these techniques across our company, we have found that we can further deepen our personal connection with customers, which helps us better provide the financial services and protections they need. The IBM Garage helped us marry two critical, yet somewhat opposing, aspects of our business: good old-fashioned kitchen table conversations and modern, secure technology.
See how the IBM Garage modernized technology for Primerica, and learn more about this project by reading the case study.
The post Primerica modernizes applications with hybrid cloud and container technologies appeared first on Cloud computing news.
Source: Thoughts on Cloud

Improving Jenkins’ performance on OpenShift: Part 2

This blog series will take a close look at Jenkins running on Red Hat OpenShift 3.11 and the various possibilities we have to improve its performance. The first post illustrated the deployment of a Jenkins master instance and a typical workload. This second post will deal with different approaches for improving the performance of Jenkins […]
The post Improving Jenkins’ performance on OpenShift: Part 2 appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Splunk Connect for OpenShift: All About Objects

This is the second post of our blog series on Red Hat OpenShift and Splunk Integration. In the first post, we showed how to send application and system logs to Splunk. The second part is focused on how to use Splunk Kubernetes Objects. Prerequisites The prerequisites are the same as defined in the first part. Architecture Splunk […]
The post Splunk Connect for OpenShift: All About Objects appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Quick tip: Enable nested virtualization on a GCE instance

There are times when you need to run a virtual machine — but you’re already ON a virtual machine.  Fortunately, it’s possible, but you need to enable nested virtualization.  For me, this comes up often when I’m running OpenStack or Kubernetes on a Google Compute Engine instance.  To solve the problem, follow these steps:

Install the latest version of the gcloud command-line tool.
Create a new instance so you have a base disk to work with.  Because you’ll eventually want to use the image in a zone that includes nested virtualization, create it in zone us-central1-b.  You can do this from the UI, or using the command line. By default, the disk will have the same name as the instance:
gcloud compute instances create temp-image-base --zone us-central1-b
Stop the instance:
gcloud compute instances stop temp-image-base --zone us-central1-b

Now create a new image, based on that disk, with nested virtualization enabled:
gcloud compute images create nested-vm-image \
  --source-disk temp-image-base --source-disk-zone us-central1-b \
  --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

Next create the new instance using the new image:
gcloud compute instances create nested-vm --zone us-central1-b --image=nested-vm-image --boot-disk-size=250GB

Connect to the instance:
gcloud compute ssh nested-vm --zone=us-central1-b

Confirm that nested virtualization is enabled by looking for a non-zero count of the vmx CPU flag:
$ grep -cw vmx /proc/cpuinfo
1
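On AMD hosts the flag is svm rather than vmx, so a slightly more portable check (a small sketch, not part of the original gcloud steps) looks for either flag:

```shell
# Report whether hardware virtualization extensions are visible to this
# (possibly virtual) machine: vmx = Intel VT-x, svm = AMD-V.
if grep -qwE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
  echo "nested virtualization: enabled"
else
  echo "nested virtualization: not enabled"
fi
```

Note that `grep -c` returns a non-zero exit status when the count is 0, which matters if you reuse the check inside a script running under `set -e`.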

Finally, install a hypervisor such as KVM:
sudo apt-get update && sudo apt-get install qemu-kvm -y

From there, you’re ready to run VMs on your VM.
The post Quick tip: Enable nested virtualization on a GCE instance appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

From Red Hat Developers Blog: Using a custom builder image on Red Hat OpenShift with OpenShift Do

Daniel Helfand has created a video to match his excellent blog post over at Red Hat Developers Blog. He’s taken a lot of time to carefully explain how to use a custom builder image on Red Hat OpenShift using OpenShift Do. If you prefer the video tutorial, you’re all set. If you prefer a long […]
The post From Red Hat Developers Blog: Using a custom builder image on Red Hat OpenShift with OpenShift Do appeared first on Red Hat OpenShift Blog.
Source: OpenShift

OpenShift 4: Image Builds

One of the key differentiators of Red Hat OpenShift as a Kubernetes distribution is the ability to build container images using the platform via first class APIs. This means there is no separate infrastructure or manual build processes required to create images that will be run on the platform. Instead, the same infrastructure can be […]
The post OpenShift 4: Image Builds appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Using KubeFed to Deploy Applications to OCP3 and OCP4 Clusters

Introduction In the previous blog post  we saw what KubeFed is and how to deploy KubeFed on Red Hat OpenShift. On top of that, we deployed a federated MongoDB ReplicaSet and a federated Pacman application. In today’s blog, we are going to use KubeFed to deploy the federated MongoDB as well as the federated Pacman […]
The post Using KubeFed to Deploy Applications to OCP3 and OCP4 Clusters appeared first on Red Hat OpenShift Blog.
Source: OpenShift

What is a hybrid integration platform (HIP)?

Business leaders are constantly looking for new ways to transform their organizations by using technology and data to drive innovation and business results. But before you can think about deriving insights or building seamless customer experiences, you first need to connect and standardize all of the data across your entire application landscape.
From established on-premises systems to newly adopted software-as-a-service (SaaS) applications, integration is a critical, yet increasingly complicated, step toward digital business transformation.
Integration has become a bottleneck
Over the last several years, the demand for new integrations has far surpassed the capacity most enterprises can handle. Traditional integration approaches simply can’t keep up with the requests. Lowering the cost per integration is essential to creating a flexible, scalable model for integration.
Nobody can afford to pause their business or rip and replace their entire infrastructure. Instead, businesses are looking for ways to streamline processes, disperse skill sets over a wider range of people, restructure their integration architecture, and utilize new technologies to make integration simpler and more efficient. Adopting an agile integration strategy helps manage these changes across people, processes and architecture. And, as companies look to technology options for streamlined integration, hybrid integration platforms (HIP) are becoming more prevalent.
What is a hybrid integration platform?
According to Ovum, a hybrid integration platform is “a cohesive set of integration software (middleware) products enabling users to develop, secure and govern integration flows connecting diverse applications, systems, services and data stores, as well as enabling rapid API creation/composition and lifecycle management to meet the requirements of a range of hybrid integration use cases.”
In other words, a hybrid integration platform should provide organizations with all of the tools they need to make it simpler and easier to integrate data and applications across any on-premises and multicloud environment. With data silos broken down, businesses have an incredible opportunity to turn their data into actionable insights, allowing them to make better decisions faster.
What are the key capabilities to look for in a hybrid integration platform?
Today’s integration teams need access to a mix of tools that allow them to balance traditional and modern integration styles. When evaluating hybrid integration platforms, these are the most important capabilities to look for.

API lifecycle management. APIs are among the most common styles of modern integration. Companies need to be able to create, secure, manage and share APIs across environments quickly and easily.
Application and data integration. Siloed data is one of the most critical problems organizations face when trying to digitally transform. The ability to copy and synchronize data across applications will help address a variety of issues, including data formats and standards.
Messaging and event-driven architecture. Syncing and standardizing data is crucial, but if enterprises want to be able to build more engaging customer experiences or react to things in real time, they need to have the ability to securely exchange that data across their ecosystem, from any cloud-based application to any on-premises application.
High-speed data transfer. The sheer volume of data being exchanged in a modern environment can be staggering. In fact, by 2025, IDC predicts worldwide data creation will reach 163 zettabytes per year. That’s ten times as much data as the world produced in 2017.

Being able to send, share, stream and sync large files reliably and at high speeds is critical to providing the types of real-time responses to data that modern organizations are looking for.
Is it better to build or buy a hybrid integration platform?
Until recently, hybrid integration platforms were mostly thought of as something that organizations needed to build by piecing together key capabilities from existing tools (like API management software, iPaaS and ESB solutions) from a variety of vendors into a cohesive system.
This can be an expensive and cumbersome process, however, and often leads to an end result that fails to meet all of the requirements. Some features or capabilities will be duplicated across offerings from multiple vendors, while other modern integration capabilities, like event streaming or high-speed data transfer, are left out.
Instead, enterprises should consider complete solutions, like IBM Cloud Pak for Integration, which combine all of the capabilities required for both traditional and modern integration styles into a unified, containerized platform. Features like single sign-on, common logging, tracing, an asset repository and a unified dashboard help bring all of the capabilities together and make integration workflows more efficient.
How can a hybrid integration platform help modernize integration?
By utilizing an agile integration approach combined with a robust hybrid integration platform, organizations can empower their teams with everything they need to speed up new integrations while lowering the cost. Done right, organizations will be able to continue using their existing infrastructure and traditional integration styles while introducing new skills, endpoints, use cases and deployment models at their own pace.
A hybrid integration platform should allow for more collaboration, democratization and reuse of assets through features like asset repositories, helping integration teams build and support the volume of integrations that digital transformation initiatives require.
Interested in learning more about hybrid integration platforms and the key capabilities, features and requirements you should look for when evaluating them?
Register to read the Ovum analyst report: Hybrid Integration Platforms: Digital business calls for integration modernization and greater agility.
Learn more about the IBM Cloud Pak for Integration.
The post What is a hybrid integration platform (HIP)? appeared first on Cloud computing news.
Source: Thoughts on Cloud

Kubernetes: The Video Game

Grant Shipley was recently in China for KubeCon, where he gave a keynote talk explaining the Kubernetes ecosystem within the context of Video Games. It’s a fun way to examine the entire world of Kubernetes, from end to end, while also enabling Grant to make Mavis Beacon and Commodore 64 references. Take a gander!
The post Kubernetes: The Video Game appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Federation V2 is now KubeFed

Some time ago we talked about how Federation V2 on Red Hat OpenShift 3.11 enables users to spread their applications and services across multiple locales or clusters. As a fast moving project, lots of changes happened since our last blog post. Among those changes, Federation V2 has been renamed to KubeFed and we have released […]
The post Federation V2 is now KubeFed appeared first on Red Hat OpenShift Blog.
Source: OpenShift