OpenShift All-in-One (AIO) for Labs and Fun

DISCLAIMER: THE ALL-IN-ONE (AIO) OCP DEPLOYMENT IS AN UNSUPPORTED OCP 3.11.x CONFIGURATION INTENDED FOR TESTING OR DEMO PURPOSES. A common request from customers is how to run the actual Red Hat OpenShift Container Platform (OCP) bits on a single node. This request often comes from customers who need to support training environments, dedicated single-user development […]
The post OpenShift All-in-One (AIO) for Labs and Fun appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Keep data safe, even when in use by container applications

One of the challenges for DevOps professionals is ensuring applications protect data while it is in use.
According to Container Journal, that challenge may now be easier for developers to overcome because of IBM Cloud Data Shield, “which runs containerized applications in secure enclaves on the IBM Cloud Kubernetes Service”.
IBM Cloud Data Shield uses the Fortanix Runtime Encryption Platform, which provides deterministic security and runtime memory encryption. It also enables developers to partition their applications into enclaves by incorporating Intel Software Guard Extensions (SGX) technology into the IBM Cloud Kubernetes Service.
“With IBM Cloud Data Shield, developers no longer need to worry about incorporating customized security code into applications before deploying those applications into containers,” Container Journal reports.
A beta version is available to IBM Cloud Kubernetes Service customers.
“IBM Cloud Data Shield with Fortanix Runtime Encryption and Intel SGX is designed to make it easy for developers to protect data in use without having to change their application code,” said Nataraj Nagaratnam, IBM CTO for Cloud Security.
Learn more about IBM Cloud Data Shield in the full Container Journal article. Also read this interview with Andrew Wilcock, vice president of IBM Cloud in the UK and Ireland, to learn more about the overarching IBM hybrid cloud strategy.
The post Keep data safe, even when in use by container applications appeared first on Cloud computing news.
Source: Thoughts on Cloud

Self-Serviced End-to-end Encryption Approaches for Applications Deployed in OpenShift

Introduction: The majority of applications deployed on Red Hat OpenShift have some endpoints exposed outside the cluster via a reverse proxy, normally the router (which is implemented with HAProxy). When using a router, the following options are possible. In the diagram we can see: Clear text: the connection is always unencrypted. Edge: […]
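For context, here is a minimal sketch of how one of those options, edge termination, is expressed as an OpenShift Route. The service name and hostname are placeholder assumptions; the manifest is built in Python purely so it can be dumped to YAML and applied with `oc apply -f`.

```python
# Minimal sketch: build an OpenShift Route manifest with edge TLS termination.
# Assumes a Service named "myapp" already exists in the target project;
# the name and hostname are illustrative only.
import yaml  # PyYAML

route = {
    "apiVersion": "route.openshift.io/v1",
    "kind": "Route",
    "metadata": {"name": "myapp-edge"},
    "spec": {
        "host": "myapp.apps.example.com",
        "to": {"kind": "Service", "name": "myapp"},
        "port": {"targetPort": 8080},
        "tls": {
            # "edge" terminates TLS at the router; "passthrough" and
            # "reencrypt" are the other termination types routes support.
            "termination": "edge",
            "insecureEdgeTerminationPolicy": "Redirect",
        },
    },
}

# Write the manifest so it can be applied with `oc apply -f route-edge.yaml`.
with open("route-edge.yaml", "w") as f:
    yaml.safe_dump(route, f, default_flow_style=False)
```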
The post Self-Serviced End-to-end Encryption Approaches for Applications Deployed in OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Simplifying complex modernization strategies with the right tools

A few years ago, NASA found water on Mars and mountains on Pluto. The first-ever self-driving cars hit the road across the country. And organizations were still building compute workloads with monolithic applications in their local, dedicated data centers with predefined support and upgrade cycles.
How far we’ve come since then. Companies are realizing the benefits of microservices and cloud deployments. They’re beginning to incorporate those technologies into their IT ecosystems.
Those organizations are now also realizing something else: Migrating an entire topology at once to a microservice ecosystem simply isn’t realistic. Most are adopting cloud technology in incremental steps, moving their easiest-to-move applications to the cloud first while leaving the more challenging workloads back in the data center.
This means some IT teams are working in traditional IT infrastructures, maintaining and supporting the applications running on physical servers. Other teams within the same company are supporting modern applications designed using microservices and deployed on clouds. Supporting these very different resources in tandem can be a challenging proposition, especially when another team in the organization uses OpenStack as the standard for controlling compute, network and storage across the entire organization.
Moving past application modernization complexity

This is a common scenario for companies in the midst of their application modernization journeys. The incremental nature of this digital transformation can create a storm of complexity, but there are three key reasons it is necessary for companies to keep moving forward.
1. Reliability requirements are driving application architecture.
Containerization enables organizations to adopt modern, cloud-native principles, making applications highly reliable. Microservices allow for cloud portability, improve efficiency and provide unparalleled agility.
To realize these benefits, companies need next-generation tools, such as IBM Microclimate, to get started. Microclimate provides end-to-end, cloud-native solutions for creating, building, testing and deploying container-based microservices. It helps developers focus on application code by automating many of the tasks that require in-depth domain knowledge. With the built-in data collectors in Microclimate, developers can see the effects of their changes in real time before they commit and make any remediation needed to improve performance.
2. The need for speed is driving continuous delivery.
To outpace competition and meet user expectations, applications must be updated frequently. Cloud-native, microservice-based applications can easily be updated daily or even multiple times per day. To capitalize on this agility, technology teams must also adopt a DevOps and continuous delivery approach, introducing automation to test, build and deploy. This approach also enables teams to use multiple pipelines and ensure the reliability of each release.
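As a rough illustration of that automation (not any specific IBM pipeline product), the toy Python runner below executes test, build and deploy stages in order and aborts the release if any stage fails. The commands are placeholders; a real pipeline would live in a CI system such as Jenkins or Tekton.

```python
# Toy sketch of a continuous-delivery pipeline runner: each stage is just a
# shell command, and a failing stage stops the release.
import subprocess
import sys

PIPELINE = [
    ("test",   "python -m pytest"),                           # run the unit test suite
    ("build",  "docker build -t myapp:latest ."),             # build the container image
    ("deploy", "kubectl rollout restart deployment/myapp"),   # roll out the new image
]

def run_pipeline(stages):
    for name, command in stages:
        print(f"--- stage: {name} ---")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            print(f"stage '{name}' failed; aborting release")
            sys.exit(result.returncode)
    print("release completed")

if __name__ == "__main__":
    run_pipeline(PIPELINE)
```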
3. Cloud flexibility is driving infrastructure automation.
The versatility of modern applications creates opportunities for huge expansion in a short period of time. IT administrators are seeing a transition from supporting just the data center to now managing hybrid environments in which traditional resources are managed alongside a multicloud software infrastructure. Even the largest, most sophisticated organizations’ data centers cannot expand quickly enough to keep up with demand, so companies are using private and public clouds to fill those needs.
IBM Cloud Automation Manager helps organizations automate provisioning by deploying and configuring infrastructure and applications across any cloud environment with workflow orchestration. It also provides control through effective, enforceable governance policies and intelligent insights for a security-rich, compliant IT environment.
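Provisioning tools of this kind consume infrastructure-as-code templates (Cloud Automation Manager, for example, builds on open source Terraform). Below is a minimal, illustrative sketch that generates such a template in Terraform's JSON syntax from Python; the AWS provider, AMI and instance values are assumptions made for the example, not anything shipped with the product.

```python
# Generate a minimal Terraform configuration in JSON form (*.tf.json).
# Provider, AMI and instance type are placeholder assumptions.
import json

config = {
    "provider": {"aws": {"region": "us-east-1"}},
    "resource": {
        "aws_instance": {
            "web": {
                "ami": "ami-0123456789abcdef0",   # placeholder AMI ID
                "instance_type": "t3.micro",
                "tags": {"Name": "modernization-demo"},
            }
        }
    },
}

# Write main.tf.json; `terraform init && terraform apply` would provision it.
with open("main.tf.json", "w") as f:
    json.dump(config, f, indent=2)
```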
Modernizing application monitoring
The easiest way to ensure reliability is with a simple and consistent monitoring method across hybrid cloud applications. To get ahead of issues before they reach users, teams need tools that can pinpoint troubled microservices across complex hybrid environments.
New advances in monitoring that rely on site reliability engineering (SRE) golden signals and one-hop dependencies are key to shifting management from technology-based to service-based monitoring. This approach helps the site reliability engineer realize the value of modern application portability across hybrid clouds.
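To make the golden signals concrete, here is a generic toy Python sketch (not a product feature) that computes latency, traffic, errors and saturation from a window of request samples. In practice these figures come from the monitoring stack rather than in-process lists.

```python
# Toy illustration of the four SRE "golden signals" -- latency, traffic,
# errors, saturation -- computed from a window of request samples.
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class Sample:
    latency_ms: float
    is_error: bool

def golden_signals(samples, window_seconds, capacity_rps):
    latencies = sorted(s.latency_ms for s in samples)
    if len(latencies) >= 2:
        p95 = quantiles(latencies, n=20)[18]      # 95th percentile latency
    else:
        p95 = latencies[0] if latencies else 0.0
    traffic_rps = len(samples) / window_seconds   # request rate over the window
    error_rate = sum(s.is_error for s in samples) / max(len(samples), 1)
    saturation = traffic_rps / capacity_rps       # fraction of known capacity in use
    return {"latency_p95_ms": p95, "traffic_rps": traffic_rps,
            "error_rate": error_rate, "saturation": saturation}

if __name__ == "__main__":
    window = [Sample(42.0, False), Sample(55.3, False),
              Sample(120.7, True), Sample(38.1, False)]
    print(golden_signals(window, window_seconds=60, capacity_rps=10))
```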
IBM Cloud App Management delivers a management solution for hybrid, multicloud applications. Designed for high-scale, highly resilient applications and crafted to support cloud operations and Kubernetes, IBM Cloud App Management supports DevOps, site reliability engineers and IT ops with app-centric monitoring of microservice-based applications.
With the aim of helping companies through their modernization journeys, IBM is the only vendor providing a completely integrated tool set. IBM offers an end-to-end solution to cover every aspect of an organization’s transformational journey without forcing it to rip and replace its traditional infrastructure.
The post Simplifying complex modernization strategies with the right tools appeared first on Cloud computing news.
Source: Thoughts on Cloud

Providing around-the-clock call center support using a cognitive virtual assistant

According to Investopedia, financial technology has grown explosively since the mobile internet and smartphone revolution. Originally referring to computer technology applied to the back office of banks or trading firms, “fintech” now describes a variety of financial activities, from depositing a check with a smartphone to managing investments, generally without the assistance of a person.
Headquartered in Taipei, Taiwan, Allianz Taiwan Life Insurance provides private and corporate customers an array of life, accident and health insurance products.
Because the population in Taiwan is so tech savvy and keen to try new things, Allianz constantly strives to innovate to keep up with the demands of customers.
We at Allianz knew that artificial intelligence (AI) technology would be a way to enhance the customer experience and thus created a cognitive virtual assistant called “Allie.”
A virtual assistant any time
Allie is fluent in Mandarin and handles 80 percent of the inbound customer requests to Allianz call centers. Callers can ask almost any question related to insurance policies at any time of day. Allie is not just a database for FAQs; she can also handle policy changes. Her answers avoid insurance jargon and complex terminology: Allie explains things in simple ways that people understand while following regulations and legal compliance guidelines.
We are focusing more now on the human component. To do that, we’ve included some small talk capabilities. Many customers begin their chats with questions like “How’s the weather?” and we think that enabling Allie to have natural conversations with people will drive social adoption.
Taiwanese people appreciate Allie’s cute answers to these types of questions, but in the end, it’s still an insurance business, so Allie keeps the conversation professional.

Working with IBM
Allianz worked with IBM Global Business Services to develop the virtual assistant. Allie uses the IBM Watson Assistant service and runs on IBM Cloud.
We chose IBM not only because we believe IBM Watson is one of the strongest AI engines on the market, but also because IBM developed a sort of middle layer, which enables our data to remain on-premises.
When the customer is engaged with Allie, the question that the customer is asking goes to IBM Watson on the cloud. Watson understands the question, determines the intent, then transfers the request to the middle layer, which either calls upon our back-end technology or provides the answer to the customer, but not through the cloud. This means that policy inquiry data and policy change items are more secure, which, from a legal and regulatory point of view, is a very smooth solution.
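To illustrate the flow described here, below is a hedged Python sketch of such a middle layer: a small webhook that receives the intent Watson has determined and fulfils it against on-premises systems, so policy data never leaves the data center. The endpoint shape, intent names and backend helper functions are hypothetical, not Allianz's or IBM's actual interfaces.

```python
# Hypothetical on-premises "middle layer": Watson Assistant determines the
# intent in the cloud, then this service fulfils it against back-end systems.
from flask import Flask, request, jsonify

app = Flask(__name__)

def lookup_policy(policy_id):                         # hypothetical on-prem policy lookup
    return {"policy": policy_id, "status": "active"}

def change_policy_address(policy_id, new_address):    # hypothetical change transaction
    return {"policy": policy_id, "address_updated": True}

@app.route("/assistant/fulfill", methods=["POST"])
def fulfill():
    payload = request.get_json(force=True)
    intent = payload.get("intent")
    entities = payload.get("entities", {})
    if intent == "policy_inquiry":
        result = lookup_policy(entities.get("policy_id"))
    elif intent == "policy_change_address":
        result = change_policy_address(entities.get("policy_id"), entities.get("address"))
    else:
        result = {"error": "unhandled intent"}
    # Only the rendered answer travels back toward the assistant; policy data stays on-prem.
    return jsonify(result)

if __name__ == "__main__":
    app.run(port=8443)
```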
The middle layer is open source, so we can continue training IBM Watson and adding more to the knowledge base as well as integrate APIs all on our own. IBM delivered intensive training and our IT and business staff are able to execute certain scenarios without additional assistance.
So, for example, if we need to make a quick update, we can do this without any support from IBM. We are not totally dependent and instead, we can work with IBM on more complex aspects of the project.
Our first discussions with IBM about developing a virtual assistant were around October 2017. We officially kicked off the project in November 2017 and launched at the end of May 2018. Our first minimum viable product actually had a very broad scope, with more than 20 APIs integrated to retrieve policy inquiry and policy change information.
We are very proud of what we have achieved with IBM in such a short period of time.
Positive customer feedback
We launched with a big media event together with IBM last year, and the feedback here was very positive from both the market and customers. Allie’s customer rating is 4.5 out of 5 stars, and the company’s Net Promoter Score (NPS) continues to increase, suggesting that Allianz customers would recommend Allianz to others.
Another key performance indicator (KPI) that we identified, which was surprising, actually, is that 45 percent of our requests come in before and after business hours. This is quite a high number, and before Allie, we were not able to answer all these requests because our call center is only open from 9 AM to 6 PM.
The next step will be to expand Allie across channels to offer virtual assistant capabilities to agents and bankers in addition to customers.
Read the case study for more details.
The post Providing around-the-clock call center support using a cognitive virtual assistant appeared first on Cloud computing news.
Source: Thoughts on Cloud

Return of the Smesh (Spinnaker Shpinnaker and Istio Shmistio to make a Smesh! Part 2)

One of the first things I learned on my sojourn through the open source world is that there are ALWAYS new and different approaches to building a better mousetrap when it comes to component design within a given architecture, and that a single project doesn’t usually contain all of the answers to the questions that come up when developing new application architectures.
Yes, the Kubernetes framework does satisfy a host of application needs in an acceptable manner for most applications. But what happens when your needs become more and more dependent on the flow of data between components and the distances between the providing resources become greater? Issues such as Quality of Service (QoS) become very important, for one thing. What if there’s a greater need for secured access to the individual services? These issues point to needs not addressed within the Kubernetes framework itself. This is where the concept of the Smesh (Service Mesh) comes into being to fill the need.
Before we go right to the heart of the Smesh, let’s take a closer look at the microservices architecture and the needs it is designed to address.
The Microservices Architecture
Martin Fowler, renowned British author and software developer, described the microservice architectural style as “an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms,” often via an HTTP resource or API.
Providing a native microservice-capable platform such as Kubernetes is essential to supporting the Microservices Architecture properly.
Below is an example of how the Microservices Architecture is laid out, and a rudimentary diagram of how the services interact:

Istio is a service mesh layered on top of the K8s framework to support the definition of authority between services, enhance bandwidth performance, and control the flow of data between microservices.
What is a Smesh (Service Mesh)? Regarding Istio and other tools…
A service mesh is a configurable infrastructure layer for a microservices application. It makes communication between service instances flexible, reliable, and fast. The mesh provides service discovery, load balancing, encryption, authentication and authorization, support for the circuit breaker pattern, and other capabilities.
William Morgan described the service mesh as “a dedicated infrastructure layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application.”
The service mesh technology comes with its own lexicon of new terms for old features and capabilities to learn and understand. Some of the more important terms and concepts are listed below for reference:

Container orchestration framework – Kubernetes is the most common framework filling this need, but there are others.
Services vs. service instances – There is a difference between the term service and the term service instance: the service is the logical definition, while a service instance is a running copy of that service.
Sidecar proxy – A sidecar proxy attaches itself to a specific service instance. It is managed by the orchestration framework and handles intercommunication between all the other proxies, reducing demand on the instances themselves.
Service discovery – This capability enables the different services to “discover” each other when needed. The Kubernetes framework keeps a list of instances that are healthy and ready to receive requests.
Load balancing – In a service mesh, load balancing keeps the least busy instances at the top of the stack, so that incoming requests are routed to them first and the busiest instances are not overwhelmed while idle capacity goes unused.
Encryption – Instead of having each of the services provide their own encryption/decryption, the service mesh can encrypt and decrypt requests and responses instead.
Authentication and authorization – The service mesh can validate requests BEFORE they are sent to the service instances.
Support for the circuit breaker pattern – The service mesh can support the circuit breaker pattern, which can stop requests from ever being sent to an unhealthy instance. We will discuss this specific feature later.

The combined use of these features and capabilities provides the means for traffic shaping or QoS. Traffic shaping, also known as packet shaping, is a type of network bandwidth management that manipulates and prioritizes network traffic to keep heavy use cases from affecting other users. QoS, another means of traffic shaping, recognizes the various types of traffic moving over your network and prioritizes them accordingly. Istio, for example, provides a uniform way to connect, secure, manage and monitor microservices, and provides traffic shaping between microservices while capturing the telemetry of the traffic flow for prioritizing network traffic.
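As a concrete, purely illustrative example of traffic shaping, the sketch below generates an Istio VirtualService that splits traffic between two subsets of a service. The host, subsets and weights are assumptions for the example; the resulting YAML would be applied with `kubectl apply -f`.

```python
# Minimal sketch of Istio traffic shaping: a VirtualService that sends 90% of
# traffic to subset v1 and 10% to subset v2 of the "reviews" service.
import yaml  # PyYAML

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

with open("reviews-virtualservice.yaml", "w") as f:
    yaml.safe_dump(virtual_service, f, default_flow_style=False)
```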
Istio also brings circuit-breaking capability to the application development process. Circuit breaking helps guard against partial or total cascading network communication failures by maintaining a view of the health and viability of each service instance. The circuit-breaker feature determines whether traffic should continue to be routed to a given service instance; as a design consideration, the application developer must decide what the application should do when a service instance has been marked as not accepting requests.
Envoy, which is integrated as the backend proxy for Istio, treats its circuit-breaking functionality as a subset of load balancing and health checking. Envoy separates its routing methods from the communication to the actual backend clusters, eliminating the routes to those service instances that are unhealthy or unable to accept requests. This method allows for the creation of many different potential routes to map traffic to the proper healthy, request-accepting backends.
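A hedged sketch of what that looks like in Istio configuration: a DestinationRule that caps connections and ejects instances that keep failing. The target host and every threshold here are illustrative assumptions, not recommendations.

```python
# Hedged sketch of Istio/Envoy circuit breaking via a DestinationRule.
import yaml  # PyYAML

destination_rule = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "DestinationRule",
    "metadata": {"name": "reviews-circuit-breaker"},
    "spec": {
        "host": "reviews",
        "trafficPolicy": {
            "connectionPool": {
                "tcp": {"maxConnections": 100},
                "http": {"http1MaxPendingRequests": 10, "maxRequestsPerConnection": 1},
            },
            "outlierDetection": {
                "consecutiveErrors": 5,     # eject an instance after 5 consecutive errors
                "interval": "10s",          # how often instances are scanned
                "baseEjectionTime": "30s",  # minimum time an instance stays ejected
                "maxEjectionPercent": 100,
            },
        },
    },
}

with open("reviews-destinationrule.yaml", "w") as f:
    yaml.safe_dump(destination_rule, f, default_flow_style=False)
```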
Below is a diagram of the Istio architecture for reference:

The Istio components and their functions are listed below:
Control plane:

Istio-Manager: provides routing rules and service discovery information to the Envoy proxies.
Mixer: collects telemetry from each Envoy proxy and enforces access control policies.
Istio-Auth: provides “service to service” and “user to service” authentication. This component also converts unencrypted traffic to TLS-based traffic between services, as needed.

Data plane:

Envoy: a feature-rich proxy managed by the control plane components. Envoy intercepts traffic to and from the service, applying routing and access policies according to the rules set in the control plane.

So those are the basics. Next time, we’ll go ahead and install Istio and some sample apps and take it for a spin.
The post Return of the Smesh (Spinnaker Shpinnaker and Istio Shmistio to make a Smesh! Part 2) appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Kubeflow on OpenShift

(Image source: opensource.com) Kubeflow is an open source project that provides Machine Learning (ML) resources on Kubernetes clusters. Kubernetes is evolving to be the hybrid solution for deploying complex workloads on private and public clouds. A fast-growing use case is using Kubernetes as the deployment platform of choice for machine learning. Infrastructure engineers […]
The post Kubeflow on OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

For a better cloud strategy, look to data

Enterprise cloud strategy is undergoing revolutionary changes.
While initial deployments focused largely on back-office support functions and migrating workloads to off-site infrastructure, the modern cloud has become a go-to solution for top-tier workloads and forward-looking initiatives, such as mobile data and the Internet of Things (IoT).
While most people have traditionally viewed the cloud as a warehouse for generic computing, storage and networking resources, that model is losing ground. Increasingly, enterprise cloud environments reflect a highly optimized and customized infrastructure.
Attaining this state of digital nirvana can seem imposing, since complex cloud environments with both public and private components require more oversight to manage. Luckily, the key to optimization is already at your fingertips: It’s your data.
Better infrastructure through data
With the right data collection and analysis, companies can very easily determine what’s needed to maintain optimal performance and begin shifting workloads to the appropriate resources, even for highly dynamic use cases.
According to Ken Christiance, distinguished engineer on the IBM Technology, Innovation and Automation team, one of the key elements of an optimized cloud is proper workload balance. This can be achieved through a capacity management analytics (CpMA) solution, such as the Watson-powered Densify platform. Using deep visibility and detailed reporting tools, Densify can identify problems such as over- and under-allocated virtual machines, inefficient workload placement, imbalances between hosted resources and application needs, and inadequate cluster capacity. This helps infrastructure managers to stay on top of working conditions within the cloud, or, even better, feed this performance data into an automation engine that continuously fine tunes the environment for optimal performance.
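For a sense of what such right-sizing analysis involves (this is a toy illustration, not the Densify product), the sketch below compares each VM's allocated vCPUs against its observed peak utilization and flags over- and under-allocated machines. The data and thresholds are assumptions for the example.

```python
# Toy right-sizing heuristic: flag VMs whose allocation is far above or below
# what their observed peak CPU usage actually needs.
vms = [
    {"name": "app-01", "vcpus_allocated": 8, "peak_cpu_used": 1.2},
    {"name": "db-01",  "vcpus_allocated": 4, "peak_cpu_used": 3.9},
    {"name": "web-03", "vcpus_allocated": 2, "peak_cpu_used": 0.3},
]

def classify(vm, low=0.30, high=0.90):
    utilization = vm["peak_cpu_used"] / vm["vcpus_allocated"]
    if utilization < low:
        return "over-allocated"     # paying for capacity the workload never uses
    if utilization > high:
        return "under-allocated"    # at risk of starving the workload
    return "right-sized"

for vm in vms:
    print(f'{vm["name"]}: {classify(vm)}')
```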
A recent post by Tony Efremenko, executive architect at the IBM Cloud Garage, highlights how this works in practice on another Watson-powered platform, Watson Studio. This level of optimization extends beyond merely tweaking infrastructure to meet current demands; it helps build predictive models that can determine what will be needed in the future, too. For example, microservices tend to be highly dynamic and can be employed in myriad ways. However, by putting Watson Studio to work, supported by real-time data collection, Efremenko can establish the right cost structure for multiple services regardless of whether they were developed in-house or on third-party software-as-a-service (SaaS) platforms.
AI in the mix
The increasing complexity of modern cloud environments all but demands that the enterprise use artificial intelligence (AI) platforms such as Watson in the management stack sooner rather than later. James Kobielus of SiliconANGLE noted recently that a successful multicloud strategy depends on the ability to automate multiple tasks, including log analysis, anomaly detection, root-cause diagnostics and closed-loop issue remediation. If these functions remain manual, performance will surely suffer and costs will become prohibitive. Going forward, look for AI to work its way into both infrastructure and application-layer management.
One of the lesser-known challenges of AI, however, is the tendency for operator bias to creep into learning algorithms. When applied to cloud management, this causes decisions regarding resource allocation and other functions to veer away from true optimization and more toward administrative preferences. IBM is helping the enterprise combat this issue with a new Trust and Transparency Service that detects bias and sheds light on the decision-making process. The system works with all leading machine-learning frameworks and features a number of customization tools to tailor performance to specific enterprise clouds.
AI is only the latest advanced technology to infiltrate distributed virtual infrastructure, but by no means will it be the last. To help navigate these uncharted technological waters, enterprises will want to partner with a proven technology leader that is not only at the forefront of cloud infrastructure, but has a vision for the future as well.
Properly making use of data that already exists within the cloud environment is key to ensuring enterprise cloud spending is producing the highest return. As data users become more accustomed to getting what they want when they want it, the only way to remain relevant will be to put in place a highly optimized, customized cloud ecosystem.
Learn more about how IBM can help you create a hybrid cloud environment that’s purpose-fit for your enterprise. Read how IBM can help create the right infrastructure for your big data strategy and improve performance. Register for the full report on infrastructure.
The post For a better cloud strategy, look to data appeared first on Cloud computing news.
Source: Thoughts on Cloud