The path to modernizing mission-critical applications

In a world where business disruption is the rule and not the exception, enterprises need IT environments designed to yield real innovation by enabling continuous, iterative development. Flexible, agile platforms allow organizations to develop new business models, deliver product innovation and faster deployment, and enable employee productivity and customer engagement.
Cloud adoption continues to grow, while migration to both public and private cloud infrastructure is already mainstream. Broader IT transformation efforts are picking up steam as well, including modernization with containers and microservices, and adoption of cloud-native tools and DevOps processes. Organizations must simultaneously develop modernization strategies for legacy workloads, adopt cloud-native approaches for application development, and integrate the old and the new within a common management and operations framework.
Overcoming four common application modernization challenges
According to 451 Research’s recent Voice of the Enterprise Digital Pulse: Workloads & Key Projects survey, over the next two years, enterprises are expected to expand their workload deployment on public cloud from 22 percent to 39 percent. However, private cloud is still the primary workload venue now, growing from 27 percent to 34 percent of workloads. Why? Most organizations, particularly large enterprises, don’t have the luxury of starting all over again on public cloud and therefore prefer to modernize mission-critical workloads in place. Drivers of this IT strategy include using existing infrastructure investments, ensuring security and maintaining application and data dependencies.
What does this mean for the path to application modernization? It’s not a question of “if.” It’s all about when and how. There are multiple challenges to overcome, which include the following:

Deconstructing the monolith. Taking stock of the existing estate generally involves deciding which applications to take on first. The assessment should take into account factors such as the business value of the application; technology, data and business process dependencies; and compatibility with the most basic type of modernization, VM containerization.
Access to talent. Cloud platform skills and cloud-native expertise remain key areas in which organizations are facing serious skills gaps. At the same time, however, skills related to heritage application architectures and customized software are also crucial to the success of modernization initiatives.
Culture. Cloud-native involves a new approach to application development and IT operations. A culture clash between agile IT and traditional work processes can create a bumpy road for organizations focused on modernization.
Ongoing management and orchestration. Ongoing application modernization in increasingly hybrid IT environments requires unified management platforms that can intelligently map workloads to and between IT execution venues, as well as orchestrate executions and performance across all venues, and enable automation.

Converting to a cloud operating model
Successful application modernization is crucial to digital transformation. Organizations that focus on cloud-enabling their applications and adopting cloud-native development techniques are empowering business agility and improving customer experiences. As more and more businesses begin to adopt multiple cloud services, conversion to the cloud operating model is key to creating and maintaining a competitive advantage.
Read the results of the 451 Research Voice of the Enterprise Digital Pulse: Workloads & Key Projects survey to learn more and explore the benefits of the IBM Cloud.
The post The path to modernizing mission-critical applications appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Secure your Microservices

Microservices architectures are becoming the de facto way developers think about how their applications are constructed. But security remains a top concern for most organizations. Therefore, it is important to understand the intersection of security and microservices. While nothing can guarantee your application will be secure, we can look at some of the capabilities that can be leveraged to address security concerns with microservices.
OpenShift Service Mesh applies Istio’s three main principles to this security paradigm. The first is security by default: users don’t need to change a single line of code to use its security features. The second is defense in depth: users can integrate their own security services with the ones the service mesh provides (for instance, combining it with the Kubernetes RBAC system). And last but not least, zero-trust network: the service mesh does not rely on security measures installed in the underlying platform. Strong identities, mTLS and RBAC are the most common features. Let’s explore mTLS and how Kiali can help with that.
Start with mTLS
The goal of this section is to implement mTLS communications between all the services in the travel-agency namespace (but not in travel-portal). Before getting down to details, let’s understand what mTLS is.
mTLS is short for Mutual TLS. It is a protocol that applies TLS authentication in both directions: client to server and server to client. It is a popular authentication protocol for machine-to-machine communications where it is important not only to verify that the service is legitimate but also that the client is who it says it is.
In order to achieve our goal, it is necessary to manage two different Istio objects: the DestinationRule and the Policy. Let’s first add the DestinationRule object that will force all the workloads of the namespace to initiate connections using only mTLS; a sketch of such an object follows below.
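For reference, a DestinationRule along these lines enforces client-side mTLS for every service reached from the namespace. The object name here is made up, and the host pattern assumes the travel-agency demo services:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  # The name is arbitrary; the namespace matches the demo
  name: enable-mtls
  namespace: travel-agency
spec:
  # Apply to all services resolved from this namespace
  host: "*.travel-agency.svc.cluster.local"
  trafficPolicy:
    tls:
      # ISTIO_MUTUAL uses the workload certificates that Istio manages
      mode: ISTIO_MUTUAL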

In the overview page, Kiali shows the first hint of the mTLS status for each available project. As you can see for the ‘travel-agency’ namespace, there is a hollow lock icon next to the name indicating ‘mTLS partially enabled’. This means that either security in that namespace is not properly enabled (as in this case) or there are communications without security enabled.
In addition, Kiali shows anomalies in the health of both namespaces. This suggests there is an error in the last DestinationRule we added. Let’s see what Kiali’s validations say about the validity of that Istio object.

The screen right above shows the DestinationRule definition we just added through the Kiali editor. In this section, besides letting you browse, edit and delete all kinds of service mesh objects, Kiali shows the result of its validity analysis of those objects. (All the validations provided by Kiali can be found here.)
In this example you can see that this DestinationRule has an error on the mode field: “Policy enabling namespace-wide mTLS is missing”. This means that the service mesh needs a Policy enforcing that all the services in the travel-agency namespace allow only mTLS connections. As a result of this error, line 19 is highlighted in red. A sketch of such a Policy follows below.
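For reference, a namespace-wide mTLS Policy in Istio’s v1alpha1 authentication API looks roughly like this; the namespace is taken from the example:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  # A namespace-wide policy must be named "default"
  name: default
  namespace: travel-agency
spec:
  peers:
  # Accept only mTLS connections for all workloads in the namespace
  - mtls: {}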
After adding the Policy, Kiali removes the validation error:

At this point, mTLS is enabled for that namespace. But what if we’re a little paranoid and want to make sure? Kiali has a useful security layer in the graph that shows which connections are using mTLS, so at a glance we can confirm that mTLS is enabled.

On one hand, as promised, the requests that responded with 5xx errors are gone because the error was fixed by adding the Policy. On the other hand, you can see that each edge of the graph now has a lock icon next to it. This means that some or all of the traffic is using mTLS. On the side panel, Kiali shows the percentage of traffic using mTLS, ranging from 0 to 100%. At the beginning of the transition from non-mTLS to mTLS you will see numbers lower than 100%. As the traffic between services starts flowing, this number should reach 100%.
Going back to the overview page, you can see that the lock next to travel-agency is now solid, meaning that all the traffic within the namespace is configured to use mTLS.

One step is now complete
For most customers, application security comprises a number of steps, and mTLS is only one of them. However, when moving to microservices and using a service mesh, being aware of the secure communication methods available can help you plug one potential security hole. The visualization of the security status provided by Kiali is one way you can quickly identify known holes as you work toward your application security goals.
The post Secure your Microservices appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Operators on OCP 4.x

In this video, we will cover an introduction to operators, the use cases they address, how operators are architected to extend Kubernetes, and how OpenShift 4.x uses operators as its core technology. We will look at the types of operators in OpenShift 4.x and also deploy an application using an operator.
Previous videos:
#1 Installation
#2 Exploring Clusters
#3 Operators
The post Operators on OCP 4.x appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Revamped OpenShift All-in-One (AIO) for Labs and Fun

DISCLAIMER: THE ALL-IN-ONE (AIO) OCP DEPLOYMENT IS A COMMERCIALLY UNSUPPORTED OCP CONFIGURATION INTENDED FOR TESTING OR DEMO PURPOSES.
Back in the 3.x days, I documented the All-in-One (AIO) deployment of OCP 3.11 for lab environments and other possible use cases. That blog post is available here: https://blog.openshift.com/openshift-all-in-one-aio-for-labs-and-fun/
With OCP 4.2 (and OCP 4.3 nightly builds) the all-in-one (AIO) deployment is also possible. Before going into the details, I should highlight that this particular setup has the same DNS requirements and prerequisites as any other OCP 4.x deployment.
This approach is NOT the best option for a local development environment on a laptop. This AIO is for external deployments in a home lab or cloud-based lab. If you are looking for an OCP 4.x development environment to run on a laptop, I highly recommend Red Hat CodeReady Containers, which is a maintained solution for that specific purpose: https://developers.redhat.com/products/codeready-containers
Back to the all-in-one deployment. There are two approaches to achieve the OCP4.2 all-in-one configuration:

Option 1: Customizing the manifests before installation

During the installation process, generate the manifests with "openshift-install create manifests --dir ./aio". Then proceed to update the resulting ingress manifest in "./aio/manifests" and inject replacement manifests for the CVO and etcd-quorum-guard. After this, generate the ignition files for the installation.
This is highly customizable and can be set up so it does not require any post-install configuration. This method requires advanced OpenShift skills and is NOT covered in this blog post.

Option 2: Customize the cluster during and after deployment

After the "bootkube.service" bootstrap process completes, update the CVO (Cluster Version Operator) to disable management of the etcd-quorum-guard Deployment, then scale the etcd-quorum-guard and the ingress controller down to one replica. This is the approach covered in this blog.

The Design
NOTE: This document assumes the reader is familiar with the OCP 4.x installation process.
The final deployment will have cluster admins, developers, application owners (see #1), automation tools (see #2) and users of the applications (see #3) going to the same IP but different ports or routes. The logical view is as shown in the following diagram:

The host characteristics that I tested for this configuration:

VM or bare-metal
8 vCPU
16GB RAM
20GB Disk

Before Deployment

Set up the install-config.yaml to deploy a single master and no workers:

apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 1
metadata:
  name: aio
networking:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: ''
sshKey: 'ssh-rsa AAA...'

During Deployment

During installation there is still a need for a temporary external load balancer (or, as a poor man's version, modified DNS entries).
For the installation prepare the DNS equivalent to this:

aio.example.com
etcd-0.aio.example.com
apps.aio.example.com
*.apps.aio.example.com
api-int.aio.example.com
api.aio.example.com

# etcd Service Record
_etcd-server-ssl._tcp.aio.example.com. IN SRV 0 0 2380 etcd-0.aio.example.com.
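With the DNS records in place, generate the ignition files and start the installation as usual; a sketch, assuming an ./aio asset directory containing the install-config.yaml shown above:

# Generate ignition files from the install-config.yaml in ./aio
openshift-install create ignition-configs --dir ./aio
# Boot the node from the ignition config, then wait for bootstrap to finish
openshift-install wait-for bootstrap-complete --dir ./aio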

After bootkube.service completes, modify the DNS:

aio.example.com
etcd-0.aio.example.com
apps.aio.example.com
*.apps.aio.example.com
api-int.aio.example.com
api.aio.example.com

# etcd Service Record
_etcd-server-ssl._tcp.aio.example.com. IN SRV 0 0 2380 etcd-0.aio.example.com.
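For illustration only, here is roughly what the two stages can look like in a zone file. The addresses are made up (192.0.2.10 for the temporary load balancer, 192.0.2.20 for the AIO node); substitute your own:

; During installation: API records point at the temporary load balancer
aio.example.com.          IN A 192.0.2.20
etcd-0.aio.example.com.   IN A 192.0.2.20
*.apps.aio.example.com.   IN A 192.0.2.20
api.aio.example.com.      IN A 192.0.2.10
api-int.aio.example.com.  IN A 192.0.2.10

; After bootkube.service completes: repoint the API records at the node
api.aio.example.com.      IN A 192.0.2.20
api-int.aio.example.com.  IN A 192.0.2.20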

The single node will be shown with both roles (master and worker)

$ oc get nodes
NAME   STATUS   ROLES           AGE   VERSION
aio    Ready    master,worker   33m   v1.16.2

Set etcd-quorum-guard to unmanaged state

oc patch clusterversion/version --type='merge' -p "$(cat <<- EOF
spec:
  overrides:
  - group: apps/v1
    kind: Deployment
    name: etcd-quorum-guard
    namespace: openshift-machine-config-operator
    unmanaged: true
EOF
)"

Downscale etcd-quorum-guard to one:

oc scale --replicas=1 deployment/etcd-quorum-guard -n openshift-machine-config-operator
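To confirm the change took effect, a standard oc query will do:

# The deployment should now report a single replica and stay that way
oc get deployment etcd-quorum-guard -n openshift-machine-config-operator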

Downscale the number of routers to one:

oc scale --replicas=1 ingresscontroller/default -n openshift-ingress-operator

(Recommended) Downscale the console, authentication, OLM and monitoring services to one replica:

oc scale --replicas=1 deployment.apps/console -n openshift-console
oc scale --replicas=1 deployment.apps/downloads -n openshift-console

oc scale --replicas=1 deployment.apps/oauth-openshift -n openshift-authentication

oc scale --replicas=1 deployment.apps/packageserver -n openshift-operator-lifecycle-manager

# NOTE: When enabled, the operator will auto-scale these services back to their original replica counts
oc scale --replicas=1 deployment.apps/prometheus-adapter -n openshift-monitoring
oc scale --replicas=1 deployment.apps/thanos-querier -n openshift-monitoring
oc scale --replicas=1 statefulset.apps/prometheus-k8s -n openshift-monitoring
oc scale --replicas=1 statefulset.apps/alertmanager-main -n openshift-monitoring

(Optional) Set up the image-registry to use ephemeral storage.

WARNING: Only use ephemeral storage for the internal registry for testing purposes.
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"storage":{"emptyDir":{}}}}'

oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"managementState":"Managed"}}'

NOTE: Wait until the image-registry operator completes the update before using the registry.
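To watch the registry settle after the patches (plain oc, nothing specific to this setup):

# Wait for the image-registry cluster operator to report Available=True
oc get clusteroperator image-registry
oc get pods -n openshift-image-registry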
The post Revamped OpenShift All-in-One (AIO) for Labs and Fun appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Building an End-to-End 5G Cloud Native Network with OpenShift

In case you missed KubeCon 2019 in San Diego, the CNCF have been very diligent about putting the talks up online. That includes the 5G-focused keynote delivered by Azhar Sayeed with Heather Kirksey (Linux Foundation) and Fu Qiao (China Mobile). A short summary of the talk is below, and naturally, the video is above.
It’s no secret that Kubernetes has gained significant traction in the cloud and enterprise software ecosystem, but less widely known is how this momentum is now moving into global telco networks as the next major area of adoption. Building on the momentum from a live keynote demo in Amsterdam last fall (see the demo here), a team made up of volunteers from several project communities, companies, and network operators has taken a cloud native approach to developing an E2E 5G network demonstration built on open source infrastructure. The demo uses a live prototype running in labs around the world using k8s and other open source technologies to deliver a fully containerized 5G network on stage in San Diego. The demo showcases both how the telecom industry is using cloud native software to build out their next-gen networks, and also shows solution providers what’s possible in this exciting new space.
The post Building an End-to-End 5G Cloud Native Network with OpenShift appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Cluster Scaling

In this video we will look at different ways to scale the worker nodes of an IPI-based cluster up and down. We will see how easy it is to manually scale a cluster up and down, and we will cover the architectural concepts behind the scaling. Next we will look at autoscaling: we will create the necessary OpenShift components, generate load, and watch the cluster autoscale. We will see how the cluster sizes itself up and down based on the workload.
Previous posts:
#1 Installation
#2 Exploring Clusters
The post Cluster Scaling appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Custom wildcard DNS for OpenShift Container Platform 4.2

With Red Hat OpenShift Container Platform 4, Red Hat introduced automated cluster provisioning using the openshift-install binary. Installer-based cluster provisioning enables users to deploy a fully functioning OpenShift Container Platform cluster by running a single command (openshift-install create cluster).
Cluster parameters (like machine CIDR, cluster network, number of masters and workers, or VM size) can be changed according to user needs by updating the install-config.yaml file before cluster installation.
When running openshift-installer to provision OpenShift Container Platform 4.2, wildcard DNS is set to *.apps.<cluster name>.<base domain> by default. 
Sometimes a user might want a different wildcard DNS domain for applications. In order to change the default wildcard DNS, the user needs to generate the cluster manifest files and change the domain name. The following procedure explains how this can be achieved.
Make sure prerequisites are completed: https://docs.openshift.com/container-platform/4.2/welcome/index.html 
Generate cluster manifests:
$ openshift-install create manifests --dir=output
? SSH Public Key /Users/user1/.ssh/id_rsa.pub
? Platform azure
? Region uksouth
? Base Domain cloud-ninja.name
? Cluster Name openshift
? Pull Secret [? for help] ****************************************
$

Once you have generated the cluster manifests, navigate to the directory where they were created.
Cluster ingress is controlled by the Cluster Ingress Operator and the Cluster DNS Operator. In the manifests directory you’ll find the cluster-ingress-02-config.yml and cluster-dns-02-config.yml files.
In order to have custom wildcard DNS names for your routes, change the domain variable inside the cluster-ingress-02-config.yml file to the one you would like to use. Make sure it is part of the same base domain:
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: cluster
spec:
  domain: apps.openshift.cloud-ninja.name
status: {}

The domain spec can be changed to e.g. prod.openshift.cloud-ninja.name, or even to the base domain cloud-ninja.name.
If you are going to use the base domain for your wildcard DNS, the search domain needs to be changed as well. Open cluster-dns-02-config.yml and change the baseDomain spec:
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: cloud-ninja.name
  privateZone:
    id: /subscriptions/d480a86/resourceGroups/openshift-pk8zr-rg/providers/Microsoft.Network/dnszones/openshift.cloud-ninja.name
  publicZone:
    id: /subscriptions/d480a86/resourceGroups/ocp-common/providers/Microsoft.Network/dnszones/cloud-ninja.name
status: {}

Note that your API URL will still be under the <cluster name>.<base domain> subdomain.
Once the manifests are updated, generate the Ignition config files and deploy the cluster by running openshift-install create cluster --dir=<your ignition configs directory>:
$ openshift-install create ignition-configs --dir=output
INFO Consuming "Common Manifests" from target directory
INFO Consuming "Worker Machines" from target directory
INFO Consuming "Master Machines" from target directory
INFO Consuming "OpenShift Manifests" from target directory
$ openshift-install create cluster --dir=output
INFO Consuming "Worker Ignition Config" from target directory
INFO Consuming "Master Ignition Config" from target directory
INFO Consuming "Bootstrap Ignition Config" from target directory
INFO Creating infrastructure resources...
INFO Waiting up to 30m0s for the Kubernetes API at https://api.openshift.cloud-ninja.name:6443...
INFO API v1.14.6+2e5ed54 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.openshift.cloud-ninja.name:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/user1/azure-openshift/output/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.cloud-ninja.name
INFO Login to the console with user: kubeadmin, password: z5eEq-D3BCn-ChpWR-LnXMr
$

Once the installation is done, you’ll notice that the cluster console URL uses the new wildcard DNS suffix. Open the console and test the wildcard DNS by deploying a test application.

With OpenShift Container Platform 4.2, the new Developer console can be used to deploy applications (more on the developer experience in OpenShift Container Platform 4.2 here):

Once the application container is built and deployed, access the application via its route URL, e.g. http://nodejs-test.cloud-ninja.name. The same smoke test can also be done from the CLI, as sketched below.
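This is just a sketch: the sample repository and project name are assumptions, while the hostname matches the wildcard domain configured above:

# Deploy a sample Node.js application and expose it under the custom wildcard domain
oc new-project wildcard-test
oc new-app nodejs~https://github.com/sclorg/nodejs-ex
oc expose svc/nodejs-ex --hostname=nodejs-test.cloud-ninja.name
# The route should now resolve through the custom wildcard DNS record
curl -I http://nodejs-test.cloud-ninja.name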
The video below shows the whole procedure done on Microsoft Azure:
 

The post Custom wildcard DNS for OpenShift Container Platform 4.2 appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Community Blog Round Up 16 December 2019

We’re super chuffed that there’s already another article to read in our weekly blog round up – as we said before, if you write it, we’ll help others see it! But if you don’t write it, well, there’s nothing to set sail. Let’s hear about your latest adventures on the Ussuri river and if you’re NOT in our database, you CAN be by creating a pull request to https://github.com/redhat-openstack/website/blob/master/planet.ini.
Reading keystone.conf in a container by Adam Young
Step 3 of the 12 Factor app is to store config in the environment. For Keystone, the set of configuration options is controlled by the keystone.conf file. In an earlier attempt at containerizing the scripts used to configure Keystone, I had passed an environment variable into the script that would then be written to the configuration file. I realize now that I want the whole keystone.conf external to the application. This allows me to set any of the configuration options without changing the code in the container. More importantly, it allows me to make the configuration information immutable inside the container, so that the applications cannot be hacked to change their own configuration options.
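As an illustration of the idea only (the image name and host path here are placeholders, not from the original post), bind-mounting an external keystone.conf read-only keeps the configuration outside the image and immutable inside the container:

# Hypothetical example: mount keystone.conf from the host, read-only (:ro),
# with an SELinux relabel (Z) so the container can read it
podman run -d --name keystone \
  -v /opt/keystone/keystone.conf:/etc/keystone/keystone.conf:ro,Z \
  quay.io/example/keystone:latest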
Read more at https://adam.younglogic.com/2019/12/reading-keystone-conf-in-a-container/
Quelle: RDO

Why cloud-native development matters

Can we all agree that market research statistics should be taken with enough grains of salt to make your cardiologist worry? Good. Having said that, I’m now going to quote some market research statistics. But I do this not to focus on the numbers themselves, but on the fact that these numbers come from surveys and interviews of people just like you, and they underline the trends and expectations driving the IT world today.
A rapid shift to cloud over the next three years will drive enterprises to move 75 percent of existing non-cloud apps to cloud environments. This research from IBV also found that in three years about 95 percent of internally developed apps are expected to be deployed on the cloud, with 55 percent of newly developed apps designed as cloud-native. And the most impressive stat I’ve heard recently is that from 2018 through 2023, millions of new logical apps will be created; this stat from IDC equals the number created over the previous 40 years. These trends are contributing to the adoption of cloud-native development.
In the race to transform, enterprises embark upon their journey to cloud-native to deliver innovation at scale and at lower cost. Cloud-native applications do more than just run in the cloud; they’re designed and developed to maximize the economies of cloud. Cloud-native architectures and applications deliver faster time to market, higher scalability, in most cases superior customer experiences, simpler management, reduced cost through containerization and cloud standards, and more reliable systems without vendor lock-in.
Cloud native uses the strengths and accommodates the challenges of a standardized cloud environment. Adopting a cloud-native approach isn’t only about developing new generations of applications in better ways. It’s also about organizational culture. Enterprises must also transform to adopt a cloud environment successfully.
How and where to start the cloud-native journey
Containers and cloud-native applications have already been established and proven successful in large-scale cloud computing companies, but they are only now expanding into the enterprise. Enterprises are beginning to learn the technologies and change how they approach development and operations. Enterprises are building applications within increasingly diverse IT environments. These range from traditional on-premises; to cloud native, which embraces containers and Kubernetes; to a hybrid, multicloud model. Most enterprises are in research and experimentation mode, with only a very small segment having experience with or deployments of containers today.
The ability to innovate quickly while modernizing and using existing investments is key. The move to cloud native, along with the importance of maintaining current application performance levels, further highlights the pressing need to modernize operations and applications to move to cloud. To keep up with the competition, enterprises must regularly build new applications and update existing applications. To satisfy this demand, enterprises require an application platform that’s built on open source and open standards, that allows them to quickly build, test and deploy applications in a modern, microservices based architecture.
Three advantages of cloud-native development on IBM Cloud Pak for Applications
IBM Cloud Pak for Applications uses the power of open source technologies to help enterprises speed cloud-native application development and has some key advantages.
1. Broadest choice of industry runtimes
IBM Cloud Pak for Applications supports enterprise application needs through a choice of industry-leading runtimes, developer tools, modernization toolkits, DevOps, and Apps/Ops management.
2. Simplified build, deploy and management of applications
Enterprises can quickly build applications on any cloud, while providing the most straightforward path to modernize heritage applications. Kabanero.io, an open source project, which is an upstream for Cloud Pak for Applications, simplifies the build, deployment and management of applications. It offers an integrated experience from the creation of a cloud-native application on a developer’s laptop through testing and deployment in a container and on through the application’s ultimate managed lifecycle.
3. Modernization that maximizes existing investments
Enterprises can optimize their current investments, whether on-premises or in any public or private cloud. With IBM Cloud Pak for Applications, enterprises have the comfort of knowing they can modernize on their own timeline. They can realize ROI and continue their cloud journey without ripping and replacing or getting locked in with a particular vendor. And when ready to modernize, enterprises can take advantage of a rich set of transformation tools included in the IBM Cloud Pak for Applications.
Ready to find out more?

Register to join our webinar 21 January 2020 at 9 AM ET.
Learn more about IBM Cloud Pak for Applications on our website.
Download the analyst report about the value of IBM and Red Hat.
Watch the replay of our version 4 release webinar.
Take a tour of IBM Cloud Pak for Applications.

 
The post Why cloud-native development matters appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Exploring OpenShift 4.x Cluster

In this video we will explore the cluster installed in the last video: we will log into the cluster and configure an authentication provider. We will look at the structure of the cluster and an architecture overview of an HA installation. We will gain a deeper understanding of what runs on the master node versus the worker nodes and how the load balancers are set up. We will also look at the cloud provider to see all the infrastructure components created by the installer.
The post Exploring OpenShift 4.x Cluster appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift