A strategic approach to adopting cloud-native application development

Approximately three out of four non-cloud applications will move to the cloud within the next three years, according to a recent IBM report titled “The enterprise outlook on cloud-native development”. In today’s modern enterprise, optimizing the application lifecycle is critical: it can help companies keep up with consumer expectations, keep business operations agile, and speed the pace of innovation.
Cloud-native application development allows enterprises to capitalize on the full power of the cloud by delivering faster time to market, increased scalability, improved flexibility and better consumer experiences — all while reducing cost.
With the cloud-native approach, developers can quickly augment applications without delaying delivery or disrupting functionality by using independent components called microservices, which break down large monolithic applications into smaller components. Getting started with cloud-native requires making a few strategic decisions.
Getting started with cloud-native application development: A strategic approach
To begin, the path to cloud-native application development requires a good, hard look at existing applications. Should the business modernize existing applications or build new?
When to build new:
In some cases, it makes more sense to develop new applications by utilizing cloud-native development practices instead of working with an existing monolith. Corporate culture, perceived risks and regulatory compliance are some constraints that can contribute to this decision. Building new applications allows teams to develop applications unencumbered by previous design architectures, providing more room for developers to experiment and deliver innovation to users.
When to modernize:
However, in some instances, businesses don’t need to start writing new applications from scratch. Modernizing existing applications can help companies use previous investments and existing workflows while capitalizing on the agility, flexibility and scalability of the cloud. Whether teams containerize, extend, decompose or refactor, a traditional monolithic application can be updated into a cloud-native app.
3 principles for application development
Whether creating a new cloud-native application or modernizing an existing one, developers should keep to a consistent set of principles.
1. Follow the microservices architectural approach.
Break applications down to microservices, which allow the incremental, automated and continuous improvement of an application without causing downtime.
2. Rely on containers for maximum flexibility and scalability.
Containers package software with all its code and dependencies in one place, allowing the software to run anywhere. This allows maximum flexibility and portability in a hybrid multicloud environment. Containers also allow fast scaling up or down, with Kubernetes orchestrating containers according to user-defined configuration.
3. Adopt agile methods.
Agile methods speed the creation and improvement process. Developers can quickly iterate updates based on user feedback, allowing the working application version to match as closely as possible to user expectations.
Building cloud-native applications unleashes business benefits
Cloud-native application development is crucial for digital transformation and innovation. Enterprises that adopt cloud-native application development see a marked increase in efficiency, scalability and productivity, as well as improved user experience.
Read the smart paper “Build cloud native: Build Once, deploy anywhere” to learn more about cloud-native application development and the unique approach, tools and solutions offered by IBM for application innovation.
Source: Thoughts on Cloud

How Dynatrace and OpenShift Served Vital Information During the Woolsey Fire

Our partner, Dynatrace, has written a blog post and a case study covering our joint customer, the largest county in the United States. Normally, case studies and joint customer stories are strictly about business affairs, discussing ROI, OpEx and developer agility; tantalizing topics for IT folks, but not exactly the stuff of drama and danger. This particular case study, however, is about the county’s usage of Red Hat OpenShift and Dynatrace’s Davis AI during the Woolsey Fire in November of 2018.
Normally, you wouldn’t think of access to a website as being a life or death situation, but when the evacuation of more than 295,000 people depends upon the information being distributed on that website, SLAs and service guarantees can be tied almost directly to the saving of human lives.
You can read the case study here, or peruse the blog here.
Source: OpenShift

Deploy VueJS applications on OpenShift

Recently, Vue.js became the top starred JavaScript framework on GitHub. It’s a great framework to quickly get started building single page applications in JavaScript. Its simplicity makes it easy to get started, but it’s robust enough to build large production applications.
But once your application is built, where can you deploy it? Because Vue.js will bundle up everything as static files, many options are available to you. In this post, we will explore how to deploy a Vue.js application on an Nginx server running in a Red Hat OpenShift cluster.
You will need
vue-cli
To create our application, we will use vue-cli to generate the skeleton of our application. You can install vue-cli by using a global install with npm.
npm install -g @vue/cli
An OpenShift instance
You can either use the official OpenShift platform or use a version that runs locally with Minishift. If you don’t have access to an OpenShift instance and don’t want to install your own, you can sign up for the OpenShift Online service, which includes a free trial as well as a paid tier if you require more resources and support.
Once your cluster is running, you can use oc to log in and interact with your OpenShift instance.
oc login
Docker or Podman
You will need either the docker or the podman CLI installed. My personal preference is podman, and that is what I will be using here; the two can be used interchangeably in the commands that follow.
Install Docker on Windows
Install Docker on Mac
Install podman on Linux
Getting Started
For this tutorial, you will deploy the skeleton application generated by vue-cli.
vue create vue-openshift
You can accept the default options here. Once everything is installed, you can test out the application.
cd vue-openshift
npm run serve
This will start the local server. You can make sure that your application is running by pointing your browser to http://localhost:8080. You should see the starter application here.

This is the development server. It has all those fancy features that you need for development, like file watching and hot reloads. But the development server also includes a bunch of packages like babel and eslint. To deploy to production, you will want a clean, minimalistic version of your single page application. To do so, stop the development server with Ctrl-C and run a build with:
npm run build
This will create the minified version of your website that is ready to be deployed in the /dist folder of your project. If you take a look in this folder, you will see an index.html file as well as all the other assets that compose your project.
Prepare Nginx
Nginx is a high performance open source web server that can be used to serve static files. It is easy to use within a container for that purpose.
To deploy a container that uses Nginx on OpenShift, you will need to make some adjustments to the default configuration.
OpenShift has many security features built in. One of them is ensuring that no container is allowed to run as root. This is why you need to create your own nginx.conf file to use with OpenShift.
First, you can start with the basic setup for nginx:
worker_processes auto;

events {
  worker_connections 1024;
}

http {
  include /etc/nginx/mime.types;

  server {
    server_name _;
    index index.html;

    location / {
    }
  }
}
Nginx will need read and write access to a few files. Because this container won’t run as root, we need to make sure that those files are stored in locations the nginx user can read and write. At the root of the configuration file, just under worker_processes auto;, add the following line:
pid /tmp/nginx.pid;
And in the http section, right after the include /etc/nginx/mime.types; line, add:
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
In the server section, you will need to specify a port above 1024, since non-root users cannot bind to privileged ports, as well as specify the access and error log paths.
server {
  listen 8080;
  # …
  error_log /tmp/error.log;
  access_log /tmp/access.log;

  location / {
    # …
  }
}
Finally, you will need to tell nginx which files to serve. In the next step, we will copy our files into the /code folder of our container; this is the folder you will use as the root for the application. You also need to configure nginx to try to find the requested files or to fall back to index.html. This fallback is used by the vue router when you access a section of your site like /about: if an /about/index.html file does not exist, nginx serves the default /index.html, and the vue-router configuration determines what is displayed.
location / {
  root /code;
  try_files $uri /index.html;
}
Your final nginx.conf file should look like this.
nginx.conf

worker_processes auto;
pid /tmp/nginx.pid;

events {
  worker_connections 1024;
}

http {
  include /etc/nginx/mime.types;

  client_body_temp_path /tmp/client_temp;
  proxy_temp_path /tmp/proxy_temp_path;
  fastcgi_temp_path /tmp/fastcgi_temp;
  uwsgi_temp_path /tmp/uwsgi_temp;
  scgi_temp_path /tmp/scgi_temp;

  server {
    listen 8080;
    server_name _;
    index index.html;
    error_log /tmp/error.log;
    access_log /tmp/access.log;

    location / {
      root /code;
      try_files $uri /index.html;
    }
  }
}
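If you have nginx installed locally, you can sanity-check this configuration before building the image. A minimal sketch, assuming nginx is on your PATH (the -t flag only parses and tests the file; when run unprivileged, you may see warnings about default log paths):

nginx -t -c $(pwd)/nginx.conf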
Containerize Your Application
The next step will be to prepare a container that can be deployed on OpenShift. For this application, a container that runs Nginx will be used to serve those static files that you generated in the first step.
The first step to prepare the container is to create a Dockerfile in the root folder of your application (/vue-openshift). The file will contain the following:
FROM nginx:1.17
This image will be based on the official Nginx image provided on Docker Hub.
As explained in the previous step, nginx needs a special configuration in order to run as a non-root user in OpenShift. You can copy the new configuration to overwrite the default one provided in the base container.
COPY ./nginx.conf /etc/nginx/nginx.conf
Now that nginx is ready, you can copy over your files from the /dist folder into the /code folder of your container. This is the folder nginx will serve files from.
WORKDIR /code
COPY ./dist .
We will also need OpenShift to create a service at port 8080 so we can create a new route for nginx. This can be done by exposing the port 8080.
EXPOSE 8080
Finally, we will start the nginx server with the following command, which will be executed when the container runs.
CMD ["nginx", "-g", "daemon off;"]
Your final Dockerfile should look like this:
Dockerfile

FROM nginx:1.17
COPY ./nginx.conf /etc/nginx/nginx.conf
WORKDIR /code
COPY ./dist .
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
Build, Run And Publish Your Container
Now that your container is ready to go, you will need to build it and then push it to a registry. To build your container, run:
podman build -t vue-openshift .
Or, if you have Docker installed:
docker build -t vue-openshift .
The -t flag gives a tag to your build. It’s just a label to help you refer to your container later.
You can test your image by running:
podman run -d --rm --name vue-test -p 3000:8080 vue-openshift
This will start the container. The -d argument tells podman to run it in the background. The --rm flag ensures the container is removed once you run 'podman stop' (the image itself is kept). The --name gives a label to your container. The -p flag maps port 3000 on your local machine to port 8080 in the container, which is the port nginx listens on according to the config file. Finally, vue-openshift is the name you gave to the container image you built.
Now that your container is running, you should be able to point your browser to http://localhost:3000 and see the skeleton Vue application.
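If you prefer to verify from the command line, a quick check with curl (assuming curl is installed) should return an HTTP 200 along with the index page:

curl -i http://localhost:3000/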
You can stop this container by using:
podman stop vue-test
If you can see the application, your container is working and ready to be published. Many registries are available for you to push your images to. In this example, you can use the docker.io registry.
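Note that you will most likely need to authenticate against the registry before pushing. With Docker Hub, for example (assuming you already have an account there), you can log in with:

podman login docker.io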
podman push vue-openshift docker://docker.io/<your-username>/vue-openshift
Now that your container is publicly available, you will be able to install it on OpenShift.
Deploy To OpenShift
You now have a working container that runs nginx and the build code for your Vue.js application. The last thing to do is to deploy it on OpenShift. This can be done through the web interface or through the command line tool oc.
First, start by creating a new project on OpenShift.
oc new-project vue-app
Then, deploy your new application by using the container you’ve just published.
oc new-app docker.io/<your-username>/vue-openshift
To verify that your application was deployed, you can use:
oc status
This should list a service (svc) as well as a deployment configuration (dc). If you open up the web console, you should see the application with one pod running.

The last step to make your Vue.js application publicly available is to create a route that maps to the port 8080 in your pod. You can use the following command to create that route:
oc expose service vue-openshift --port=8080
If you look back at your application, you will now see a link under the Routes – External Traffic label.

This is the link to your Vue.js application. By clicking on this link, you should now see the deployed application that you built earlier.
All Done!
And that’s it! You now have a fully running Vue.js application served by an Nginx server in your OpenShift cluster. If you make changes to your application, you will need to re-create the Vue.js build, rebuild your container, push it to your registry and redeploy your application. This is actually easier than it sounds.
npm run build
podman build -t vue-openshift .
podman push vue-openshift docker://docker.io/<your-username>/vue-openshift
oc rollout latest vue-openshift
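If you redeploy often, you could wrap these four steps in a small helper script. A sketch (redeploy.sh is a hypothetical name; substitute your own registry username):

#!/bin/sh
# redeploy.sh: rebuild the Vue.js bundle, rebuild and push the image, then roll out
set -e
npm run build
podman build -t vue-openshift .
podman push vue-openshift docker://docker.io/<your-username>/vue-openshift
oc rollout latest vue-openshift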
Source: OpenShift

Cloud infrastructure supports smart meter energy use in Texas

The consumer benefits of reducing energy consumption include lowering energy bills and positive environmental stewardship. Understanding energy use, however, can be tricky.
Texas is a leader in the use of automated metering, including the availability of a portal for consumers to see their energy use and meter information. To do this, five Texas energy companies formed a collaborative effort called Smart Meter Texas (SMT). By providing timely access to energy data, SMT enables customers to better manage their energy consumption and save money. A cloud infrastructure is making it possible.
The evolving smart meter portal technology
Smart meters, at their most basic, are electronic devices that record energy use. SMT stores data recorded in 15-minute intervals by smart meters and provides secure portal access to that data for customers and authorized market participants.
SMT is one of the largest smart meter solutions in the country in terms of the volume of data collected and processed. All that data creates a very large multi-terabyte database, and we were looking for a partner to host it, maintain it, develop it and support it. We needed to find a company that has strong service lines in all four of those areas.
We sent a request for proposal to 12 companies and quickly narrowed the choices down to three. It became clear that there are only a few companies that could actually do all of the different services we needed proficiently. We chose IBM because of its deep and wide expertise in the utility industry and also because the company is known for its strong hosting and infrastructure capabilities. Also, it was very competitive in the pricing.
We began the project with IBM in 2009 and the first release of SMT launched in January 2010. Every five years we refresh our infrastructure and evaluate whether the solution is providing the functionality required. We had a new release and total infrastructure refresh in 2014 and now we’re looking at our third refresh. The difference in technology since the original 2010 implementation is huge.
With the first release, we envisioned a simple website portal. Now, 10 years later, we have some residential customers who want to interface with SMT through machine-to-machine technology with Application Programming Interfaces (APIs). We can allow this because part of our solution is running on Skytap on IBM Cloud.
The key things that have been so important to us about Skytap on IBM Cloud are, first of all, that it’s on the cloud. Because the solution is cloud based, we didn’t have to invest as much money as would have been needed to purchase the hardware and servers and all the software and operating systems that go on them. Second, the environment can grow and shrink as we need it, so it’s a lot more cost effective. Lastly, it’s secure, so we’re able to open the staging environment to third-party providers that want to integrate with it.
The changing industry energy landscape
SMT enables consumers to see how much energy they are using at different times of day. Since electricity rates vary during “on peak” or “off peak” hours, people can use the smart meter data to decide how to better manage their electricity usage. Perhaps they will run their electric dryer or charge their electric car only at night. Maybe they will decide to turn their thermostats up or down a few degrees. Maybe they will decide to look at more energy-efficient appliances.
Since the smart meter sends data directly to the utility company, no one needs to come and read a meter.
Additionally, hundreds of nimble competitive service providers have popped up that use the SMT web portal to offer competitive services to help consumers identify the best energy plan, their ideal provider and possible renewable energy alternatives based on their energy use.
This industry ecosystem wouldn’t even have been possible in Texas before SMT.
Learn more about how IBM is helping energy and utility companies increase efficiency and reduce expenditures. And, learn how Skytap on IBM Cloud can help your business.
Source: Thoughts on Cloud

KubeVirt Joins Cloud Native Computing Foundation

This month the Cloud Native Computing Foundation (CNCF) formally adopted KubeVirt into the CNCF Sandbox. KubeVirt allows you to provision, manage and run virtual machines from and within Kubernetes. In joining the CNCF Sandbox, KubeVirt now has a more substantial platform to grow as well as educate the CNCF community on the use cases for placing virtual machines within Kubernetes. The CNCF onboards projects into the CNCF Sandbox when they warrant experimentation on neutral ground to promote and foster collaborative development.
For our part, Red Hat has been a contributor and advocate for KubeVirt, and we’ve been leveraging it to showcase some technologies you may remember. At Red Hat Summit, you watched us demonstrate a Kubernetes-native platform bringing together the capabilities of VMs, containers, networking and storage. If you’re an OpenShift customer, you may have started to play with Container-native virtualization (CNV), available via tech preview; this uses KubeVirt under the hood.
Congratulations to the KubeVirt team! We’re excited to watch the advancement of this project as it grows with CNCF Sandbox. To learn more about the project and its initial kickoff with CNCF you can view a quick webinar here.  

Helpful Links:
Getting Started with KubeVirt
Reimagining Virtualization with KubeVirt and Kubernetes Part I and Part II
Using OpenShift Service Mesh and KubeVirt Together
Source: OpenShift

OpenShift Cluster Node Tuning Operator – Nodes on steroids

What do you prefer: manual or automatic transmissions?
I like the control over a car that a manual transmission provides: using the engine to slow down without the brakes and being more efficient when overtaking. On the other hand, it’s nice not to involve the left leg all of the time and to keep both hands on the steering wheel. Using an automatic transmission is generally easier, and my family prefers it, so I have no choice.
Wouldn’t it be great to be able to do things more efficiently and precisely but not do it in a manual way? It would be great if an automatic transmission always behaved as I wanted and needed at that exact moment.
Returning to an OpenShift scenario I’ll ask again:
Wouldn’t it be great to tweak my RHEL CoreOS node only when I need to, and to not have to do it manually?
You can do this using the OpenShift Cluster Node Tuning Operator. This operator gives the user an interface to add custom tuning that is applied to nodes under specified conditions, configuring the kernel according to the user’s needs. More information can be found on GitHub.
The Node Tuning Operator manages a daemonset of tuned pods that runs on every node in the cluster. Check it with the command:
$ oc get pods -n openshift-cluster-node-tuning-operator -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cluster-node-tuning-operator-847984d77-f92tv 1/1 Running 0 6h51m 10.128.0.17 skordas0813-6p5bl-master-1 <none> <none>
tuned-2gz29 1/1 Running 0 6h51m 10.0.0.4 skordas0813-6p5bl-master-1 <none> <none>
tuned-5hkmr 1/1 Running 0 6h51m 10.0.0.7 skordas0813-6p5bl-master-2 <none> <none>
tuned-5jv59 1/1 Running 0 6h50m 10.0.32.4 skordas0813-6p5bl-worker-centralus1-tkbxs <none> <none>
tuned-gvlnt 1/1 Running 0 6h50m 10.0.32.5 skordas0813-6p5bl-worker-centralus3-nrh4t <none> <none>
tuned-nvfb5 1/1 Running 0 6h51m 10.0.0.6 skordas0813-6p5bl-master-0 <none> <none>
tuned-xhpfx 1/1 Running 0 6h49m 10.0.32.6 skordas0813-6p5bl-worker-centralus2-xm865 <none> <none>

Also, you can check the tuned custom resources:
$ oc get tuned -n openshift-cluster-node-tuning-operator
NAME AGE
default 5h31m

Let’s take a closer look at the default Tuned resource:
$ oc get tuned -o yaml -n openshift-cluster-node-tuning-operator

apiVersion: v1
items:
- apiVersion: tuned.openshift.io/v1
  kind: Tuned
  metadata:
    creationTimestamp: "2019-08-07T14:08:10Z"
    generation: 1
    name: default
    namespace: openshift-cluster-node-tuning-operator
    resourceVersion: "6878"
    selfLink: /apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/tuneds/default
    uid: c9f0361b-b91c-11e9-931e-000d3a9420dc
  spec:
    profile:
    - data: |
        [main]
        summary=Optimize systems running OpenShift (parent profile)
        include=${f:virt_check:virtual-guest:throughput-performance}

        [selinux]
        avc_cache_threshold=8192

        [net]
        nf_conntrack_hashsize=131072

        [sysctl]
        net.ipv4.ip_forward=1
        kernel.pid_max=>131072
        net.netfilter.nf_conntrack_max=1048576
        net.ipv4.neigh.default.gc_thresh1=8192
        net.ipv4.neigh.default.gc_thresh2=32768
        net.ipv4.neigh.default.gc_thresh3=65536
        net.ipv6.neigh.default.gc_thresh1=8192
        net.ipv6.neigh.default.gc_thresh2=32768
        net.ipv6.neigh.default.gc_thresh3=65536

        [sysfs]
        /sys/module/nvme_core/parameters/io_timeout=4294967295
        /sys/module/nvme_core/parameters/max_retries=10
      name: openshift
    - data: |
        [main]
        summary=Optimize systems running OpenShift control plane
        include=openshift

        [sysctl]
        # ktune sysctl settings, maximizing i/o throughput
        #
        # Minimal preemption granularity for CPU-bound tasks:
        # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds)
        kernel.sched_min_granularity_ns=10000000
        # The total time the scheduler will consider a migrated process
        # "cache hot" and thus less likely to be re-migrated
        # (system default is 500000, i.e. 0.5 ms)
        kernel.sched_migration_cost_ns=5000000
        # SCHED_OTHER wake-up granularity.
        #
        # Preemption granularity when tasks wake up. Lower the value to
        # improve wake-up latency and throughput for latency critical tasks.
        kernel.sched_wakeup_granularity_ns=4000000
      name: openshift-control-plane
    - data: |
        [main]
        summary=Optimize systems running OpenShift nodes
        include=openshift

        [sysctl]
        net.ipv4.tcp_fastopen=3
        fs.inotify.max_user_watches=65536
      name: openshift-node
    - data: |
        [main]
        summary=Optimize systems running ES on OpenShift control-plane
        include=openshift-control-plane

        [sysctl]
        vm.max_map_count=262144
      name: openshift-control-plane-es
    - data: |
        [main]
        summary=Optimize systems running ES on OpenShift nodes
        include=openshift-node

        [sysctl]
        vm.max_map_count=262144
      name: openshift-node-es
    recommend:
    - match:
      - label: tuned.openshift.io/elasticsearch
        match:
        - label: node-role.kubernetes.io/master
        - label: node-role.kubernetes.io/infra
        type: pod
      priority: 10
      profile: openshift-control-plane-es
    - match:
      - label: tuned.openshift.io/elasticsearch
        type: pod
      priority: 20
      profile: openshift-node-es
    - match:
      - label: node-role.kubernetes.io/master
      - label: node-role.kubernetes.io/infra
      priority: 30
      profile: openshift-control-plane
    - priority: 40
      profile: openshift-node
  status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The spec.profile section is a list of profile definitions in which we define the names of the profiles and the values that the operator will set on the node. It is possible to define a child profile intended only for inclusion in other profiles using the include key; in the example above, the openshift profile is used this way. We can also add a summary to describe each profile.
The spec.recommend section is a list of profile-selection rules that determine which conditions must be met for the operator to apply a given profile to a node. This part may not be so obvious, so let’s look deeper.
Each check needs three pieces of information:
match – What conditions need to be met to apply the recommended profile? If the match part is omitted, then the operator assumes that the match is always true. More details below.
priority – smaller numbers are higher priority. If there is more than one profile that should be used, then the Node Tuning Operator will apply the profile with the highest priority (the smallest number).
profile – name of the profile from spec.profiles that should be used.
If you want to apply more than one profile at the same time, you need to create a new profile that will include other profiles.
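A minimal sketch of what such a combined profile could look like (the names combined-example and my-custom-node-label are hypothetical; include= pulls in the existing openshift-node profile, and the extra [sysctl] section is layered on top):

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: combined-example
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Hypothetical profile combining openshift-node with extra settings
      include=openshift-node

      [sysctl]
      fs.inotify.max_user_watches=131072
    name: combined-example
  recommend:
  - match:
    # my-custom-node-label is a made-up node label, used only for illustration
    - label: my-custom-node-label
    priority: 15
    profile: combined-example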
What criteria need to be met to apply a specific profile? Everything is managed by labels on nodes and pods. All conditions are in the match section.
Each match can have four definitions:
label – node or pod label.
value – the node or pod label value; if it is omitted, the operator matches on the mere existence of the label.
type – either node or pod; it defines which kind of label the operator should check. If it is omitted, the operator checks node labels.
match – an array of nested additional matches; the operator evaluates a nested match only when the top-level match returns true.
Reading the recommend section is much easier now. Let’s move on to the default recommendation. The operator will check each node independently to determine which profile should be used on which node.
- match:
  - label: tuned.openshift.io/elasticsearch
    match:
    - label: node-role.kubernetes.io/master
    - label: node-role.kubernetes.io/infra
    type: pod
  priority: 10
  profile: openshift-control-plane-es

At the beginning, the operator checks whether the node has a pod running on it with the tuned.openshift.io/elasticsearch label. If this match is true, it evaluates the nested match: if the node (node is implied, because type is omitted) has the node-role.kubernetes.io/master or node-role.kubernetes.io/infra label, the operator applies the openshift-control-plane-es profile, because it is a control plane or infra node running an Elasticsearch pod.
If this second control plane/infra match is false, then the operator will move on and check the next match with lower priority:
- match:
  - label: tuned.openshift.io/elasticsearch
    type: pod
  priority: 20
  profile: openshift-node-es

The openshift-node-es profile will be applied only when the previous control plane/infra match returns false and the node is running a pod with the tuned.openshift.io/elasticsearch label.
As before, if there is no match we continue to the next match in priority:
- match:
  - label: node-role.kubernetes.io/master
  - label: node-role.kubernetes.io/infra
  priority: 30
  profile: openshift-control-plane

The openshift-control-plane profile will be applied only when the previous matches return false and the node is labeled node-role.kubernetes.io/master or node-role.kubernetes.io/infra.
Finally, if there were no matches by this point, the operator will apply the openshift-node profile:
- priority: 40
  profile: openshift-node

Because there is no match array, it is always true.
Now we can create our own profile:

Create a file with the custom resource: cool_app_ip_port_range.yaml

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: ports
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=A custom profile to extend local port range

      [sysctl]
      net.ipv4.ip_local_port_range="1024 65535"
    name: port-range

  recommend:
  - match:
    - label: cool-app
      value: extended-range
      type: pod
    priority: 25
    profile: port-range

Create the new Tuned resource and verify that it is there

$ oc create -f cool_app_ip_port_range.yaml
tuned.tuned.openshift.io/ports created
$ oc get tuned -n openshift-cluster-node-tuning-operator
NAME AGE
default 6h32m
ports 31s

Let’s check the value of net.ipv4.ip_local_port_range on each node:

for i in $(oc get nodes --no-headers -o=custom-columns=NAME:.metadata.name); do echo $i; oc debug node/$i -- chroot /host sysctl net.ipv4.ip_local_port_range; done

In my case each node has the same range:
net.ipv4.ip_local_port_range = 32768 60999

Create our own app and label it correctly

$ oc new-project my-cool-project
$ oc new-app django-psql-example
$ oc get pods -o wide -n my-cool-project | grep Running
django-psql-example-1-pgd67 1/1 Running 0 3m15s 10.128.2.10 skordas0813-6p5bl-worker-centralus3-nrh4t <none> <none>
postgresql-1-cw86k 1/1 Running 0 5m12s 10.131.0.14 skordas0813-6p5bl-worker-centralus1-tkbxs <none> <none>
$ oc label pod postgresql-1-cw86k -n my-cool-project cool-app=
$ oc label pod django-psql-example-1-pgd67 -n my-cool-project cool-app=extended-range

Check net.ipv4.ip_local_port_range once again on each node:

for i in $(oc get nodes --no-headers -o=custom-columns=NAME:.metadata.name); do echo $i; oc debug node/$i -- chroot /host sysctl net.ipv4.ip_local_port_range; done

On node skordas0813-6p5bl-worker-centralus3-nrh4t, the value of net.ipv4.ip_local_port_range has been changed:
net.ipv4.ip_local_port_range = 1024 65535

This is because a pod labeled cool-app=extended-range is running on this node!
If you change the matching label, or delete the pod, the project, or the 'ports' Tuned resource, the range will be set back to the default kernel values.
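For example, to remove the label from the running pod (a sketch using the pod name from above; the trailing dash after the label key removes it):

oc label pod django-psql-example-1-pgd67 -n my-cool-project cool-app-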
Everything is managed by the OpenShift Cluster Node Tuning Operator and the profiles you define, so you don’t need to tweak any values in the nodes’ operating systems by hand. The result is an automatic-transmission-like experience for operators of OpenShift.
Source: OpenShift

Design for Users by Users: Design Thinking @ Red Hat – Sara Chizari (Red Hat UX Research Team) – OpenShift Commons Briefing

 
Have you ever wondered how product teams decide what features to build and what changes to make? In this OpenShift Commons Briefing, the Red Hat User Experience Design and Research team discusses applying design thinking to real product development challenges, from problem discovery to testing and validating ideas.
Red Hat’s Sara Chizari walks us through the Design Thinking process that the Red Hat User Experience Design and Research team uses to help product teams build solutions that focus on solving problems and are tailored to users’ needs. In this session, she takes us on a user-centered design journey. Learn about the techniques they use to develop an understanding of the users’ challenges and needs, articulate the users’ problems, and brainstorm potential solutions. Slides: Designing with Users for Users – OpenShift Commons Briefing Slides
Want to take part in the upcoming OpenShift UX Design Workshop in San Francisco on October 28th, 2019?
If you are an OpenShift user and want to participate in an OpenShift UX Design Thinking workshop, we’re hosting one with the Red Hat UX Design and Research team at the upcoming OpenShift Commons Gathering on October 28th in San Francisco! Request an invitation soon, as space is limited to 20!

To request your invitation to attend the Design Thinking Workshop, to be held in conjunction with the upcoming OpenShift Commons Gathering in San Francisco on October 28th, send an email to schizari@redhat.com. Space is limited to 20.
The Red Hat UX Research team will be focusing on the OpenShift Console and aspects of troubleshooting, so if you are interested in contributing your feedback on OpenShift UX, this workshop will be a great opportunity to do so!
As well, after the morning-long workshop, you are invited to join the rest of the OpenShift Commons Gathering, which will be focusing on enabling machine learning and AI workloads on OpenShift, as our guests. More Gathering details here.
 
 

About OpenShift Commons
OpenShift Commons builds connections and collaboration across OpenShift and OKD communities, upstream projects, and stakeholders. In doing so we’ll enable the success of customers, users, partners, and contributors as we deepen our knowledge and experiences together.
Our goals go beyond code contributions. OpenShift Commons is a place for companies using OpenShift to accelerate its success and adoption. To do this, we’ll act as resources for each other, share best practices and provide a forum for peer-to-peer communication.
To stay abreast of all the latest announcements, briefings and events, please join OpenShift Commons and our mailing lists and Slack channel.
Join OpenShift Commons today!
Source: OpenShift

Upcoming Virtual Event: Develop. Deploy. Deliver Continuously

For those of you out there looking to learn a bit more about what application development looks like in a Red Hat OpenShift environment, you may want to sign up for the virtual event we’ll be hosting on October 10, 2019. This online event will drill down into the practices and processes that can help increase developer velocity and productivity. The event will feature keynotes from Brian Gracely, Director of Product Strategy at Red Hat, and Mike Piech, Vice President and General Manager of Middleware at Red Hat.
This virtual event is a bit like an online conference, complete with three tracks of talks on a dozen topics. From microservices, to integration patterns, to serverless computing, this virtual event will make it easier to find the information you need to bring your development teams up to speed on Kubernetes, OpenShift and hybrid cloud computing at scale.
The two keynotes are detailed below. Head over to the event page to register now!
Innovating in a Hybrid Business World
Brian Gracely, Director of Product Strategy, Red Hat
It’s been nearly a decade since software began eating the world and developers became the new kingmakers. But app makers are still frustrated that they can’t build fast enough, deploy fast enough, or stop worrying about other layers of the stack. In this keynote, we’ll talk about the reasons why companies are faced with hybrid opportunities and challenges at the business level, and how this impacts app makers. We’ll also highlight how Red Hat is bringing together technology, innovation and culture to help remove the friction for app makers in ways that will help them succeed with existing and future applications.
Cloud-Native Development with Red Hat Middleware
Mike Piech, Vice President and General Manager, Middleware
Modern business requires the ability to roll out functionality to customers and employees faster than ever before, yet still requires extreme reliability in service delivery. While new technologies offer unprecedented opportunities to boost productivity, it is rarely possible to make wholesale platform changes, and constancy remains critical.
In this talk we will provide an overview of key architecture strategies that can boost developer productivity, improve robustness and enable long term evolution of IT environments. Topics covered will include the impact of containerization, APIs, next generation integration, process automation and development processes. The talk will also cover a number of examples that show how such application environment flexibility can have a major impact on business outcomes.
Source: OpenShift

How to achieve business automation success with process modeling

As organizations strive for excellence in both process and operations, it is very important to use the right tools based on business priorities. But, having the right tools is just one layer to the foundation for business automation success and process excellence. It is equally important to have the right partner. Together, the right tools paired with the right partner set the stage for successful digital transformation.
The critical first step in business automation: process modeling
Modeling business processes is the critical first step in achieving digital transformation. Process modeling is an approach for non-technical business people to work across departments to visualize and understand their “as-is” process landscape. Once the business processes are modeled, companies are better able to pinpoint the bottlenecks, complexities and inefficiencies that are costing the business excess time and money. These discoveries can be used to optimize and automate workflows, helping businesses create efficiencies at scale.
The benefits of successful process modeling can include:

Business benefits totaling up to millions of US dollars
More business processes become fully modeled and engaged
Significant growth in user numbers and participation

Improved business processes can propel business growth
One large, publicly traded software company saw significant growth in their pursuit of process excellence by recognizing the importance of the right partner and tools at the beginning of their digital transformation journey. First, they selected Salient Process as their partner and IBM Blueworks Live as the right tool to reach their process excellence goals. The challenge they faced required them to document and catalog the entire organization’s business processes. They also needed to create a centralized, common repository that was easy to use; enable process improvement; and speed business automation.
With Salient, the client felt they had a partner with the experience in user enablement and operational efficiency needed to meet their business goals. Initially, Salient helped train and enable the client to onboard new users, then provided consulting services to help them build out reports and dashboards that extended the visibility and value of the IBM Blueworks Live footprint.
Salient Process provided best practices, proven methodologies and a unique technology accelerator called BlueworksInsights to better enable the client team.
This particular client started with 15 users, with a plan to double that within a year. However, after only a few months it became clear that the team was far exceeding even its most aggressive estimates. By the end of the sixth month, the client had added more than 500 new users, and adoption has grown steadily since then.
Register to read the full case study.
Source: Thoughts on Cloud

Overcoming the challenges of hybrid multicloud IT management

Hybrid cloud environments have become the norm among most businesses. In our latest Voice of the Enterprise: Digital Pulse, Budgets and Outlook 2019 survey, we asked 916 IT professionals to describe their overall IT approach and strategy. Among the respondents, we found that:

62 percent said they now use a hybrid IT environment with integrated on-premises systems and off-premises cloud/hosted resources
17 percent said their IT environment is completely off-premises, distributed across various SaaS, IaaS and PaaS clouds
9 percent are using a hybrid cloud with limited or no interoperability between the on-premises and off-premises environments
4 percent are building an on-premises environment only
Just 8 percent claimed clouds are not an important part of their IT strategy.

It’s now safe to say that interoperable on-premises infrastructure and hybrid multiclouds are common enterprise IT architectures, and they are likely to remain so for several years to come. However, getting these distributed environments to operate efficiently and effectively to better serve the business needs of enterprises is another matter.
Navigating hybrid multicloud management
There is no industry standard for hybrid multicloud architecture or management. Workload placement across multiple distributed execution venues is highly subjective to each enterprise and depends on a range of factors including the value to risk ratio tied to workloads, lifecycle stages, usage patterns, application behavior characteristics, data criticality, data sovereignty, the price, performance and risk characteristics of various execution venues, and so on. So too, the hybrid offerings from cloud service providers vary. Each has well-designed and comprehensive hybrid cloud architectures, but they differ in their design, deployment and management models.
As workloads, data and processes shift across diverse and disparate execution venues (e.g., on-premises infrastructure, managed services, clouds), there will be a need for a new approach to hybrid multicloud management – one that requires a uniform means for provisioning, access control, capacity management, performance analysis, billing and cost control, among others. Enterprises will demand that IT vendors craft a holistic platform to allocate workloads strategically to the best execution venue, and do so while managing business continuity across hybrid IT architecture. This will drive the development of a new generation of cloud management technology we refer to as unified infrastructure management (UIM) platforms.
Tackling next-generation challenges with unified infrastructure management
To tackle the challenges of hybrid IT management, a unified infrastructure management platform needs to be able to answer two fundamental questions and execute upon the findings. The first: “Under what conditions do I put what workloads on what execution venues?” This requires an understanding of workload characteristics and the capabilities of execution venues (beyond cost) to intelligently map workloads to their best execution venues and to migrate, monitor and manage workloads across execution venues.
To manage data and logic placement across distributed architecture, the UIM must also be able to answer and execute upon the second question: “Do I move the logic to the data or the data to the logic?” For example, in the case of core/fog/edge IoT architecture, the issue is how to intelligently and dynamically choose and shift where logic is computed – i.e., in the core (cloud), in the fog (nodes), on the edge (devices) – and how to minimize data in motion.
Such decisions require detailed analysis of many complex variables beyond cost. We believe that next-generation unified infrastructure management platforms will gradually be equipped with various open source technologies to answer such questions and provide a means to execute the following capabilities:

Analyze and compare the economics (price and performance characteristics) of various execution venues.
Analyze workloads to determine their performance characteristics and operational requirements.
Automate the provisioning of compute, storage, network, security, application stacks and data.
Intelligently deploy workloads and services determined by economic analysis as well as any compliance policies required of on-premises infrastructure, managed services, and private and public clouds.
Intelligently redeploy workloads to other execution venues when venue or workload characteristics change.
Interoperate with, build and deploy container and microservices coding platforms to coordinate cloud services for automated iterations of application and workload deployments.
Manage security, identity authentication and access control for administrators, tenants and user accounts.
Provide financial metering, reporting and chargeback/viewback by cloud, tenant, user, application, compute and other consumption-based services.
Orchestrate events and manage runtime execution and performance of all venues, and enact policies to automate scaling, bursting, high availability and disaster recovery.
Maintain a service library that includes operating system images, databases, middleware, message busses, load balancers and servers.
Control and dynamically allocate network resources in response to the transmission, latency and security requirements of specific data and workloads.

Trusted IT and cloud-enabling technology vendors are now crafting such unified infrastructure management platforms. Going forward, unified infrastructure management platforms may also include and/or integrate with orchestration tools to execute and synchronize business processes that span execution venues, analytics that pave the way for predictive and prescriptive deterministic reasoning, and even autonomic self-healing capabilities empowered by machine learning and artificial intelligence (AI) technologies that can also expose insights for continuous improvement. Highly valued vendors will be those that embrace this opportunity and can ensure that the advantages and business agility promised of hybrid multicloud IT architecture indeed become reality. Additional insights and improved strategies are required to reap the full range of benefits of a true hybrid IT environment.
Learn more about hybrid and multicloud strategy for the enterprise.
Source: Thoughts on Cloud