What Matters Most to OpenShift Users?

Using the Top Tasks method
Red Hat OpenShift Container Platform has a broad set of powerful functions available to users as soon as it’s deployed. Providing so many functions within OpenShift poses a challenge to the OpenShift User Experience Design (UXD) team.
Which functions and tasks are the most important to our users? What aspects of the product and interface should we focus on? To answer these questions, our UXD researchers are implementing the Top Tasks method to get insights from our users on how to craft the next stages of OpenShift’s user experience.
Take the survey here
The Top Tasks approach is a two-phase survey method pioneered by Gerry McGovern. In the first phase, already completed by our team, we sent a survey to Red Hatters to arrive at a list of all possible OpenShift tasks. Using qualitative coding and an expert review process, we consolidated 416 open responses from 67 Red Hatters into 124 final tasks. These tasks serve as the input to the second phase survey: the most important part of the Top Tasks process.
What our final data will look like after phase two

In phase two, OpenShift users and customers will vote for the five most important tasks they complete using the web console (visual interface) and command line interface. By quantitatively analyzing the responses gathered during phase two, the OpenShift team will get a deep understanding of what product features/functions our users care most about. Thus, we’ll build out our future roadmap according to the preferences of you, the user, in the most egalitarian way possible.
Are you ready to help influence the future of OpenShift’s user experience?
Survey link: give your input here
The post What Matters Most to OpenShift Users? appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

From The Enterprisers Project: 4 Facts about Kubernetes Operators

The Enterprisers Project ran a story last week that delves into some of the lesser known qualities of Kubernetes Operators. Read the whole thing here. From the article:
3. Kubernetes Operators aren’t just for databases
A (very) brief history of Kubernetes Operators goes something like this: In its early days, Kubernetes was considered a great fit for managing stateless applications. For stateful applications like databases, not so much – at least not without significant operational burden. Operators to the rescue.
(Again, that’s the CliffsNotes version.)
So Operators in their early days were often focused on database applications and helping to extend Kubernetes’ capabilities to this critical category. Bromhead from Instaclustr led the development of an Operator for Apache Cassandra, for example.
“Back when Kubernetes Operators started, people would create Operators mostly for managing stateful database workloads,” says Yossi Jana, DevOps team leader at AllCloud. “Some of the examples were MongoDB, Cassandra, and Redis. Those databases are more difficult to set up and continuously manage on your own without the proper expertise.”
So, yes, if you scan the OperatorHub.io registry, you’ll see plenty of database-related Operators. But Operators aren’t for databases alone.
The post From The Enterprisers Project: 4 Facts about Kubernetes Operators appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Cloud innovation in real estate: Apleona and IBM rely on new technologies

Digitization does not stop at the proverbial concrete gold — real estate. In fact, the real estate industry is on the move. Companies are realizing the benefits of digital transformation and are capitalizing on the power of new technologies such as cloud, AI and blockchain.
Take, for example, Apleona GmbH, one of Europe’s largest real estate service providers. The German company has more than €2 billion in sales and manages properties in all asset classes in more than 30 countries. Apleona recognizes that it can broaden the established facility management model by strategically partnering with its corporate clients to transform and digitize their offerings. To this end, Apleona has been working with IBM since 2017 in a digital partnership that is continually expanding. The company has already implemented a range of innovative ideas, including the following new applications:

Room Booking is the easy and fast way to book office conference rooms without spending too much time on mobile apps. The mobile terminal’s visual representation can also display which rooms are occupied or vacant. As a result, you can quickly find available spaces in the building.
With Smart Ticketing, office workers can use a mobile app to create a “ticket” in just a few clicks if they notice a problem such as defective devices in the office. With this simple and straightforward solution, employees can track the ticket’s processing through the app. IBM Watson AI Services even allow them to “spot” problems and automatically create a categorized ticket.
Finally, Energy Pods are convenient mini-booths for “power naps” in the office, optionally accompanied by music or guided meditations. Employees can recharge in these stylish pods and even book them with a mobile phone. Four of the units are in use and are very popular at the IBM headquarters; more are planned at other locations.

 
New technologies and agile methods are key to innovative solutions
Apleona team members and experienced IBM IT consultants co-created the new solutions in the IBM Garage in Munich, Germany. Most of the applications are hosted on IBM Cloud in the Frankfurt data center, but the customer-oriented flexibility of the IBM hybrid multicloud platform makes the applications available across platforms.
The Apleona and IBM team capitalized on existing real estate and facility management data in inventive ways to develop new offerings. To create smart environments that optimally adapt to people’s needs, the team used data analytics, cognitive automation, edge computing and AI. These technologies, in connection with sensor technology and the cloud, made it possible to process and transfer data streams.
The forward-thinking and agile principles of the IBM Garage Methodology also helped Apleona succeed in its digital transformation. Apleona and IBM continue to follow this methodology at the IBM Watson IoT Center, which serves as the team’s co-creation space. This location even boasts the Room Booking solution.
Expanding offerings and scaling solutions
While the real estate industry has recognized the general benefits of digitization, critical voices are still asking for detailed business cases and quantifiable cost savings. How do you quickly scale the new solutions? “In our industry, innovation and monetization must always be thought of together,” explains Dr. Jochen Keysberg, Chief Executive Officer at Apleona. “That’s why it’s so important that new ideas, such as those developed with IBM, always have competitive advantages, efficiency improvements and better resource utilization in mind.”
The real estate industry cannot shy away from digitization, and like Apleona, other facility managers will have to adopt modern solutions to succeed in the marketplace. “Simply start, develop agile and at the same time, bring the required expertise to the table. Then quickly validate and test it with the addressed users. In our experience, it usually works in a quite straightforward way,” explains Stefan Lutz, General Manager and Services Transformation Program Leader at IBM.
Indeed, new customers, including a major German bank and global companies in the technology, energy and automation industries, are already relying on the innovation of Apleona and IBM. In addition, other IBM locations will soon use Apleona’s cutting-edge solutions, and more applications will roll out in 2020. The joint team is excited to see the offerings expand and the solutions scale.
Want to experience the IBM Garage for your business? Schedule a no-charge visit with the IBM Garage to get started.
The post Cloud innovation in real estate: Apleona and IBM rely on new technologies appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Mirantis Shortlisted for Best Mobile Innovation for Emerging Markets in MWC Global Mobile Awards

Mirantis has been nominated for Magma Open Source Wi-Fi Packet Core
Campbell, CA, February 10, 2020–
Mirantis, the open cloud company, announced today that it has been listed as a finalist in the Global Mobile Awards for Best Mobile Innovation for Emerging Markets for its work with Magma Open Source Wi-Fi Packet Core.
In September, Mirantis announced it was helping bring the converged access gateway software platform Magma, an open source initiative by Facebook, to mobile operators around the world. Mirantis has worked to integrate, test and certify Magma with Mirantis’ Kubernetes-based edge infrastructure offering, called MCP Edge.
“It’s an honor to be shortlisted for this prestigious award,” said Dave Van Everen, SVP of Marketing, Mirantis. “We see our nomination as confirmation of the tremendous impact that open source networking innovations can have on mobile operators across all markets and the people and businesses they serve.”
Mirantis Cloud Platform (MCP), integrated with Magma, enables mobile network operators to offload cellular data to Wi-Fi at the network edge. The open source packet core seamlessly integrates with the Mobile Network Operators’ (MNO) existing 4G evolved packet core (EPC) back end and extends its capabilities, making it possible to federate multiple wireless technologies into a single, existing mobile packet core of a service provider. With Magma running on MCP Edge infrastructure, MNOs can not only deploy containerized services and applications, they can also extend their cellular networks, federate services from other ISPs, and launch new Wi-Fi services.
The Global Mobile Awards aims to “recognise and honour the individuals, the teams, the organisations and the partnerships which have changed the meaning of what it is to be connected.” The winners will be announced at a ceremony during Mobile World Congress in Barcelona, Spain on Tuesday, February 25, 2020.
The post Mirantis Shortlisted for Best Mobile Innovation for Emerging Markets in MWC Global Mobile Awards appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

Building RHEL based containers on Azure Red Hat OpenShift

Red Hat Summit 2020 is fast approaching, and if you missed it last year, you would have also missed Microsoft CEO Satya Nadella and former Red Hat CEO Jim Whitehurst announcing Red Hat and Microsoft’s first joint offering: Azure Red Hat OpenShift (ARO).
Azure Red Hat OpenShift (ARO) is a fully managed service of Red Hat OpenShift on Azure, jointly engineered, operated and supported by Microsoft and Red Hat. 
Did you know that it is possible for both new and existing Red Hat customers to build Red Hat Enterprise Linux (RHEL) based container images on Azure Red Hat OpenShift?
In this blog I will demonstrate how to perform the following on Azure Red Hat OpenShift:

Build a RHEL based container with a Dockerfile using your existing Red Hat subscription, and;
Build a freely redistributable RHEL based container with a Dockerfile using the Red Hat Universal Base Image (UBI). 

Both of these methods will work on the current Azure Red Hat OpenShift offering, the next iteration of which will be based on OpenShift 4. 
Provisioning an Azure Red Hat OpenShift cluster
Let’s start with provisioning an Azure Red Hat OpenShift cluster. There are some prerequisites to complete: an existing Azure subscription is required, and users need to be created in Azure Active Directory. Follow the documentation to set environment variables, and then use the Azure CLI to create a resource group and provision the cluster.
$ az openshift create --resource-group $CLUSTER_NAME --name $CLUSTER_NAME -l $LOCATION --aad-client-app-id $APPID --aad-client-app-secret $SECRET --aad-tenant-id $TENANT --customer-admin-group-id $GROUPID
After about 10 – 15 minutes, the deployment process should have completed and the public URL for your fully managed Azure Red Hat OpenShift cluster is displayed. Log in to the console with your Active Directory credentials and copy the login command by clicking on your username and selecting “Copy login command.” This string will be used to login to the cluster using the command line.
Using an existing Red Hat subscription
For this section I highly recommend using an existing RHEL machine which holds a valid subscription. This will make creating the OpenShift prerequisites required for the Dockerfile build much easier. The OpenShift command line tool ‘oc’ must also be installed on this machine. For those without an existing subscription, skip ahead to the section titled “Using the Universal Base Image (UBI)”.
Login to the ARO cluster using the copied login command. It will look similar to below.
$ oc login https://osa{ID}.{REGION}.cloudapp.azure.com --token={ARO TOKEN}
Create a new OpenShift project
$ oc new-project rhel-build
If you do not have one already, create a registry service account to ensure that you can pull a RHEL image from registry.redhat.io using your credentials. In a browser go to catalog.redhat.com, login and select “Service Accounts” and then “New Service Account”. Download the generated OpenShift secret. Create the secret in your OpenShift project.
$ oc create -f {SECRET_FILE}.yaml -n rhel-build
Create a secret that contains the entitlements
$ oc create secret generic etc-pki-entitlement --from-file /etc/pki/entitlement/{ID}.pem --from-file /etc/pki/entitlement/{ID}-key.pem -n rhel-build
Create a configmap that contains the subscription manager configuration.
$ oc create configmap rhsm-conf --from-file /etc/rhsm/rhsm.conf -n rhel-build
Create a configmap for the certificate authority.
$ oc create configmap rhsm-ca --from-file /etc/rhsm/ca/redhat-uep.pem -n rhel-build
Create a build configuration in the project.
$ oc new-build https://github.com/grantomation/rhel-build.git --context-dir sub-build --name rhel-build -n rhel-build
$ oc get buildconfig rhel-build -n rhel-build
NAME         TYPE     FROM   LATEST
rhel-build   Docker   Git    1
List the secrets in the project
$ oc get secrets -n rhel-build
NAME                    TYPE                             DATA   AGE
{SERVICE PULL SECRET}   kubernetes.io/dockerconfigjson   1      2m
Set the registry pull credentials as a secret on the buildConfig
$ oc set build-secret --pull bc/rhel-build {SECRET CREATED BY REGISTRY SERVICE ACCOUNT FILE}
Patch the build configuration
$ oc patch buildconfig rhel-build -p '{"spec":{"source":{"configMaps":[{"configMap":{"name":"rhsm-conf"},"destinationDir":"rhsm-conf"},{"configMap":{"name":"rhsm-ca"},"destinationDir":"rhsm-ca"}],"secrets":[{"destinationDir":"etc-pki-entitlement","secret":{"name":"etc-pki-entitlement"}}]}}}' -n rhel-build
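The destinationDir values in the patch place the subscription manager configuration, the CA certificate, and the entitlement certificates into the build context, where the Dockerfile can copy them into the image before installing packages from entitled repositories. The actual Dockerfile lives in the Git repository referenced above and is not reproduced here; the following is only a rough sketch of that pattern, with an illustrative base image and package (the base image is pulled using the registry service account secret configured earlier):
FROM registry.redhat.io/rhel7:latest
# Copy the mounted subscription configuration and entitlement certificates into the image
COPY ./rhsm-conf /etc/rhsm
COPY ./rhsm-ca /etc/rhsm/ca
COPY ./etc-pki-entitlement /etc/pki/entitlement
# Install packages from entitled RHEL repositories, then remove the entitlement material from the final image
RUN yum -y install httpd && \
    yum clean all && \
    rm -rf /etc/pki/entitlement
EXPOSE 8080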
Start the Dockerfile build on OpenShift.
$ oc start-build rhel-build --follow -n rhel-build
Following a successful build, the new image is pushed to the internal OpenShift registry and an image stream is created in the project. To confirm that the image build worked correctly, the imagestream can be used to create an OpenShift application.
$ oc new-app rhel-build -n rhel-build
Create an edge route which will use the digicert certificate included on ARO.
$ oc create route edge --port 8080 --service rhel-build -n rhel-build
Curl the route to the application
$ curl https://$(oc get route rhel-build -o go-template='{{.spec.host}}')
Azure Red Hat OpenShift
Using the Universal Base Image (UBI)
Red Hat UBI provides complementary runtime languages and packages that are freely redistributable. If you’re new to the UBI, you can check out Scott McCarty’s excellent blog and demo as a primer. Using the UBI as a base for your next containerised application is a great way to build and deploy on Azure Red Hat OpenShift. The following steps demonstrate how to use UBI based on RHEL 8. 
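Before walking through the steps, it helps to see what a UBI-based Dockerfile might look like. The sketch below is illustrative only (the actual Dockerfile used by the example build lives in the Git repository referenced in the next steps), but it shows the key point: the UBI base image and its repositories can be pulled and built against without a host subscription:
FROM registry.access.redhat.com/ubi8/ubi:latest
# Packages from the freely redistributable UBI repositories install without entitlements
RUN yum -y install httpd && yum clean all
EXPOSE 8080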
Create a new OpenShift project.
$ oc new-project ubi-build
Create a build configuration in the project.
$ oc new-build https://github.com/grantomation/rhel-build.git --context-dir ubi-build --name ubi-build -n ubi-build
Follow the container build.
$ oc logs -f build/ubi-build-1
To confirm that the image build worked correctly, the generated imagestream can be used to create an OpenShift application.
$ oc new-app ubi-build -n ubi-build
Create an edge route which will use the digicert certificate included on ARO.
$ oc create route edge --port 8080 --service ubi-build -n ubi-build
Curl the route to the application.
$ curl https://$(oc get route ubi-build -o go-template='{{.spec.host}}')
And with that done, you’ve got an OpenShift cluster up and running in Azure, running RHEL based containers.
 
The post Building RHEL based containers on Azure Red Hat OpenShift appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Advanced Network customizations for OpenShift Install

Each organization has its own unique IT environment, and sometimes it will not fit within the network configuration which Red Hat OpenShift sets by default. Thus, it becomes essential to customize the installation for the target environment. In this blog we are going to showcase how to do the advanced network related customization and configuration needed to accomplish this. 
Planning 
The first step in implementing or deploying any product is determining the specifications of the target environment.
How do you define your target environment?

What will be the MTU size on your physical network?
OpenShift’s MTU size should always be smaller than the transport network’s MTU by at least 50 bytes if using OVS.
Similarly, if you are using a 3rd-party SDN plugin, or OpenShift’s OVN (which uses GENEVE and VXLAN encapsulation), make sure to account for the bytes added by its encapsulation (a quick worked example follows this list).
Is your virtualization platform already using VXLAN? If so, what port is it utilizing?
How many networks are laid out for each of the OpenShift nodes? Is there any preference to use any specific network for SDN?
Do you plan to bond together the different networks adaptors for redundancy purposes?
Which network isolation mode are you planning to use, i.e. network policy or multitenant?
Do you plan to use a DHCP server to provide persistent IPs and server names, or do you want to inject this information via ignition files?
Is your OpenShift cluster behind a proxy? Do you want to use enterprise NTP servers on your masters to avoid unnecessary latency issues?
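As a quick worked example for the MTU question above: with a standard 1500-byte transport MTU, an OpenShiftSDN VXLAN overlay would be configured with an MTU of 1500 - 50 = 1450 bytes (the value used later in this post), and on a 9000-byte jumbo-frame transport it would be 9000 - 50 = 8950 bytes. A Geneve-based OVN overlay typically needs more headroom, on the order of 100 bytes, so the corresponding values would be roughly 1400 and 8900 bytes. These overheads are typical figures; check the documentation for the exact encapsulation overhead of the SDN plugin you deploy.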

In this blog we are going to showcase how to do advanced network related customizations and configurations during the OpenShift deployment for the above stated points. Note that this blog should only be used for customizing the primary interface, i.e. the OpenShift Container Platform (OCP) SDN. For customizing secondary interfaces (i.e. net1, net2), the SRIOV Machine Configuration Operator (MCO) should be used directly.
Multiple networks scenario:
There are scenarios where customers will not be able to open up external/internet access on the internal network due to security concerns. As long as you can attach both public/external networks and the internal network to the OpenShift nodes, you will be able to do the connected deployment. All you need to do is inject the proper default gateway into your ignition files. This example describes the steps to generate the custom ignition file for such scenarios.
In order to make any customizations to the ignition file, we will use a tool called filetranspiler.
You can download it with the following commands:
curl -o ./utils/filetranspile https://raw.githubusercontent.com/ashcrow/filetranspiler/master/filetranspile

chmod +x ./utils/filetranspile
Install the python3-pyyaml package on your RHEL server. This package is required by filetranspiler.
yum -y install python3-pyyaml
Create a “fakeroot” filesystem for each OpenShift node you want to make an ignition file.
mkdir -p bootstrap/etc/sysconfig/network-scripts/
Create a network configuration inside the “fakeroot” filesystem. For example, in this scenario we will make these files for each of the interfaces.
cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens3
DEVICE=ens3
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.7.20
NETMASK=255.255.255.0
GATEWAY=192.168.7.1
DNS1=192.168.1.77
DOMAIN=ocp.example.com
PREFIX=24
DEFROUTE=yes
IPV6INIT=no
EOF

cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens4
DEVICE=ens4
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.20
NETMASK=255.255.255.0
GATEWAY=192.168.7.1
DNS1=192.168.1.77
DOMAIN=ocp.example.com
PREFIX=24
DEFROUTE=yes
IPV6INIT=no
EOF
Note: Here the default gateway is that of the public/external network.
Using filetranspiler, create a new ignition file based on the one created by openshift-install.
Please note that in this example, bootstrap.ign is my original ignition file created by openshift-install. Continuing with the example of my bootstrap server, it looks like this.
filetranspiler -i bootstrap.ign -f bootstrap -o bootstrap-static.ign
The syntax is:
filetranspiler -i $ORIGINAL -f $FAKEROOTDIR -o $OUTPUT
NOTE: If you’re using the container version of filetranspiler, you need to be in the directory where these files/dirs are. In other words, absolute paths won’t work.
In this way, you can create your customized ignition file for each of the node types; i.e. master or worker.
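If you keep one fakeroot directory per node type (for example bootstrap/, master/ and worker/, mirroring the three ignition files produced by openshift-install), the same command can be wrapped in a small shell loop. The directory names below are only a suggested convention:
for node in bootstrap master worker; do
  filetranspiler -i ${node}.ign -f ${node} -o ${node}-static.ign
done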
Multiple networks bonding scenario:
There are scenarios where customers will not be able to open up external/internet access on the internal network due to security concerns. As long as you can attach both public/external network and internal network to the OpenShift nodes, you will be able to do the connected deployment. All you need to do is to inject the proper default gateway into your ignition files. This example describes the steps to generate the custom ignition file for such scenarios. In this scenario, we also showcase how to pass the bond information into the ignition files.

Create a “fakeroot” filesystem structure  for each OpenShift node you want to make an ignition file.
mkdir -p bootstrap/etc/sysconfig/network-scripts/
Create a network configuration inside the “fakeroot” filesystem. For example in this scenario we will make these files for each of the interfaces.
cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens3
DEVICE=ens3
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.7.20
NETMASK=255.255.255.0
GATEWAY=192.168.7.1
DNS1=192.168.7.77
DOMAIN=ocp.example.com
PREFIX=24
DEFROUTE=yes
IPV6INIT=no
EOF

cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.70
NETMASK=255.255.255.0
GATEWAY=192.168.7.1
BONDING_OPTS="mode=5 miimon=100"
EOF

cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens4
TYPE=Ethernet
DEVICE=ens4
BOOTPROTO=none
ONBOOT=yes
HWADDR="08:00:27:69:60:c9"
MASTER=bond0
SLAVE=yes
EOF

cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens5
TYPE=Ethernet
DEVICE=ens5
BOOTPROTO=none
ONBOOT=yes
HWADDR="08:00:27:69:60:c6"
MASTER=bond0
SLAVE=yes
EOF

cat << EOF > bootstrap/etc/resolv.conf
search ocp.example.com
nameserver 192.168.7.1
EOF
Note: Here the default gateway is that of the public/external network.
Specify the IP address, netmask and bonding mode as per your requirements. In my example I am using 'mode=5', which provides fault tolerance and load balancing.
Using filetranspiler, create a new ignition file based on the one created by openshift-install.
Please note that in this example, bootstrap.ign is my original ignition file created by the openshift-install utility. Continuing with the example of my bootstrap server, it looks like this:
filetranspiler -i bootstrap.ign -f bootstrap -o bootstrap-static.ign
In this way, you can create your customized ignition file for each of the node types; i.e. master or worker.
Setting up MTU size or VXLAN port number scenario:
If the target environment requires a particular MTU size packet then you can follow the steps below.
Use the following command to create manifests:
$ ./openshift-install create manifests --dir=<installation_directory>
Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
$ touch <installation_directory>/manifests/cluster-network-03-config.yml
cat << EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
 name: cluster
spec:
 clusterNetwork:
 – cidr: 10.128.0.0/14
   hostPrefix: 23
 serviceNetwork:
 – 172.30.0.0/16
 defaultNetwork:
   type: OpenShiftSDN
   openshiftSDNConfig:
     mode: NetworkPolicy
     mtu: 1450
      vxlanPort: 4789
EOF
Note: The above MTU of 1450 is allocated because of the additional encapsulation overhead, i.e. always consider 50 bytes less than what is set on the physical transport side.
defaultNetwork: This section configures the network isolation mode for OpenShiftSDN. The allowed values are Multitenant, Subnet, or NetworkPolicy. The default value is NetworkPolicy.
mtu: The MTU for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 50 bytes less than the smallest node MTU value.
vxlanPort: The port to use for all VXLAN packets. The default value is 4789. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for VXLAN, since both SDNs use the same default VXLAN port number.
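Once the installation has completed, you can confirm that the customized values were applied by inspecting the Cluster Network Operator configuration on the running cluster. This is just a quick sanity check (it assumes you are logged in with cluster-admin privileges):
$ oc get networks.operator.openshift.io cluster -o yaml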
Alternatively, you can also specify the cluster network configuration for your OpenShift Container Platform cluster by setting the parameters for defaultNetwork in the Cluster Network Operator (CNO) custom resource (CR).
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
 name: cluster
spec:
 clusterNetwork: 
 – cidr: 10.128.0.0/14
   hostPrefix: 23
 serviceNetwork: 
 – 172.30.0.0/16
 defaultNetwork: 
   …
 kubeProxyConfig: 
   iptablesSyncPeriod: 30s 
   proxyArguments:
     iptables-min-sync-period: 
     – 30s
 
kubeProxyConfig: The parameters for this object specify the kube-proxy configuration. If you do not specify the parameter values, the Network Operator applies the displayed default parameter values.
iptablesSyncPeriod: The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.
iptables-min-sync-period: The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package.
Injecting static IP and server name scenario:
For static IPs, you need to generate new ignition files based on the ones that the OpenShift installer generated. You can use the filetranspiler tool to make this process a little easier. This is an example from the bootstrap node.
When using filetranspiler you first need to create a “fakeroot” filesystem.
mkdir -p bootstrap/etc/sysconfig/network-scripts/
Create your network configuration inside this fakeroot.
cat << EOF > bootstrap/etc/sysconfig/network-scripts/ifcfg-ens3
DEVICE=ens3
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.7.20
NETMASK=255.255.255.0
GATEWAY=192.168.7.1
DNS1=192.168.7.77
DNS2=8.8.8.8
DOMAIN=ocp4.example.com
PREFIX=24
DEFROUTE=yes
IPV6INIT=no
EOF
cat << EOF > bootstrap/etc/hostname
bootstrap-hostname
EOF
NOTE: Your interface WILL probably differ, be sure to determine the persistent name of the device(s) before creating the network configuration files.
Using filetranspiler, create a new ignition file based on the one created by openshift-install. Continuing with the example of my bootstrap server; it looks like this.
filetranspiler -i bootstrap.ign -f bootstrap -o bootstrap-static.ign
Injecting enterprise NTP server information scenario:
If the OpenShift cluster is sitting behind the corporate proxy, then it will be a good idea to use an enterprise NTP server rather than what is available on the internet to avoid any latency and firewall issues. One of the ways is to inject the NTP information directly into the ignition files if you are unable to inject it via your DHCP server.
When using filetranspiler you first need to create a “fakeroot” filesystem.
mkdir -p master/etc/
Create your chrony configuration inside this fakeroot.
cat << EOF > master/etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool 2.enterprise.pool.ntp.org iburst
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift
# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3
# Enable kernel synchronization of the real-time clock (RTC).
rtcsync
# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *
# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2
# Allow NTP client access from local network.
#allow 192.168.0.0/16
# Serve time even if not synchronized to a time source.
#local stratum 10
# Specify file containing keys for NTP authentication.
keyfile /etc/chrony.keys
# Get TAI-UTC offset and leap seconds from the system tz database.
leapsectz right/UTC
# Specify directory for log files.
logdir /var/log/chrony
# Select which information is logged.
#log measurements statistics tracking
EOF
Using filetranspiler, create a new ignition file based on the one created by openshift-install. Continuing with the example of my master server; it looks like this.
filetranspiler -i master.ign -f master -o master-static.ign
Conclusion:
As you can see from this post, with the help of the filetranspiler tool you can customize your ignition files at the deployment stage. This will help you deploy the OpenShift cluster in the target environment of your choice. There are other tools available to do such customizations at the ignition level, e.g. jqplay. And lastly, for further customizations of your OpenShift cluster, refer to this GitHub link.
The post Advanced Network customizations for OpenShift Install appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Innovate with Enterprise Design Thinking in the IBM Garage

We’ve all been there. You have an amazing idea that’s really exciting. Maybe it’s a home improvement project, or perhaps it’s a new business idea. You think about all the details required to make it real. But, once you get to the seventh action item, you’re not so excited anymore.
Sometimes when we realize the weight of the effort needed for our big ideas, it can crush our original excitement and momentum.
This is the crux of many failed initiatives.
So how can you move forward?
How to apply Enterprise Design Thinking and Lean Startup
Enterprise Design Thinking enables teams to think beyond what they consider possible and find truly innovative ideas. It’s about thinking big.
Lean Startup and a minimum viable product (MVP) are about thinking in small steps. What’s the smallest thing you can build efficiently to learn more about your biggest risk?
Combining the “bigness” of Design Thinking and the “smallness” of Lean Startup propels teams towards real solutions, but it can also trip up many teams. If you’re too focused on MVPs, you won’t come up with big ideas. If you’re too focused on big ideas, keeping an MVP to something that’s truly minimum is very challenging.
The key is that you can’t treat these as two separate exercises. They must be integrated seamlessly into one process. This lets teams think big but act small.
How IBM Garage Design Thinking Workshops help guide the journey
At the IBM Garage, our experts guide clients on their journey starting with a crisp definition of the opportunity they want to tackle. We then assemble a diverse group of stakeholders and users and bring them together for a two-day IBM Garage Design Thinking workshop.
Enterprise Design Thinking: Think big. Once assembled, it’s time to think big. In typical Enterprise Design Thinking style, we unpack the opportunity to find the part of the problem we want to tackle first — the part that once solved will have the most impact on the users and thus the business. Then we use the diversity in the room to find an array of innovative solutions to the problem, generally exploring more than 100 ideas before we focus in on the one with just the right balance between do-ability and awesomeness.
That right balance is different in every case, which is why having the right team of stakeholders and IBM Garage experts assembled is crucial. Technology is evolving so quickly that any one person’s notion of what ideas are and are not feasible is probably wrong. You need the team to be willing to proceed with the right idea, even if that idea initially looks risky.
Lean Startup: Find the approach. Day 1 of an IBM Garage Design Thinking workshop is about using Enterprise Design Thinking to think big. Day 2 is about applying a Lean Startup approach to drive that big idea to the right MVP.
First, we look at the vision and ask: “Are you confident enough in every aspect of this vision to be willing to jump in completely and invest whatever it takes to build it?”
If we really thought big on Day 1, the answer will almost always be, “no”.
Next, we explore all aspects of the vision that are holding the team back. For example, do they worry the market isn’t ready for the idea? Will the company legal team approve the project? Can we design something simple enough to allow the idea to reach the right audience?
Now, focusing on the biggest risk that the team wants to learn more about, we define a testable hypothesis, and identify the smallest thing needed to be able to test that hypothesis.
How to test the MVP solution
Some hypotheses can be tested without any coding, and if that’s the right MVP, of course, we do that. But the Garage has a bias toward building production pilots — we believe the best way to learn is by putting something real in the hands of real users.
Figuring out how to get something valuable into users’ hands in, typically, six to eight weeks requires as much creative thinking as identifying the big idea. This is why the IBM Garage views Enterprise Design Thinking and Lean Startup as two parts of a single method, not two separate phases of a project.
Client case study example: Mueller, Inc.
Let’s look at a real client example, Mueller, Inc., a manufacturer of steel buildings and components.
On day one of the Garage Workshop, we arrived at a vision. The team wanted to build a mobile ordering tool to empower contractors to make accurate materials quotes and place an order, all while on the job at a building site. The vision was straightforward, but it was a huge, innovative step for their business.
We knew building the app was possible, but there was some cost-prohibitive data normalization and integration required to make it happen.
In defining the MVP, the team made the critical decision of restricting the scope of the application to only those parts needed for a single type of project. This allowed the team to limit the amount of data normalization needed and get something useful into production.
Within two days of going live, Mueller was transacting real sales through the app.
The MVP provided value to real customers by enabling them to complete order requests faster and proved that such a solution had market value. Plus, the MVP app gave the Mueller team a better understanding of how to normalize their data. All of that in about eight weeks. The perfect MVP.
 
That’s the power of combining Enterprise Design Thinking with Lean Startup. That’s what the IBM Garage can do for you.
How could your business benefit from the IBM Garage experience? Schedule a no-cost visit with IBM Garage to investigate.
The post Innovate with Enterprise Design Thinking in the IBM Garage appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Citrix ADC in OpenShift Service Mesh

This is a guest post by Dhiraj Gedam, Principal Software Engineer, Citrix Systems.
Citrix is proud and thankful to have achieved Red Hat OpenShift Operator Certification. Operators enable users to deploy and manage resources in an OpenShift environment in an easier and more simplified manner. This blog post talks about the various benefits of the Citrix Cloud Native Stack and the deployment of Citrix ADC to act as an OpenShift Ingress. 
This post assumes that readers are familiar with Kubernetes, Istio, and Istio resources such as Gateway, VirtualService, etc. It is recommended to glance through this blog post to gain perspective on the aforementioned resources.  
In this blog, I shall talk about deploying Citrix ADC as Gateway in OpenShift Service Mesh using the Citrix ADC Istio Ingress Gateway Operator. 
Red Hat OpenShift Service Mesh
Red Hat OpenShift Service Mesh provides a platform for behavioral insight and operational control over microservices deployed in a service mesh. OpenShift Service Mesh is based on Istio open source project. Detailed information about OpenShift Service Mesh can be found here. 
Red Hat OpenShift Service Mesh is based on the Istio community release, with additional features and integration automation for OpenShift. In addition to delivering enhanced security and hardened, production-ready code, it adds features, such as tracing with Jaeger and visibility with Kiali, when deploying the Service Mesh on OpenShift Container Platform. This page describes the differences between Red Hat OpenShift Service Mesh and Istio. 
Citrix ADC as Ingress Gateway in Red Hat OpenShift Service Mesh
The Citrix ADC solution comes in various form factors: hardware-based (MPX), virtualized (VPX), and container-based (CPX). Hardware and virtual appliances can be deployed traditionally, whereas deployment of the container-based solution differs slightly, as will be evident in the rest of this post. Citrix provides an operator, namely the ‘Citrix ADC Istio Ingress Gateway Operator’, to facilitate the deployment of Citrix ADC as an Ingress Gateway in OpenShift Service Mesh. This single operator can be used to deploy the various form factors of Citrix ADC. 
 
Deploying Citrix ADC MPX or VPX as Ingress Gateway

Figure 1 Citrix ADC MPX/VPX as Ingress Gateway in Red Hat OpenShift Service Mesh
Before deploying Citrix ADC MPX/VPX as an Ingress Gateway, you need to establish connectivity between Citrix ADC and the OpenShift Container Platform. You can achieve this with a route-based configuration on Citrix ADC or by using the Citrix K8s Node Controller (CNC). This connectivity is required so the ADC can send packets to application pods inside the Kubernetes cluster. Citrix ADC also monitors application pods’ health status so requests go to healthy pods.
When Citrix ADC MPX/VPX is deployed as an Ingress Gateway device, the Istio-adaptor container primarily runs inside a pod managed by the Ingress Gateway deployment. Keep reading for more on the Istio-adaptor.
 
Deploying Citrix ADC CPX as Istio Ingress Gateway

Figure 2 Citrix ADC CPX as Istio Ingress Gateway
When Citrix ADC CPX is deployed as Ingress Gateway, CPX and Istio-adaptor, both run as containers inside the Ingress Gateway Pod.
Citrix Istio Adaptor
Citrix Istio Adaptor is open source software written in Go by Citrix Systems. Its main job is to automatically configure the Citrix ADC deployed in the Istio service mesh. 
Components such as Istio Pilot, Citadel, Mixer, and more comprise the Istio control plane. Pilot is the control plane component that provides service discovery to proxies in the mesh. It’s essentially a gRPC xDS server, and it’s also responsible for configuring proxies at runtime.
Istio-adaptor is a gRPC client to the xDS server and receives xDS resources such as clusters, listeners, routes, and endpoints from the xDS server over a secure gRPC channel. After receiving these resources, the Istio-adaptor converts them to the equivalent Citrix ADC configuration blocks and configures the associated Citrix ADC using RESTful NITRO calls.
This blog talks about Citrix Istio Adaptor in great detail.
 
Deploying Citrix ADC as Gateway using the Citrix ADC Istio Ingress Gateway Operator
Prerequisites

Active OpenShift Container Platform subscription
OpenShift Container Platform (OCP) 4.1/4.2 should be installed 
The appropriate version of the OCP CLI, i.e. the oc client tool
Red Hat OpenShift Service Mesh should be installed. Follow this link.

For deploying Citrix ADC VPX or MPX as an Ingress gateway:

Create a Kubernetes secret for the Citrix ADC user name and password using the following command:

oc create secret generic nslogin --from-literal=username=<citrix-adc-user> --from-literal=password=<citrix-adc-password>

 
Steps

Log in to the OpenShift Container Platform web console.
Create a project named citrix-system
Add the citrix-system project to the member list in the Service Mesh Member Roll using the information provided in Red Hat documentation.
Navigate to Catalog → OperatorHub.
Type Citrix into the filter box. Select and Install ‘Citrix ADC Istio Ingress Gateway Operator’.
Under Create Operator Subscription, select the following and then click Subscribe.

Installation Mode: specific namespace on the cluster. Select citrix-system
Update Channel: alpha
Approval Strategy: Automatic

Grant the SCC privileges below to the service accounts that will be used by the Ingress Gateway, using the following commands.

oc adm policy add-scc-to-user privileged -z builder  -n citrix-system
oc adm policy add-scc-to-user privileged -z default  -n citrix-system
oc adm policy add-scc-to-user privileged -z deployer -n citrix-system
oc adm policy add-scc-to-user anyuid     -z builder -n citrix-system
oc adm policy add-scc-to-user anyuid     -z default -n citrix-system
oc adm policy add-scc-to-user anyuid     -z deployer -n citrix-system

 
 8. Under citrix-system project, navigate to Operators

OpenShift 4.3: New Improved Topology View

The topology view in the Red Hat OpenShift Console’s Developer Perspective provides a visual representation of the application structure. It helps developers clearly distinguish one resource type from another, as well as understand the overall communication dynamics within the application. 
Launched with the 4.2 release of OpenShift, topology view has already earned a spotlight in the cloud-native application development arena. The constant feedback cycles and regular follow-ups on the ongoing trends in the developer community have helped in shaping a great experience in the upcoming release. This blog focuses on a few features in the topology view added for OpenShift 4.3.
1. Toggle between the list view and the graph view
In response to the user community, the topology view now comes with a toggle button to quickly switch between the list view and the graph view for a given project. While the graph view comes in very handy in use cases that require cognizance of the role played by individual components in the application architecture, the list view can be helpful for more data-focused and investigative tasks. This toggle enables seamless navigation between views, irrespective of the contrast in use cases.

2. Menu for contextual actions
The topology view has a list of components available as a part of the graph. There are various kinds of resource types, connectors, groupings, and individual items such as event sources, each of which supports a different set of actions in context. Users can access this menu for each of the listed items by right-clicking on them, which opens a dropdown list with all the available actions. Clicking anywhere outside the menu dismisses it.

3. Creating a binding between resources
The topology view allows for creating a connection between any given pair of resources by simply dragging a handle from the origin node and dropping it over a target node. It reduces the cognitive load on the developer by doing a smart assessment of whether an operator-managed backing service is available for creating the intended binding. In the absence of an operator-managed backing service, an annotation-based connection is created. 

4. Real-time visualization of pod transition
The topology view in 4.3 provides convenient and upfront access to scale your pod count up or down via the side panel. Similarly, users can also start a rollout or recreate pods for a given node from the contextual menu (accessed through a right-click or from the actions button on the side panel). Upon performing the associated interaction from the side panel to accomplish any of the mentioned actions, users get to see a real-time visualization of the transitions that the pods go through.

5. Deleting an application
Topology view now supports deleting an application from the graph view. By invoking the contextual menu on the given application grouping, either through a right-click or through the side panel, users can access the delete action. Upon confirming the action, the application group, comprised of the components that carry the associated label (as defined by the Kubernetes-recommended labels), is deleted.
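The grouping itself is driven by the Kubernetes-recommended labels on the workloads; in particular, the app.kubernetes.io/part-of label determines which application group a resource belongs to. As a quick, illustrative example (the deployment and application names here are hypothetical), a deployment can be placed into a group from the CLI with:
$ oc label deployment backend app.kubernetes.io/part-of=my-application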

6. Visualization of Event Sources sink
The topology view shows elements from Knative Eventing, namely event sources, which help give a developer quick insight into which event sources will trigger their application by looking at it visually.

7. Viewing Knative Services and associated revisions
Users are now able to view Knative Services and the associated Revisions/Deployments in the topology view. The revisions in a service that are in the active traffic block are displayed as a group on the topology view, along with information on their traffic consumption.

With the continuous evolution of Kubernetes related technology and the introduction of new practices and integrations, OpenShift is constantly updated to reflect the progression. 
Learn More
Interested in learning more about application development with OpenShift? Here are some resources which may be helpful:

Red Hat resources on application development on OpenShift: developers.redhat.com/openshift

Provide Feedback

Join our OpenShift Developer Experience Google Group, participate in discussions or attend our Office Hours Feedback session
Drop us an email with your comments about the OpenShift Console user experience.

 

 
The post OpenShift 4.3: New Improved Topology View appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Get Your Windows Apps Ready for Kubernetes

Kubernetes continues to evolve, with exciting new technical features that unlock additional real-world use cases. Historically, the orchestrator has been focused on Linux-based workloads, but Windows has started to play a larger part in the ecosystem; Kubernetes 1.14 declared Windows Server worker nodes as “Stable”.
The teams at Mirantis have been helping customers with Windows Containers for more than three years, beginning in earnest with the Modernizing Traditional Applications (MTA) program. At the time, the only orchestration option for Windows Containers was Swarm; however, the expansion of Kubernetes support has enabled Mirantis to apply its deep experience with Windows Container orchestration to the Kubernetes platform.
Why Windows Applications?
Simply put, there are still a significant number of Windows-based applications running in enterprise datacenters around the world, providing value to organizations. Development teams are often comprised of engineers well-versed and experienced in the C# language and .NET application framework, both of which regularly rank highly in StackOverflow’s yearly Developer Survey.
Such applications represent years of investment and engineering team enablement, but they also represent challenges across development, deployment, and operations. Containers provide a myriad of benefits to these workloads, including portability, security, and scalability.
The ending of support for Windows Server 2008 has created a situation in which many organizations are assessing their options for moving workloads onto a supported operating system. A variety of potential paths to take for such an effort exist, including:

Refactoring and Upgrading by re-developing .NET Framework applications into the more modern .NET Core is not a small task, and requires substantial time and people resources. This makes sense for a subset of an application portfolio, but becomes impractical when scaling to dozens or hundreds of applications.

Custom Support Agreements may be a short-term fix, but are extremely expensive and merely a bandaid that momentarily postpones more comprehensive remediation.

“Lifting and shifting” servers to a public cloud provider is an option to gain security fixes, but is also a short-term solution to a broader problem that comes with wholly different economic impacts, and additional technical architecture considerations.

Containerizing with Kubernetes enables workloads to pick up the benefits of containers while moving onto the modern Windows Server 2019 operating system, targeting either an on-premises environment or a public cloud as part of a broader cloud migration strategy.

Taking the Kubernetes path provides the most benefits for these legacy workloads, enabling an organization to standardize on how it builds, shares, and runs applications. A microservice application built last week and a monolithic application built last decade can run side-by-side on a single cluster, reducing and consolidating the number of platforms and operational overhead necessary to support an organization.
While each application is unique, there are a series of considerations that the Mirantis team focuses on when engaging with customers along the Windows Container journey: identity, storage, and logging.
Identity
The most common mechanism for user authentication and authorization in legacy .NET Framework applications is Integrated Windows Authentication (IWA). This scheme enables an application developer to easily add identity support to an application, and for that application to interact with Active Directory when running on a server that is joined to an Active Directory Domain Controller.
When an application utilizing IWA is containerized, the first hurdle is often how to integrate with Active Directory (AD). AD Domain Controllers were designed in the pre-container era, when a server would join and stay joined for years or decades. Container lifecycles are far shorter, with pods being created and destroyed regularly as part of orchestration operations. Instead of every container having to join and leave the domain, the pattern is to join the underlying host worker nodes to the domain, then pass a credential into the necessary containers. 
In this case, the credential used is a “Group Managed Service Account” (gMSA), a long-existing feature of Active Directory employed with containers to enable IWA. Support for gMSAs in Kubernetes has advanced swiftly over the past year, with 1.16 moving the feature to Beta. For workloads using IWA in non-container environments today, a mapping exercise is done to move permissions from a traditional AD user account to a gMSA account that can be used with the container. Once completed, Windows Containers can utilize IWA as-is without the need for costly changes to the code base’s authentication model.
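Once a GMSACredentialSpec resource and its admission webhook are in place (prerequisites not covered here), referencing the credential from a Windows pod comes down to a field in the pod's security context. The manifest below is only a hedged sketch based on the beta API; the credential spec name, image, and pod name are illustrative:
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: iwa-demo
spec:
  securityContext:
    windowsOptions:
      # Name of a GMSACredentialSpec resource created for the application's gMSA account
      gmsaCredentialSpecName: webapp-gmsa
  containers:
  - name: iwa-demo
    image: example.com/legacy-dotnet-app:latest
  nodeSelector:
    kubernetes.io/os: windows
EOF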
Storage
Before Twelve-Factor Applications became popular, it was common for monolithic workloads to maintain various “stateful” data within the application itself. When possible, however, the current recommendation is to externalize such stateful data into caches, databases, queues, or other mechanisms so that applications are more easily scalable. 
When an application can’t externalize its state data, it can use various Kubernetes features to ensure that if a pod is re-scheduled or destroyed, the data is still safe and usable by a future pod. For Linux pods, the Container Storage Interface (CSI) is the preferred method for storing stateful information, but CSI support for Windows pods is still maturing. In the interim, FlexVolume plugins are available for SMB and iSCSI that provide volume support for Windows pods. Once deployed to host nodes, Windows pods can then mount stateful storage directly into the pod at a specified file path, essentially providing a “removable” drive that can be “moved” to the new pod.
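To illustrate the interim FlexVolume approach, mounting an SMB share into a Windows pod looks roughly like the snippet below. Treat this strictly as a sketch: the driver name, secret, share path, and image are placeholders and depend on how the FlexVolume plugin has been deployed on your Windows worker nodes.
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: smb-demo
spec:
  containers:
  - name: smb-demo
    image: example.com/legacy-dotnet-app:latest
    volumeMounts:
    - name: smb-data
      mountPath: "C:\\data"
  volumes:
  - name: smb-data
    flexVolume:
      # Driver name is illustrative; it must match the plugin installed on the nodes
      driver: microsoft.com/smb.cmd
      secretRef:
        name: smb-creds
      options:
        source: "\\\\fileserver\\share"
  nodeSelector:
    kubernetes.io/os: windows
EOF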
Logging
A common pattern in Linux Containers is for applications to write information directly to the standard output (STDOUT) stream. This is the data that is visible in tools such as the Docker CLI and kubectl:

Windows applications do not follow this convention, however, instead writing data to Event Tracing for Windows (ETW), Performance Counters, custom files, and so on. Unfortunately, that means that when using tools such as the Docker CLI or kubectl, little to no data is available to aid in container debugging:

Fortunately, to help developers and operators, Microsoft has introduced an exciting open source tool called LogMonitor, which acts as a conduit between logging locations within Windows Containers and the container’s standard output stream. The Dockerfile is used to bring a binary into the image; that binary is configured with a JSON file to tailor itself to a specific application. Docker can then provide a logging experience similar to that of Linux Containers. On the tool’s roadmap are additional Kubernetes-related features, such as support for ConfigMaps.
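The wiring happens in the image's Dockerfile: the LogMonitor binary and its JSON configuration are copied into the image, and the entrypoint is wrapped so that the configured log sources are relayed to STDOUT. The excerpt below is a hedged sketch; the paths, application executable, and configuration contents are illustrative and should be adapted from the LogMonitor project's documentation:
# Hypothetical excerpt from a Windows container Dockerfile
COPY LogMonitor.exe LogMonitorConfig.json C:/LogMonitor/
WORKDIR /LogMonitor
# LogMonitor wraps the real application process and relays its logs to STDOUT
ENTRYPOINT ["C:\\LogMonitor\\LogMonitor.exe", "C:\\app\\MyLegacyApp.exe"]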
Summary
Containerizing .NET Framework and other Windows-based applications enables workloads to take advantage of Kubernetes’ capabilities for decreasing costs, increasing availability, and enhancing operational agility. Get started today with your applications to take advantage of how the community is rapidly adding Windows pod-related capabilities that further refine and mature the experience. 
Are you looking at moving Microsoft Windows-based apps to Kubernetes?  Get in touch to hear more about how our expertise can accelerate your adoption of Kubernetes through an enterprise-grade platform, or schedule a demo to see Docker Enterprise in action.
The post Get Your Windows Apps Ready for Kubernetes appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis