Come See Us at KubeCon San Diego!

 
We’ve compiled all of the information you need to find us at KubeCon San Diego, November 18 through 21. Talks by Red Hatters coming up at this giant event will focus on topics like KubeVirt, CRI-O, Jaeger, Tekton, Thanos and other open source Kubernetes projects. We’re also hosting two keynote sessions:
 

Tuesday, November 19, 2019, 9:47 AM

Keep Kubernetes Caffeinated
 (Exhibit Hall AB)

Erin Boyd
Today, we have a whole host of amazing coffee makers that can take a pod of coffee, brew it, deploy it into your cup, add the milk and sweetener, and deliver it just how you like it. In the same way, Kubernetes Operators are taking the complexity out of producing, deploying, and operating applications. One particular example of where Operators are making a big impact is storage. Storage features in Kubernetes are evolving to solve more complex problems such as data replication and support for object storage. Come and see how the Rook project is extending these storage capabilities to deliver your applications—just like your favorite cup of coffee.

 

Wednesday, November 20, 2019, 9:59 AM

E2E 5G Network Services are Going Cloud Native
 (Exhibit Hall AB)

Azhar Sayeed with Heather Kirksey (Linux Foundation) and Fu Qiao (China Mobile)
It's no secret that Kubernetes has gained significant traction in the cloud and enterprise software ecosystem, but less widely known is how this momentum is now moving into global telco networks as the next major area of adoption. Building on the momentum from a live keynote demo in Amsterdam last fall (see the demo here), a team made up of volunteers from several project communities, companies, and network operators has taken a cloud native approach to developing an E2E 5G network demonstration built on open source infrastructure. The demo will use a live prototype running in labs around the world using k8s and other open source technologies to deliver a fully containerized 5G network on stage in San Diego. The demo will showcase how the telecom industry is using cloud native software to build out its next-gen networks, and show solution providers what's possible in this exciting new space.

 
Of course, we’re also hosting the OpenShift Commons Gathering, this time, on a boat! Come learn about Kubernetes, Red Hat OpenShift and how open source software can be put to work.
The post Come See Us at KubeCon San Diego! appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Migrating your applications to OpenShift 4

If you’re looking for a path to upgrade your Red Hat OpenShift 3.7+ cluster to OpenShift 4.2, you’re in luck. The Cluster Application Migration tool (CAM) was built to migrate stateful and stateless applications from a source cluster to a destination cluster.
The initial intent of this tool is to address the OCP 3.7+ to OCP 4.2+ upgrade scenarios. That said, as requested by many OpenShift users, it will also be possible to use this tool to migrate applications between OCP 4 clusters.
This tool is based on two popular open source projects: Velero and Restic.

https://github.com/vmware-tanzu/velero

https://restic.net/

High-Level Architecture

Velero is installed on both your source and destination cluster, and object storage is used to store your backup information before it is restored on your target cluster. The OpenShift Container Platform Migration API orchestrates your migration from a central location. This API includes an easy-to-use user interface and is usually installed on your target cluster by an operator.
Migrating Persistent Volumes
The migration of your PVs was a major focus for us in building this migration tool. CAM 1.0 offers two different mechanisms to migrate your data: MOVE or COPY.
 1. Move the Persistent Volume
Moving PVs requires shared storage between your source and destination cluster (ex: NFS). This is the fastest option and offers minimal downtime as the data is not copied, but only re-attached to the destination cluster. 

 2. Copy the Persistent Volume
This is a two-step approach that supports all storage backends (it is the default option). Your data is first copied to your replication repository and then restored on your target cluster. You can first "stage" your migration to copy most of the data without any impact on your source cluster. You can then "migrate", which copies only the files that have changed since your last staging, significantly reducing downtime before the final cut-over.

The Migration Web UI
Your migrations can be fully executed from a simple web UI. The configuration steps are simple: 

 Add both your source and destination clusters by providing your endpoint and credentials.
Add an object storage repository by also providing the end point and your credentials.
Finally, create a migration plan listing all of the projects (namespaces) that you would like to migrate.
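Under the hood, the web UI drives this by creating migration custom resources on the cluster where the migration controller runs. As a rough, hypothetical sketch only (the API group, version, resource names, and field names below are illustrative and may differ in your CAM release), a migration plan might look something like this:
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: ocp3-to-ocp4-plan
  namespace: openshift-migration
spec:
  srcMigClusterRef:
    name: ocp3-source
    namespace: openshift-migration
  destMigClusterRef:
    name: host
    namespace: openshift-migration
  migStorageRef:
    name: my-replication-repository
    namespace: openshift-migration
  namespaces:
  - my-app-namespace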

Things to Know
 1. Cluster scoped resources
Cluster-scoped resources are currently not migrated by this tool, as they are not part of your namespace. Only resources inside your namespace are migrated. Our CPMA tool can help you find those resources and configure them on your new target cluster before executing your migration.
 2. Moving traffic to your new cluster
Once your migration is completed, your traffic must be redirected to your new cluster. The most common way to do this would be to either update your DNS entry or change your load balancer configuration. Using a load balancer in a multi-cluster configuration is a great way to reduce your downtime. If you are currently only using one cluster, this might be a good opportunity to look at the advantages of running a fully redundant cluster architecture.

 3. Cluster Admin Requirement
In CAM 1.0, you will need cluster admin privileges to perform a migration. This is something we hope to improve in the near future to allow app owners to migrate their own applications without such privilege.
 
The post Migrating your applications to OpenShift 4 appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Cloud innovation enhances fan experience

Hate standing in queues? Me too. In fact, I’ll avoid them even if there’s no alternative. This is how Nicco was born. I was at a sports bar, and the queue for drinks was so long that I refused to get in line. Over three hours, I had one drink and one snack and walked away feeling like my friends and I would have enjoyed ourselves more at home.
Consider this:

Approximately 81 percent of companies believe they provide great experiences. Only 8 percent of clients agree. [Source: 2018 Gartner Customer Experience in Marketing Survey]
86 percent of customers who received a great customer experience were likely to repurchase from the same company. [Source: Temkin Experience Ratings, 2018]

This got me thinking … How much does a seamless in-venue experience impact customer loyalty, and by extension, income?
Two core functions of modern venue experiences
1. Everyone enjoys live entertainment. No one enjoys queues.
We needed a solution designed to reduce queues and wait times — in bars, restaurants, stadiums or events — by allowing users to order and pay from their seats. If users can spend more time enjoying the action, they’ll be more likely to return. Select, order, collect. Simple.
2. Businesses are going in blind. Which products are being consumed? Where, and by whom?
We needed to create a customer engagement solution that would help businesses better understand buyer behaviour and create personalised customer experiences. We knew that a mobile app would be the perfect user interface to achieve both objectives and gain access to previously unseen data.
The story of Nicco: From iteration(s) to reality
Having no technical employees, we saw value in the IBM Garage when we gained access to a team of experienced architects, developers and designers that were ready to bring our idea to life. And so our journey began …
1. The importance of framing an idea.
During an Initial Framing Session, we looked at different business themes and screened benchmarks, determining that we needed to narrow our focus to stadium experience. While our dream was bigger, success means perfecting one use case before expanding to others, balancing the need for data collection capabilities with user experience.
2. Design thinking: Solving the problem through ideation.
During a two-day IBM Enterprise Design Thinking Workshop, we worked with the IBM Garage team to look at stadium experiences from the customer’s perspective. What would add the most value to an attendee’s experience? We did everything from persona creation and scenario mapping to wireframing. Here’s a full list of what goes into the process.
We agreed that the food and beverage purchasing process could be modernised and expedited with concepts such as mobile ordering and in-seat delivery. We talked through details such as building payment systems; creating iPhone, Android and vendor apps; and designing an analytics dashboard.
3. Build, test, iterate.
After a brief cloud innovation architecture workshop, we decided to focus on building an MVP (minimum viable product) for iOS users. Initially, we built a clickable wireframe prototype, which was helpful during discussions with investors.
We then completed the first Nicco MVP, an iOS app that allows a user to choose a merchant located within the stadium, view its menu, order, pay and receive a notification when the order is ready for pickup.
As of October 2019, we have completed three pilots at Lottoland Stadium, primarily to test functionality and interest. Through constant iteration, we determined the need for integration with a point of sale (POS) management system and an administration dashboard. Somewhat unexpectedly, the app also enhanced safety due to fewer people in queues blocking aisles and doors.
4. Building our stack.
We built our production solution on IBM Cloud, which gave us flexibility, openness, reliability, scale and speed. We leveraged technology from the IBM Cloud Catalog, including IBM Cloud Kubernetes Service for the application microservices, IBM Cloud App ID for user authentication, IBM API Connect for API management, IBM Cloud Object Storage for image storage, IBM Cloud Monitoring and IBM Cloud Log Analysis for application health and performance monitoring, and MongoDB for application data management.
What’s next for Nicco?
Over time, we plan to use Nicco’s detailed transaction data and IBM Watson to interpret customer profile data with AI. We will continue to iterate our application with IBM Garage based on user feedback, architectural inquiry, venue demand and new research that becomes available.
The IBM Garage team is our Nicco team. They’re engaged and passionate and provide us with a perspective that has been invaluable during this build. Frankly, we could have hired an offshore team to do this at a lower cost, but if we did that, we wouldn’t have had the confidence to tell large enterprises, stadiums and investors that the technology in our solution works. Whether you’re a startup or large enterprise, tapping into IBM Garage expertise and methodologies is a competitive advantage.
We’ll be back with updates soon. For now, I’ve got to run – Nicco just alerted me that my order is ready.
To learn about Nicco and our solution, check out this video. Want to experience the IBM Garage for yourself? Schedule a complimentary visit to the IBM Garage to get started.
The post Cloud innovation enhances fan experience appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Why Use Containers, Kubernetes, and OpenShift for AI/ML Workloads?

Containers and Kubernetes are proving to be very valuable in helping accelerate the Artificial Intelligence (AI) and Machine Learning (ML) lifecycle for organizations worldwide. ExxonMobil, BMW, Volkswagen, Discover Financial Services, the Ministry of Defense (Israel), and Boston Children's Hospital are some of the organizations that have operationalized Red Hat OpenShift, the industry-leading Kubernetes-based container platform, to accelerate data science workflows and build intelligent applications. These intelligent applications are helping achieve key business goals and providing competitive differentiation.
In a recent blog, I explained how these emerging cloud-native technologies are playing a vital role in helping solve ML lifecycle execution challenges and accelerate the delivery of intelligent applications. You may be thinking, "OK, so where do we start to learn about this topic?"
To help you get started on this journey, we have developed a short video that explains in under three minutes how containers, Kubernetes, and OpenShift can accelerate AI/ML initiatives for your organization. Whether you are working at your desk, driving, riding on a train, walking, or something else, this quick video will do the job for you! As always, feedback is highly appreciated.
 

The post Why Use Containers, Kubernetes, and OpenShift for AI/ML Workloads? appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

OpenShift 4.2 vSphere Install Quickstart

In this blog we will go over how to get up and running with an OpenShift 4.2 install on VMware vSphere. There are many ways to automate the creation of the necessary vSphere resources for installation, including Terraform and Ansible. In this blog, however, we will focus on getting familiar with the process, so I will go over how to do it manually.
Environment Overview
For this installation I am using vSphere version 6.7.0 and ESXi version 6.7.0 Update 3. I will be following the official documentation for installing OpenShift 4 on vSphere. There, you can read more about the prerequisites, including the need to set up DNS, DHCP, a load balancer, installation artifacts, and other ancillary services. I will be going over the prerequisites for my environment.
Prerequisites
It's important that you get familiar with the prerequisites by reading the official documentation for OpenShift. I will go over the prerequisites at a high level and link examples. It's important to note that, although this is user-provisioned infrastructure, the OpenShift 4 installer is specific about how things are named and what it expects to be there.
vSphere Credentials
I will be using my administrative credentials for vSphere. I will also be passing these credentials to the OpenShift 4 installer and, by extension, to the OpenShift cluster. It's not a requirement to do so, and you can install without passing the credentials. This will effectively turn your installation into a "bare metal" type of installation, and you'll lose the ability to dynamically create VMDKs for your applications at install time. You can set this up post-installation (we will go over that later).
DNS
The first consideration you need to take into account when setting up DNS for OpenShift 4 is the "cluster id". The "cluster id" uniquely identifies each OpenShift 4 cluster in your domain, and this ID also becomes part of your cluster's FQDN. The combination of your "cluster id" and your "domain" creates what I like to call a "cluster domain". For example, with a cluster id of openshift4 and my domain of example.com, the cluster domain (i.e. the FQDN) for my cluster is openshift4.example.com.
DNS entries are created using the $CLUSTERID.$DOMAIN cluster domain FQDN nomenclature. All DNS lookups will be based on this cluster domain. Using my example cluster domain, openshift4.example.com, I have the following DNS entries set up in my environment. Note that the etcd entries are pointed at the IPs of the masters, and they are named in the form etcd-$INDEX.
[chernand@laptop ~]$ dig master1.openshift4.example.com +short
192.168.1.111
[chernand@laptop ~]$ dig master2.openshift4.example.com +short
192.168.1.112
[chernand@laptop ~]$ dig master3.openshift4.example.com +short
192.168.1.113
[chernand@laptop ~]$ dig worker1.openshift4.example.com +short
192.168.1.114
[chernand@laptop ~]$ dig worker2.openshift4.example.com +short
192.168.1.115
[chernand@laptop ~]$ dig bootstrap.openshift4.example.com +short
192.168.1.116
[chernand@laptop ~]$ dig etcd-0.openshift4.example.com +short
192.168.1.111
[chernand@laptop ~]$ dig etcd-1.openshift4.example.com +short
192.168.1.112
[chernand@laptop ~]$ dig etcd-2.openshift4.example.com +short
192.168.1.113

Also, it’s important to set up reverse DNS for these entries as well (since you’re using DHCP, this is particularly important).
[chernand@laptop ~]$ dig -x 192.168.1.111 +short
master1.openshift4.example.com.
[chernand@laptop ~]$ dig -x 192.168.1.112 +short
master2.openshift4.example.com.
[chernand@laptop ~]$ dig -x 192.168.1.113 +short
master3.openshift4.example.com.
[chernand@laptop ~]$ dig -x 192.168.1.114 +short
worker1.openshift4.example.com.
[chernand@laptop ~]$ dig -x 192.168.1.115 +short
worker2.openshift4.example.com.
[chernand@laptop ~]$ dig -x 192.168.1.116 +short
bootstrap.openshift4.example.com.

The DNS lookup for the API endpoints also needs to be in place. OpenShift 4 expects api.$CLUSTERDOMAIN and api-int.$CLUSTERDOMAIN to be configured; they can both be set to the same IP address, which will be the IP of the load balancer.
[chernand@laptop ~]$ dig api.openshift4.example.com +short
192.168.1.110
[chernand@laptop ~]$ dig api-int.openshift4.example.com +short
192.168.1.110

A wildcard DNS entry needs to be in place for the OpenShift 4 ingress router, which is also a load balanced endpoint.
[chernand@laptop ~]$ dig *.apps.openshift4.example.com +short
192.168.1.110

In addition to the entries mentioned above, you'll also need to add SRV records. These records are needed for the masters to find the etcd servers. They need to be in the form of _etcd-server-ssl._tcp.$CLUSTERDOMAIN in your DNS server.
[chernand@laptop ~]$ dig _etcd-server-ssl._tcp.openshift4.example.com SRV +short
0 10 2380 etcd-0.openshift4.example.com.
0 10 2380 etcd-1.openshift4.example.com.
0 10 2380 etcd-2.openshift4.example.com.
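If you happen to manage these records in a BIND zone file, the entries above might look roughly like the following sketch (illustrative only; adjust the zone name, TTLs, and IPs to your environment):
$ORIGIN openshift4.example.com.
api                     IN A    192.168.1.110
api-int                 IN A    192.168.1.110
*.apps                  IN A    192.168.1.110
bootstrap               IN A    192.168.1.116
master1                 IN A    192.168.1.111
master2                 IN A    192.168.1.112
master3                 IN A    192.168.1.113
worker1                 IN A    192.168.1.114
worker2                 IN A    192.168.1.115
etcd-0                  IN A    192.168.1.111
etcd-1                  IN A    192.168.1.112
etcd-2                  IN A    192.168.1.113
_etcd-server-ssl._tcp   IN SRV  0 10 2380 etcd-0.openshift4.example.com.
_etcd-server-ssl._tcp   IN SRV  0 10 2380 etcd-1.openshift4.example.com.
_etcd-server-ssl._tcp   IN SRV  0 10 2380 etcd-2.openshift4.example.com.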

Please review the official documentation to read more about the prerequisites for DNS before installing.
DHCP
The certificates OpenShift configures during installation are used for communication between all the components of OpenShift, and they are tied to the IP addresses and DNS names of the Red Hat Enterprise Linux CoreOS (RHCOS) nodes.
Therefore it's important to have DHCP with address reservation in place. You can do this with MAC address filtering for the IP reservation. When creating your VMs, you'll need to take note of the assigned MAC addresses in order to configure your DHCP server for IP reservation.
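As an illustration, if you run ISC dhcpd, a reservation might look something like this sketch (the MAC addresses are placeholders; replace them with the ones assigned to your VMs):
host master1 {
  hardware ethernet 00:50:56:00:00:11;
  fixed-address 192.168.1.111;
}
host worker1 {
  hardware ethernet 00:50:56:00:00:14;
  fixed-address 192.168.1.114;
}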
Load Balancer
You will need a load balancer to sit in front of the APIs, both internal and external, and the OpenShift router. Although Red Hat has no official recommendation as to which load balancer to use, one that supports SNI is necessary (most load balancers do this today).
You will need to configure port 6443 and 22623 to point to the bootstrap and master nodes. The below example is using HAProxy (NOTE that it must be TCP sockets to allow SSL passthrough):
frontend openshift-api-server
bind *:6443
default_backend openshift-api-server
mode tcp
option tcplog

backend openshift-api-server
balance source
mode tcp
server btstrap 192.168.1.116:6443 check
server master1 192.168.1.111:6443 check
server master2 192.168.1.112:6443 check
server master3 192.168.1.113:6443 check

frontend machine-config-server
bind *:22623
default_backend machine-config-server
mode tcp
option tcplog

backend machine-config-server
balance source
mode tcp
server btstrap 192.168.1.116:22623 check
server master1 192.168.1.111:22623 check
server master2 192.168.1.112:22623 check
server master3 192.168.1.113:22623 check

You will also need to configure 80 and 443 to point to the worker nodes. The HAProxy configuration is below (keeping in mind that we’re using TCP sockets):
frontend ingress-http
bind *:80
default_backend ingress-http
mode tcp
option tcplog

backend ingress-http
balance source
mode tcp
server worker1 192.168.1.114:80 check
server worker2 192.168.1.115:80 check

frontend ingress-https
bind *:443
default_backend ingress-https
mode tcp
option tcplog

backend ingress-https
balance source
mode tcp
server worker1 192.168.1.114:443 check
server worker2 192.168.1.115:443 check

More information about load balancer configuration (and general networking guidelines) can be found in the official documentation.
Web server
A web server is needed in order to hold the ignition configurations used to install RHCOS. Any web server will work as long as it can be reached by the bootstrap, master, and worker nodes during installation. I will be using Apache. I created a directory specifically for the ignition files:
[root@webserver ~]# mkdir -p /var/www/html/ignition/
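Note that the append-bootstrap.ign file shown later in this post fetches the bootstrap ignition over port 8080, so if your Apache instance is only listening on the default port 80, either adjust that URL or add a Listen directive. A minimal sketch, assuming a stock httpd.conf and that the port is open in your firewall:
# /etc/httpd/conf/httpd.conf (or a drop-in file under conf.d/)
Listen 8080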

Artifacts
You will need to obtain the installation artifacts by visiting try.openshift.com; there you can log in and click on "VMware vSphere" to get the installation artifacts. You will need:

OpenShift4 Client Tools
OpenShift4 OVA
Pull Secret

You will need to put the client and the installer in your $PATH; in my example, I put mine in /usr/local/bin.
[chernand@laptop ~]$ which oc
/usr/local/bin/oc
[chernand@laptop ~]$ which kubectl
/usr/local/bin/kubectl
[chernand@laptop ~]$ which openshift-install
/usr/local/bin/openshift-install

I’ve also downloaded my pullsecret as pull-secret.json and saved it under a ~/.openshift directory I created.
[chernand@laptop ~]$ file ~/.openshift/pull-secret.json
/home/chernand/.openshift/pull-secret.json: JSON data

An SSH key is needed. This is used to log in to the RHCOS nodes if you ever need to debug the system.
[chernand@laptop ~]$ file ~/.ssh/id_rsa.pub
/home/chernand/.ssh/id_rsa.pub: OpenSSH RSA public key
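If you don't already have a key pair, you can generate one with ssh-keygen; the path below is just an example:
[chernand@laptop ~]$ ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa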

For more information about SSH and RHCOS, visit the official documentation site.
Installation
Once you have the prerequisites in place, you’re ready to begin the installation. The current installation of OpenShift 4 on vSphere must be done in stages. I will go over each stage step by step.
vSphere Preparations
In your vSphere web UI, after you login, navigate to “VMs and Templates” (it’s the icon that looks like a piece of paper). From here right click on your datacenter and select New Folder → New VM and Template Folder. Name this new folder the name of your cluster id. In my case, I named mine openshift4. You should have a new folder that looks like this.

Next, import the OVA by right clicking the folder and select “Deploy OVF Template”. Make sure you select the folder for this cluster as the destination, then click next.

Go ahead and select an ESXi host for the destination compute resource; after that is done, it will display the OVA information.

After this is displayed go ahead and click “Next”. This will display the storage destination dialog. Choose the appropriate destination datastore, and set the virtual disk format to “Thin” if you wish.

The next screen asks you to select a destination virtual network. I am using the default "VM Network", so I accept the defaults.

After clicking “Next”, the “Customize Template” section comes up. We won’t be customizing the template here, so leave these blank and click “Next”.

The next page will give you an overview with the title “Ready To Complete”, click “Next” to finish the importing of the OVA.

The OVA template should be in your cluster folder. It should look something like this:

Next, right click the imported OVA and select “Edit Settings”. The “Edit Settings” dialog box appears and should look like this:

Click on the “VM Options” and expand the “Advanced” section. Set “Latency Sensitivity” to “High”.

Next, click on “Edit Configuration…” next to the “Configuration Parameters” section. You will add the following:

guestinfo.ignition.config.data and set the value to changeme
guestinfo.ignition.config.data.encoding set this value to base64
disk.EnableUUID set this value to TRUE

It should look something like this:

Click “OK” to go back to the “VM Options” page and then click “OK” again to save these settings.
Now, right click the imported OVA and select Clone → Clone to Template. The "Clone Virtual Machine To Template" wizard starts. It'll ask you to name this template and where to store it. I will be creating the master template first, so I will name it "master-template" and save it in my "openshift4" folder. (You can also convert the imported OVA to a template directly instead of cloning it.)

On the next screen select a compute destination. Choose one of your ESXi hosts, and click “Next”.
On the following screen, select the appropriate datastore for your environment (make sure you select “Thin” as the disk format) and click “Next”.

Next, there will be the “Ready to complete” page, giving you an overview.

Click on “Finish” to finish the creation of the master template.
Now you will do the same steps AGAIN, except you’ll be creating a template for the workers/bootstrap nodes. I named this template “worker-bootstrap-template”. When you are finished, you should have something like this.

NOTE: You can also have just one OpenShift 4 template and adjust the CPU/RAM if desired. If you plan on churning lots of nodes for multiple deployments then multiple templates may make more sense.

Generate Install Configuration
Now that you've prepped vSphere for installation, you can go ahead and generate the install-config.yaml file. This file tells OpenShift about the environment that you're going to install into. Before you create this file, you'll need an installation directory to store all your artifacts. You can name this directory whatever you like; I'm going to name mine openshift4.
[chernand@laptop ~]$ mkdir openshift4
[chernand@laptop ~]$ cd openshift4/

I’m going to export some environment variables that will make the creation of the install-config.yaml file easier. Please substitute your configuration where applicable.
[chernand@laptop openshift4]$ export DOMAIN=example.com
[chernand@laptop openshift4]$ export CLUSTERID=openshift4
[chernand@laptop openshift4]$ export VCENTER_SERVER=vsphere.example.com
[chernand@laptop openshift4]$ export VCENTER_USER="administrator@vsphere.local"
[chernand@laptop openshift4]$ export VCENTER_PASS='supersecretpassword'
[chernand@laptop openshift4]$ export VCENTER_DC=DC1
[chernand@laptop openshift4]$ export VCENTER_DS=datastore1
[chernand@laptop openshift4]$ export PULL_SECRET=$(< ~/.openshift/pull-secret.json)
[chernand@laptop openshift4]$ export OCP_SSH_KEY=$(< ~/.ssh/id_rsa.pub)

Once you’ve exported those, go ahead and create the install-config.yaml file in the openshift4 directory by running the following:
[chernand@laptop openshift4]$ cat <<EOF > install-config.yaml
apiVersion: v1
baseDomain: ${DOMAIN}
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ${CLUSTERID}
networking:
  clusterNetworks:
  - cidr: 10.254.0.0/16
    hostPrefix: 24
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  vsphere:
    vcenter: ${VCENTER_SERVER}
    username: ${VCENTER_USER}
    password: ${VCENTER_PASS}
    datacenter: ${VCENTER_DC}
    defaultDatastore: ${VCENTER_DS}
pullSecret: '${PULL_SECRET}'
sshKey: '${OCP_SSH_KEY}'
EOF

I’m going over the options at a high level:

baseDomain – This is the domain of your environment.
metadata.name – This is your clusterid
Note: this makes all FQDNs fall under the openshift4.example.com domain.
platform.vsphere – This is your vSphere specific configuration. This is optional and you can find a “standard” install config example in the docs.
pullSecret – This pull secret can be obtained by going to cloud.redhat.com
Note: I saved mine as ~/.openshift/pull-secret.json
sshKey – This is your public SSH key (e.g. id_rsa.pub)

NOTE: The OpenShift installer removes this file during the install process, so you may want to keep a copy of it somewhere.
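A simple way to do that is to back the file up before running the installer, for example:
[chernand@laptop openshift4]$ cp install-config.yaml ~/install-config.yaml.backup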

Create Ignition Files
The next step in the process is to create the installer manifest files using the openshift-install command. Keep in mind that you need to be in the install directory you created (in my case that’s the openshift4 directory).
[chernand@laptop openshift4]$ openshift-install create manifests
INFO Consuming "Install Config" from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings

Note that the installer warns you that the masters are schedulable. For this installation, we need to set the masters to be unschedulable.
[chernand@laptop openshift4]$ sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml
[chernand@laptop openshift4]$ cat manifests/cluster-scheduler-02-config.yml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}

To find out more about why you can’t run workloads on OpenShift 4.2 on the control plane, please refer to the official documentation.
Once the manifests are created, you can go ahead and create the ignition files for installation.
[chernand@laptop openshift4]$ openshift-install create ignition-configs
INFO Consuming "Master Machines" from target directory
INFO Consuming "Openshift Manifests" from target directory
INFO Consuming "Worker Machines" from target directory
INFO Consuming "Common Manifests" from target directory

Next, create an append-bootstrap.ign ignition file. This file will tell RHCOS where to download the bootstrap.ign file to configure itself for the OpenShift cluster.
[chernand@laptop openshift4]$ cat <<EOF > append-bootstrap.ign
{
  "ignition": {
    "config": {
      "append": [
        {
          "source": "http://192.168.1.110:8080/ignition/bootstrap.ign",
          "verification": {}
        }
      ]
    },
    "timeouts": {},
    "version": "2.1.0"
  },
  "networkd": {},
  "passwd": {},
  "storage": {},
  "systemd": {}
}
EOF

Next, copy the bootstrap.ign file over to the web server.
[chernand@laptop openshift4]$ scp bootstrap.ign root@192.168.1.110:/var/www/html/ignition/

We'll need the base64 encoding of each of the ignition files we're going to pass to vSphere when we create the VMs. Do this by encoding the files and putting the result in a file for later use.
[chernand@laptop openshift4]$ for i in append-bootstrap master worker
do
base64 -w0 < $i.ign > $i.64
done
[chernand@laptop openshift4]$ ls -1 *.64
append-bootstrap.64
master.64
worker.64

You are now ready to create the VMs.
Creating the Virtual Machines
Log back into the vSphere webui to create the virtual machines from the templates you created. Navigate to “VMs and Templates” (the icon that looks like a sheet of paper); and then right click the “worker-bootstrap-template” and select New VM From this Template… This brings up the “Deploy From Template” wizard. Name this VM “bootstrap” and make sure it’s in the openshift4 folder.

After you click next, select one of your ESXi hosts in your cluster as a destination compute resource and click “Next”. On the next page, it’ll ask you to select a datastore for this VM. Select the appropriate store for your cluster and make sure you thin provision the disk.

After clicking next, check off “Customize this virtual machine’s hardware” on the “Select clone options” page.

After you click next, it’ll bring up the “Customize hardware” page. For the bootstrap we are setting 4 CPUs, 8GB of RAM, 120GB of HD space, and I will also set the custom MAC address for my DHCP server. It should look something like this:

On that same screen click on “VM Options” and expand the “Advanced” menu. Scroll down to the “Configuration Parameters” section and click on “Edit Configuration…”. This will bring up the parameters menu. There, you will change the guestinfo.ignition.config.data value from changeme to the contents of your append-bootstrap.64 file.
[chernand@laptop openshift4]$ cat append-bootstrap.64

Here is a screenshot of my configuration:

Click “OK” and then click “Next” on the “Customize Hardware” screen. This will bring you to the overview page.

Click “Finish”, to create your bootstrap VM.
You need to perform these steps at least 5 more times (3 times for the masters and 2 more times for the workers). Use the following table to configure your servers, which is based on the resource requirements listed on the official documentation.

Machine    vCPU    RAM      Storage    guestinfo.ignition.config.data
master     4       16 GB    120 GB     Output of: cat openshift4/master.64
worker     2       8 GB     120 GB     Output of: cat openshift4/worker.64

Once you’ve created your 3 masters and 2 workers, you should have 6 VMs in total. One for the bootstrap, three for the masters, and two for the workers.

Now boot the VMs. It doesn’t matter which order you boot them in, but I booted mine in the following order:

Bootstrap
Masters
Workers

Bootstrap Process
Back on the installation host, you can now wait for the bootstrap process to complete using the OpenShift installer.
[chernand@laptop openshift4]$ openshift-install wait-for bootstrap-complete --log-level debug
DEBUG OpenShift Installer v4.2.0
DEBUG Built from commit 90ccb37ac1f85ae811c50a29f9bb7e779c5045fb
INFO Waiting up to 30m0s for the Kubernetes API at https://api.openshift4.example.com:6443…
INFO API v1.14.6+2e5ed54 up
INFO Waiting up to 30m0s for bootstrapping to complete…
DEBUG Bootstrap status: complete
INFO It is now safe to remove the bootstrap resources

Once you see this message, you can safely delete the bootstrap VM and continue with the installation.
Finishing Install
Once the bootstrap process is complete, the cluster is actually up and running, but it is not yet in a state where it's ready to receive workloads. To finish the install process, first export the KUBECONFIG environment variable.
[chernand@laptop openshift4]$ export KUBECONFIG=~/openshift4/auth/kubeconfig

You can now access the API. You first need to check if there are any CSRs that are pending for any of the nodes. You can do this by running oc get csr, which will list all the CSRs for your cluster.
[chernand@laptop openshift4]$ oc get csr
NAME AGE REQUESTOR CONDITION
csr-4hn7m 6m36s system:node:master3.openshift4.example.com Approved,Issued
csr-4p6jz 7m8s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-6gvgh 6m21s system:node:worker2.openshift4.example.com Approved,Issued
csr-8q4q4 6m20s system:node:master1.openshift4.example.com Approved,Issued
csr-b5b8g 6m36s system:node:master2.openshift4.example.com Approved,Issued
csr-dc2vr 6m41s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-fwprs 6m22s system:node:worker1.openshift4.example.com Approved,Issued
csr-k6vfk 6m40s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-l97ww 6m42s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued
csr-nm9hr 7m8s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued

You can approve any pending CSRs by running the following command (please read more about certificates in the official documentation):
[chernand@laptop openshift4]$ oc get csr --no-headers | awk '{print $1}' | xargs oc adm certificate approve

After you’ve verified that all CSRs are approved, you should be able to see your nodes.
[chernand@laptop openshift4]$ oc get nodes
NAME STATUS ROLES AGE VERSION
master1.openshift4.example.com Ready master 9m55s v1.14.6+c07e432da
master2.openshift4.example.com Ready master 10m v1.14.6+c07e432da
master3.openshift4.example.com Ready master 10m v1.14.6+c07e432da
worker1.openshift4.example.com Ready worker 9m56s v1.14.6+c07e432da
worker2.openshift4.example.com Ready worker 9m55s v1.14.6+c07e432da

In order to complete the installation, you need to add storage to the image registry. For testing clusters, you can set this to emptyDir (for more permanent storage, please see the official doc for more information).
[chernand@laptop openshift4]$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Please note that using a VMDK is not supported for the registry.

At this point, you can now finish the installation process.
[chernand@laptop openshift4]$ openshift-install wait-for install-complete
INFO Waiting up to 30m0s for the cluster at https://api.openshift4.example.com:6443 to initialize…
INFO Waiting up to 10m0s for the openshift-console route to be created…
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/chernand/openshift4/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.openshift4.example.com
INFO Login to the console with user: kubeadmin, password: STeaa-LjEB3-fjNzm-2jUFA

Once you've seen this message, the install is complete and the cluster is ready to use. If you provided your vSphere credentials, you'll have a StorageClass already set up.
[chernand@laptop openshift4]$ oc get sc
NAME PROVISIONER AGE
thin (default) kubernetes.io/vsphere-volume 13m

You can use this StorageClass to dynamically create VMDKs for your applications.
If you didn’t provide your vSphere credentials, you can consult the VMware Documentation site for how to set up storage integration with Kubernetes.
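For reference, the StorageClass that the installer sets up uses the in-tree vSphere provisioner; a roughly equivalent definition you could create yourself (a sketch, assuming the vSphere cloud provider is configured on the cluster) looks like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin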
Conclusion
In this blog we went over how to install OpenShift 4 on VMware using the UPI (user-provisioned infrastructure) method with DHCP. We also demonstrated the vSphere integration that allows OpenShift to create VMDKs for applications. In my next blog, I will go over how to install using static IPs, so stay tuned!
Red Hat OpenShift Container Platform and VMware vSphere are a great combination for running an enterprise container platform on a virtual infrastructure. For the last several years our joint customers have successfully deployed OpenShift on vSphere for their production ready applications.
We invite you to try OpenShift 4 on VMware, and run enterprise ready Kubernetes on vSphere today!
The post OpenShift 4.2 vSphere Install Quickstart appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

How to build an edge cloud part 1: Building a simple facial recognition system

The post How to build an edge cloud part 1: Building a simple facial recognition system appeared first on Mirantis | Pure Play Open Cloud.
If you look at the internet, there’s a lot of talk about edge clouds and what they are — from a conceptual level.  But not too many people are telling you how to actually build one. Today we’re going to start to change that.
In the next few articles, we’re going to build a simple edge cloud that demonstrates the basic concepts behind why you’d want to use them and how they work.  We’ll be emulating an edge-cloud based surveillance camera system that analyses a live video stream looking for faces, then notifies the user when a stranger appears on their doorstep.
The target edge cloud architecture
The idea behind an edge cloud is that it moves processing closer to where the actual data is, but moves it to more powerful hardware where necessary.  Consider this architecture:

From a general standpoint, we have the end user client, which interacts with an edge cloud, which then feeds into a regional or central cloud. But let’s make that more concrete.
In our surveillance camera example, the client is the camera itself.  This might be a doorbell camera, or a webcam, or even someone’s cell phone.  In another context, it might be a gaming device, a cash register, an industrial sensor, or any other Internet of Things (IoT) object.
The client feeds to the edge cloud.  So in this case, the camera is sending a video stream to an application on an edge cloud.  In the real world, this might be a cloud that’s located directly in a cell tower handling local transactions, or it might be a small cloud inside a retail establishment.  For the purpose of our example, the edge cloud will host an application that analyses the video and looks for frames that include faces.
The edge cloud can feed to a regional cloud, or directly to a central cloud.  For example, in a real environment, the edge cloud might report transactions and inventory levels back to a cloud in a nearby corporate data center. In our example, the edge cloud will feed into a regional cloud, which will take any frames that include faces and identify the people in those frames by comparing them against “known” occupants and visitors.  If a “stranger” is found, the regional cloud reports the incident to the central cloud.
The central cloud is the nerve center for the entire operation; in a real world application this might be the main corporate database or other centralized systems. In our example, the central cloud is responsible for making decisions on what to do if a stranger is seen in the surveillance camera. It might alert law enforcement or take other actions. For our purposes, we’ll send an email with the stranger’s photo so the user can make a judgement of what to do.
Over the course of the next several articles, we’ll go through the process of building up these clouds so you can see how they can work together.  We’ll create containers managed by Kubernetes, look at how we can run VM-based resources in those clouds, and even look at options for storage that will help us move data around without having to do it explicitly.
For the moment, however, we need to start with the basics: creating the pieces of functionality our cloud is going to need.
Getting started: detecting faces in the video
The first piece of functionality we’re going to build is the ability to detect when faces appear in a video.  This may seem like a complicated process, but it’s actually pretty straightforward, especially using a library such as OpenCV, which is made specifically for computer vision tasks.
You can find a simple explanation of how face detection works in this article, or more details of the math behind it here, but the short version is that OpenCV has pre-trained classifiers that know how to recognize facial features such as eyes, noses, ears, and so on.  We’ll make use of those classifiers to detect whether and where OpenCV sees a face.
Start by making sure you have Python 3 installed, then install OpenCV using pip:
pip3 install opencv-python
Now we're ready to get started. In our example we're assuming that we have a connected video camera streaming into the system, but if you don't actually have a connected camera you want to hack into at the moment (and I don't), we can go ahead and use our webcam via OpenCV's capabilities.
Create a new file called camera.py and add the following code:
import cv2

# For webcam
cap = cv2.VideoCapture(0)

# For predefined video file
# cap = cv2.VideoCapture('filename.mp4')

while True:
   # Get a single frame
   found_frame, frame = cap.read()

   # Display the captured frame
   cv2.imshow('Video Feed', frame)

   # Stop when escape key is pressed
   k = cv2.waitKey(30) & 0xff
   if k==27:
       break

cap.release()
After importing the OpenCV library, we’re creating an object that represents the video we’re going to analyze.  In my case I’m going to use the webcam, but you also have the option to use a predefined video file instead.
Once we’ve done that, we’re setting up an infinite loop that goes through and reads the current frame from the camera, returning a boolean value representing whether an image was successfully acquired, as well as the actual image frame itself.  From there we’re displaying the frame in a window called “Video Feed”.
Now, although what we’re seeing is going to LOOK like video, it’s actually just a series of frames, each displayed for 30 ms.  To make that happen, we’re using the cv2.waitKey() function; if we didn’t, the image wouldn’t appear at all — or would disappear so fast it would be like it never happened.  If you set the delay to 0, the image will remain static until the user presses a key.
Since we’re looking for the key anyway we can use this function as a way to exit the loop, using the 0xff mask to help get the ASCII value of the actual key pressed.
Finally, we’re releasing the camera object to clean things up. 
Now if you go ahead and run this file with Python 3, you’ll see a window that shows what your webcam sees.

Now aside from the questionable taste of my bedroom furniture, you can see that we’ve got a pretty large window, and that it’s titled with the name we gave it.  If you try to close this window, you’ll find the next frame just pops right up. To close it you need to either hit the escape key or go back to the terminal and press CTRL-C.
Now let's start actually looking for faces in these frames. The first thing we need to do is get the training data for the classifier. The CascadeClassifier actually has a number of different training files we can use depending on what we're looking for, but we want the whole face, so download the haarcascade_frontalface_default.xml file. (In the OpenCV GitHub repo you can see some of your other options, such as configurations for just eyes, or for smiles, or for Russian license plates, for that matter.) The important thing is that the classifier has to load the proper training data to find specific objects.
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# To capture video from webcam.

Now we can go ahead and use the classifier to detect the faces.  To make things easier for the classifier, we’ll first convert the frame to a grayscale image:

while True:
   # Get a single frame
   found_frame, frame = cap.read()

   # Convert to grayscale and detect faces
   gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
   faces = face_cascade.detectMultiScale(gray, 1.1, 4)
   for (x, y, w, h) in faces:
       cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)

   # Display the captured frame
   cv2.imshow('Video Feed', frame)

After we convert the color, we can go ahead and do the detection.  In this case we’re using the detectMultiScale() function, which uses a progressively more detailed look at the image.  (For a more detailed explanation of how it works and the parameters involved, see the documentation.)
The detection method gives us a list of values that represent the area of the image in which it thinks there’s a face, and we can go ahead and add a rectangle to the image to show where it thinks those faces are.  We’ll specify the image, the upper left and lower right corners, the color (note that this is in blue-green-red, not red-green-blue) and the thickness of the line. (You can use the cv2.circle() function as well.)
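For example, if you prefer a circle, a minimal sketch that derives the center and radius from the same x, y, w, h values might look like this:
for (x, y, w, h) in faces:
   # Center of the detected face box and a radius large enough to cover it
   center = (x + w // 2, y + h // 2)
   radius = max(w, h) // 2
   cv2.circle(frame, center, radius, (255, 0, 0), 2)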
Now if we run it we get a look at where OpenCV thinks the face is in the image:

Notice that I said where it THINKS the face is in the image; I wasn’t able to capture it here but you’re virtually guaranteed to get false positives in these images.  For example, for some reason it sees faces in my venetian blinds. (Which is kind of creepy, actually.)
Later, we’ll be vetting these faces so that we don’t get a ton of false positives sent to the user.
Now we need to deal with the frames that have faces in them.  In the real world we may send them on immediately, but for now let’s just save them to disk:
import cv2

i = 0
def send_image(face_img):
   global i
   i = i+1
   if i % 10 == 0:
       filename = "saved"+str(i)+".jpg"
       cv2.imwrite(filename, face_img)

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

   gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
   faces = face_cascade.detectMultiScale(gray, 1.1, 4)

   for (x, y, w, h) in faces:
       cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)

   if len(faces) > 0:
       send_image(frame)

   # Display the captured frame
   cv2.imshow('Video Feed', frame)

First we’re creating the new function send_image(), and telling it to save every 10th image, so we don’t overwhelm the server (or the user).  Then just before we display the image, we’re checking to see if it has any faces, and if so, we’re sending it to the send_image() function, where it’ll get saved to disk.  Note that because we’re doing this AFTER the cv2.rectangle() call, the saved image does have the rectangles in it.
In the next section we’ll send the image on to the next step, but first we need to build that script.
Sending faces to the regional cloud for recognition
In later parts of this series, we’ll look at different ways to share data around a cloud architecture, but for the moment, we’re simply going to send images over HTTP.  To do that, we need to create a simple web server. Rather than building an entire WSGI architecture, we’re going to use Flask, which lets us easily create a development HTTP server.  Installation is straightforward with pip:
pip3 install flask
Now let’s go ahead and create the web server.  Later, we’ll move all of these scripts to their own clouds, but for now you can create it on the same machine on which you created camera.py.  
Create a new file called receiveframe.py and add the following code:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def docroot():
    return "This is the regional cloud."

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

This is perhaps the most basic web server you can build. We're creating the Flask object, in this case called app, and using a decorator to specify that for the route "/", or the main document root, we want to execute the docroot() function. (The function name is arbitrary; you can call it anything.) The function itself returns a simple text string.
Finally, we’re just specifying that we want the app to listen on all IPs, and on port 5000.  (The default is to listen on port 5000, but only from localhost.) Now if we run this script, we’ll see that it stays alive:
$ python receiveframe.py
* Serving Flask app "receiveframe" (lazy loading)
* Environment: production
  WARNING: This is a development server. Do not use it in a production deployment.
  Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
And then if we point a browser to it, we’ll see the text returned.

So that’s the simple version.  Now let’s create a “page” that takes a file upload.  You are probably familiar with uploading files from the browser; on the backend, we need to receive that file with a POST request:
from flask import Flask
from flask import request
from werkzeug.utils import secure_filename

app = Flask(__name__)

@app.route('/')
def docroot():
    return "This is the regional cloud."

@app.route('/check_image', methods = ['GET', 'POST'])
def check_image():

    if request.method == 'POST':
        frame = request.files['face_frame']
        filename = secure_filename("saved_"+frame.filename)
        frame.save(filename)
        return "Saved "+filename
    else:
        return "Not post."

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
First we’re importing the request object from the flask package, and then the secure_filename function.  This function does manipulation on filenames to prevent someone from hacking your server by uploading a file named, say, “../../../init.d”.  That’s not so important now, but it will be later when we’re taking the filename that’s submitted by an external routine.
Next we're creating a second URL to be served, /check_image, and specifying that it can accept both GET and POST requests. If the request is a POST request, we're reading the request.files array and pulling the parameter named face_frame. Note that this is an arbitrary name; as long as it matches what's being submitted, this will work.
Now let’s send a file to see this in action.  We could build an HTML form, but we’ll do it directly from the script just to make things simpler (and because we’re going to need to know how to do this anyway).  We’ll use the Python Requests library:
pip3 install requests
Now put an image file in the same directory and call it target.jpg, just so we have something to send, and add the following to the script:
from flask import Flask
from flask import request
from werkzeug.utils import secure_filename
import requests

app = Flask(__name__)

@app.route('/')
def docroot():
    files = {'face_frame': open('target.jpg', 'rb')}
    result = requests.post('http://localhost:5000/check_image', files=files)
    return result.text

@app.route('/check_image', methods = ['GET', 'POST'])
def check_image():

As you can see, we're simply creating a dictionary that maps the parameter name face_frame to the open file handle of the file we want to send. We're then sending that as the payload of a POST request.
Now if we go ahead and call the web server, it will make a request to the check_image URL, send the file and return the result text, which is what was returned by check_image():

If we check the filesystem, we can see the saved_target.jpg file:
$ ls
camera.py saved_target.jpg
haarcascade_frontalface_default.xml target.jpg
receiveframe.py
Now we’ve got the file, so we can look at whether it’s a “known” person.  The first thing we need to do is get the face_recognition library and the dlib library it depends on.  Install dlib and then get face_recognition with
pip3 install face_recognition
What face_recognition does is create a set of encodings of known faces, then compare an “unknown” face to those encodings and determine whether it matches any of the known people.  We won’t go into what a “match” is here in detail — you can check the documentation for more information — but keep in mind that this is not perfect. For example, these images match:

But with default settings, these are considered two different people, even though we as humans know they’re not:

I’ve created a knownusers directory of photos of people.  In this case they’re characters from Star Wars, because … Star Wars, but in the real application the user would submit people who are expected to be at the house, and the system would segregate these “known” individuals by user.
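Before moving on, note that if the default matching turns out to be too strict or too loose for your photos, compare_faces() accepts an optional tolerance argument (the library's default is 0.6; lower values are stricter). A minimal sketch:
results = face_recognition.compare_faces(known_faces, unknown_face_encoding, tolerance=0.5)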
We’ll start by reading that directory and creating encodings for those images:

import requests
import face_recognition
import os

images = []
known_faces = []
directory = 'knownusers'
for filename in os.listdir(directory):
    this_image = face_recognition.load_image_file(directory+"/"+filename)
    images.append(this_image)
    for face in face_recognition.face_encodings(this_image):
        known_faces.append(face)

app = Flask(__name__)

In this case we’re simply looping through all of the files in our knownusers directory and using those files to create an array of images; for each image, we’re also creating an array of face encodings.  Note that the code assumes that there can be more than one face in these “known” photos.
Now we’re ready to compare our target image:

@app.route('/check_image', methods = ['GET', 'POST'])
def check_image():

    if request.method == 'POST':
        frame = request.files['face_frame']
        filename = secure_filename("saved_"+frame.filename)
        frame.save(filename)
        try:
            unknown_image = face_recognition.load_image_file(filename)
            unknown_face_encoding = face_recognition.face_encodings(unknown_image)[0]
            results = face_recognition.compare_faces(known_faces, unknown_face_encoding)
            return "{}".format(not True in results)
        except Exception as inst:
            print("No face in "+filename)
            print(inst)
            return "False"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
We’ve already saved the target image to the filesystem, so now we can load it using the same routines we used to load the “known” images.  A couple of things to note here, regarding facial recognition. First, remember that the original images had some false positives; this routine eliminates those, but this may give us a situation in which the image has zero faces, and since we’re just looking at the “first” face in the image, that would cause an error.  To solve this problem, we’re enclosing the routine in a try/except block.
Second, when we run the comparison, it returns a list of boolean values stating whether the target image matches each known face. If the target matches a known face, the results list will contain True; so if all values are False, the person is a stranger, and we want to return True. (Got that?) And of course, if there are no faces, there are no strangers, so we're returning False.
Now if we run the script we’ll see True or False depending on whether the image is of a person in our knownusers directory.
So in this case, the picture — which was a picture of me — is of a stranger.  Now we just want to go ahead and report the stranger:

import os

def report_stranger(filename):
    print("Stranger!!!")

images = []

            results = face_recognition.compare_faces(known_faces, unknown_face_encoding)
            is_stranger = (not True in results)
            if is_stranger:
                report_stranger(filename)
            return "{}".format(is_stranger)
        except Exception as inst:
            print("No face in "+filename)

We’ll take care of the actual reporting process in the next step, but there’s one more thing we have to do:  connect the camera to this routine. In camera.py, add the following, just as we did before:
import cv2
import requests

i = 0
def send_image(face_img):
    global i
    i = i + 1
    if i % 10 == 0:
        filename = "saved" + str(i) + ".jpg"
        cv2.imwrite(filename, face_img)
        files = {'face_frame': open(filename, 'rb')}
        r = requests.post('http://localhost:5000/check_image', files=files)

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

We’re simply loading the file and sending it to the check_image routine.  In this case, we don’t care about the results, so we’re not even checking it.  
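If you did want camera.py to act on the result, one option (purely illustrative, not part of the original routine) is to read the response body, which is just the string "True" or "False" returned by check_image. Adding these lines right after the requests.post() call in send_image would do it:

        if r.text == "True":
            # Hypothetical follow-up: note locally that the server flagged a stranger
            print("check_image flagged a stranger in " + filename)
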
Make sure that the web server is running in one window and run camera.py in another.  In the webserver window, you should see the output telling you there is a “stranger”:
$ python3 receiveframe.py
* Serving Flask app "receiveframe" (lazy loading)
* Environment: production
  WARNING: This is a development server. Do not use it in a production deployment.
  Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
Stranger!!!
127.0.0.1 - - [04/Nov/2019 10:59:14] "POST /check_image HTTP/1.1" 200 -
Stranger!!!
127.0.0.1 - - [04/Nov/2019 10:59:16] "POST /check_image HTTP/1.1" 200 -
Stranger!!!
127.0.0.1 - - [04/Nov/2019 10:59:35] "POST /check_image HTTP/1.1" 200 -
No face in saved_saved80.jpg
list index out of range
127.0.0.1 - - [04/Nov/2019 10:59:37] "POST /check_image HTTP/1.1" 200 -
Stranger!!!
If you turn the camera so there are no faces, the output will stop, because nothing is being sent to the web server — that is unless there’s a false positive, such as the one shown here.  We can pull that up and see what the original routine thought was a face:

As you can see, there really IS a face there, but it’s too dark to recognize.  You can also see the non-faces marked in the blinds.
You can easily change this routine to send the false positives to another routine to help tune the machine learning models.
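As a minimal sketch of that idea (the false_positives directory is an assumption; create it before running), you could keep any frame that the Haar cascade flagged but face_recognition could not encode by changing check_image's except block:

        except Exception as inst:
            print("No face in " + filename)
            # Hypothetical: keep the frame for later inspection and model tuning
            os.rename(filename, "false_positives/" + filename)
            return "False"
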
Now that we’ve got these two routines hooked up, the last step is to do the actual reporting.
Sending notifications to the central cloud
We’re in the final stretch!  All we need now is a routine that will take care of the reporting. There are all kinds of options here, from tracking to sending an email; we’ll start with an email for now.
You’ll need an email address you can send from; if you use a Gmail account, make sure to turn on “less secure app access”.
From here, it’s a matter of using the email package:
import email, smtplib, ssl
from flask import request
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from flask import Flask

app = Flask(__name__)

@app.route('/sendmail', methods=['GET', 'POST'])
def sendmail():

    subject = "Stranger alert!"
    body = "Your camera has spotted an unidentified individual."
    sender_email = "edgeemaildemo@gmail.com"
    receiver_email = "user@example.com"
    # Note: input() prompts on the server console each time this route runs
    password = input("Type your password and press enter: ")

    # Create a multipart message and set headers
    message = MIMEMultipart()
    message["From"] = sender_email
    message["To"] = receiver_email
    message["Subject"] = subject
    message.attach(MIMEText(body, "plain"))

    text = message.as_string()

    # Log in to server using secure context and send email
    context = ssl.create_default_context()
    with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:
        server.login(sender_email, password)
        server.sendmail(sender_email, receiver_email, text)
    return "Sent."

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5050)

Here we’re once again creating a simple webserver (since we’ll ultimately be sending the photo of the stranger to the user).  We’re simply creating the message, then creating an SSL connection, logging into the server, and sending the mail.  
Note also that because we’re running this on the same computer as the receiveframe.py script, we’ve moved it to port 5050.
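One caveat: input() prompts on the server console, which is awkward if the process is running unattended. A common alternative (our suggestion, not part of the original script) is to read the password from an environment variable instead:

import os

# Assumes EMAIL_PASSWORD was exported before starting the server,
# e.g. export EMAIL_PASSWORD='your-app-password'
password = os.environ.get("EMAIL_PASSWORD")
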
If we run this webserver, then access it by calling the URL http://localhost:5050/sendmail, we'll get an email in our inbox:

Now we need to go ahead and add the image to it:
import email, smtplib, ssl
from flask import request
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from werkzeug.utils import secure_filename
import requests

from flask import Flask
app = Flask(__name__)

@app.route('/')
def docroot():
    files = {'stranger': open('target.jpg', 'rb')}
    r = requests.post('http://localhost:5050/sendmail', files=files)
    return str(r.text)

@app.route('/sendmail', methods=['GET', 'POST'])
def sendmail():

    subject = "Stranger alert!"
    body = "Your camera has spotted an unidentified individual."
    sender_email = "edgeemaildemo@gmail.com"
    receiver_email = "user@example.com"
    password = input("Type your password and press enter: ")

    if request.method == 'POST':

        # Create a multipart message and set headers
        message = MIMEMultipart()
        message["From"] = sender_email
        message["To"] = receiver_email
        message["Subject"] = subject

        message.attach(MIMEText(body, "plain"))

        # Save the uploaded image of the stranger
        f = request.files['stranger']
        filename = secure_filename(f.filename)
        f.save(filename)

        with open(filename, "rb") as attachment:
            part = MIMEBase("application", "octet-stream")
            part.set_payload(attachment.read())

        # Email has to be ASCII characters
        encoders.encode_base64(part)
        part.add_header(
            "Content-Disposition",
            f"attachment; filename= {filename}",
        )

        message.attach(part)

        text = message.as_string()
        # Log in to server using secure context and send email
        context = ssl.create_default_context()
        with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:
            server.login(sender_email, password)
            server.sendmail(sender_email, receiver_email, text)
        return "Sent."
    else:
        return "Not POST."

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5050)
Starting at the beginning, we're not doing anything new, just sending the target image to the sendmail routine. From there, we're saving the file just as we did before, but then we're creating a new MIME part for the email, adding the binary data to it, and base64-encoding it. We attach that part to the message and convert the whole message to text for sending.
Now if we call the main URL for the web server — http://localhost:5050 — we’ll still get a simple message of “Sent.” but if we check our email, the image is attached.

Now we just have to get the receiveframe.py script to send any stranger images to this routine using the report_stranger() function:
import requests
import face_recognition
import os

def report_stranger(filename):
    files = {'stranger': open(filename, 'rb')}
    r = requests.post('http://localhost:5050/sendmail', files=files)

images = []
known_faces = []

Now if we run the camera, we’ll get a bunch of emails showing frames from the camera:

If you take any of those photos and add it to the knownusers folder, you’ll see that you no longer get those emails.  (You’ll have to stop the receiveframe.py script and start it up again so that it picks up the new images.)
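If restarting the script every time gets tedious, one possible refinement (not part of the original walkthrough) is to wrap the directory scan in a function and expose a small route that re-runs it on demand:

def load_known_faces(directory='knownusers'):
    faces = []
    for filename in os.listdir(directory):
        image = face_recognition.load_image_file(directory + "/" + filename)
        for face in face_recognition.face_encodings(image):
            faces.append(face)
    return faces

known_faces = load_known_faces()

# Hypothetical route: re-scan knownusers without restarting the server
@app.route('/reload')
def reload_known():
    global known_faces
    known_faces = load_known_faces()
    return "Reloaded {} known faces.".format(len(known_faces))
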
Ok!  So that’s all our base functionality.  
Next up:  “cloudifying” the application
Now that we have all of our base functionality we can focus on turning this into an edge cloud solution.  In part 2, we’ll look at deploying Kubernetes clusters and creating containerized versions of these routines to run on them.
The post How to build an edge cloud part 1: Building a simple facial recognition system appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

Red Hat OpenShift or Red Hat OpenStack Platform: When to use What and Why?

Our original article about the differences between Red Hat OpenShift and Red Hat OpenStack Platform still gets a lot of Web traffic, despite it being seven years old. We thought it was time to revisit the topic of differences between OpenShift and OpenStack. To start, let’s look at the advantages that VMs offered over traditional, legacy hardware solutions.
The Virtualization Revolution
When virtualization came onto the scene, it took the familiar concept of an operating system running an application and made it more manageable, more scalable, and therefore more efficient by letting new applications be added to existing hardware. By deploying an IaaS platform, such as Red Hat OpenStack Platform, VMs can be run natively to meet the hyperscale demands of massive network providers, and can even provide Bare Metal as a Service for high performance applications, without the usual management complexity. Natively, Red Hat OpenStack Platform runs VMs and bare metal, but when paired with Red Hat OpenShift, it can also handle containers (more on that later). Consider this platform the most prevalent model, powering the majority of workloads.
The Container Evolution
Containers are self-"contained," meaning they don't each need their own operating system; as a result they are much more agile, easier to maintain and iterate upon, and easier to bring to market. Because of this simplicity, containers can be deployed almost anywhere, from low-powered edge devices to cloud service providers to core data centers. It also makes the underlying infrastructure less relevant, because containers are designed to be resilient at the application layer rather than relying on infrastructure for high availability.
A container platform like Red Hat OpenShift lets these containerized applications run on the infrastructure they are best suited for while maintaining portability. If data locality or compliance is important, containers can be run on-prem. If you don't have a data center, you can run them in a public cloud. Combined with their agility, the flexibility of containers opens many doors for IT. Red Hat OpenShift offers a consistent user experience regardless of the infrastructure, making it easier to deploy containers anywhere natively.
What can Red Hat do to help?
Red Hat OpenStack Platform can handle today's virtual machines and bare metal systems at scale, and with a single management interface. Red Hat OpenShift offers container-based systems management which can be layered on top of Red Hat OpenStack Platform, VMware vSphere, or your cloud service provider's infrastructure. This is the hybrid model that makes the transition from VMs to containers a reality, one that will be gradual as more applications arrive as containers. As more layers of abstraction are added, granting greater flexibility and resilience, the importance of the underlying infrastructure continues to decline, allowing IT to focus more on applications, updates, and driving business, not maintenance and management.
Where is Red Hat going?
Containers are the future. No more guest OSes, no more system configuration drift, no more divergent environments – we are going where you are going. Back in 2012 when we first spoke of the differences between OpenShift and Red Hat OpenStack Platform, this transition was only just beginning. Today, the migration from virtual machines to containers is a large part of the often discussed Digital Transformation enterprises are so excited about. Using Red Hat OpenShift and Red Hat OpenStack Platform together enables that transition to take place on your own schedule, at your own pace. Red Hat is here to help you choose the right speed for that change – whether you want to go full bore with a full stack, or focus on an application-based approach, making the move one virtual machine at a time.
The post Red Hat OpenShift or Red Hat OpenStack Platform: When to use What and Why? appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Microsoft SQL Server 2019 Takes Full Advantage of OpenShift and Linux Containers

If you’re an old school computer user, you could be forgiven for thinking it crazy that Microsoft SQL Server runs on Linux. Since the 2017 release of that enterprise database, Red Hat Enterprise Linux has been a supported platform. Yet, this strategic coming together of two former competitors isn’t over yet.
Microsoft SQL Server 2019 offers a host of quality-of-life improvements for users. These include faster query processing, in-memory database improvements and a new feature Microsoft is calling Big Data Clusters. 
Big Data Clusters is a fairly direct name for some truly impressive capabilities being embedded into MS SQL Server. Rather than pushing yet another database into your ecosystem, alongside dozens of others most likely, MS SQL Server 2019 will be able to act as a sort of data gateway for all of those other datastores. The goal is to allow a query written in Microsoft's TSQL to be spread out to other systems that may use other variants of SQL, such as Oracle's PLSQL. Thus, developers will be able to write all of their database queries in TSQL, yet have them parsed by MongoDB, Teradata or Oracle, among other planned target databases.
This enables developers to focus on a single query language, while still being able to take advantage of some of the benefits of other database types. That means all the powerful machine learning features inside SQL Server 2019 can be brought to bear on outside data sets. If you've already got Spark and Hadoop, however, SQL Server 2019 can talk to those systems as well.
Vin Yu, program manager at Microsoft, said that, “We want customers building apps with SQL Server. If they are familiar with using SQL Server, they can continue to build their apps. What’s challenging is [having] the right database components and multiple endpoints. You can connect to SQL Server and connect to other datastores. You don’t have to know the different semantics of SQL,” said Yu. This particular feature is known as PolyBase.
Yu said this type of interaction is enabled by the transition to container-based operations. While traditional thinking has held that databases don’t run well in containers, primarily due to storage and high availability concerns, Yu said that SQL Server made the jump to containers easily, thanks to the existing work that had been done to port the database to Linux.
“When we support RHEL 8, we will also have SQL Server 2019 on  RHEL 8 containers coming out. From our perspective that was the biggest enabler. From then on, it’s like packaging any app in a container,” said Yu, highlighting the timeline for the planned addition of RHEL 8 support sometime next year.
The work at Microsoft to enable SQL Server 2019 on Red Hat OpenShift is also focused on building out a High Availability Kubernetes Operator for the database. Kubernetes Operators codify human knowledge by automating the deployment and lifecycle management of applications (including databases) running on OpenShift.  “This operator we’re working on for Big Data Clusters is really going to be focused on setting up HA. This is one of the biggest pain points for any enterprise customer, because it involves manually setting up multiple SQL Servers. We’re taking a huge pain point away,” said Yu.
The operator pattern of deploying services inside a Kubernetes cluster was appealing to Microsoft because it enables developers to be their own administrators, rather than monopolizing database administrators with the day-to-day task of building out infrastructure for projects.
"If you look at how databases were deployed even 10 years ago, you had to have a specialist install each individual machine. A DBA's job was not just to support the databases, but to help developers. Every time a developer messed up, a DBA had to come fix it. Containers allow DBAs to package a database, and anyone in the company can deploy this with the single click of a button. That's one area containers solve a problem."
And this is why Microsoft has begun to implement features in MS SQL Server that cannot be found outside of a containerized environment. “You can only use Big Data Clusters in a containerized deployment,” said Yu. “There is no other way. With SQL Server, Microsoft is making bets on the container ecosystem as a whole. In order to use specific features, you have to be using containers. In terms of ease of use over time, we’re going to see this evolve in a more mature fashion. One thing we always talk about is that containers are the new VMs. It took many years to adopt VMs, and people argued if they were the right choice. Now they’re the standard. Instead of questioning [Containers], we’re making a bet and jumping onto it early.”
SQL Server 2019 is now generally available to run natively on RHEL 8 or as a RHEL-based container image.  Support for Microsoft SQL Server 2019 Big Data Clusters should arrive on OpenShift platforms in early 2020.
The post Microsoft SQL Server 2019 Takes Full Advantage of OpenShift and Linux Containers appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Ansible Operators October 2019 Update

During this month’s Operator Framework SIG Meeting, I presented an update on the latest happenings with Ansible Operators (slides here). I touched on a few topics that I wanted to share with the greater universe.

Ansible Operators are being demonstrated as a way to bring existing automation into Kubernetes
Report from AnsibleFest Atlanta 2019 Workshop
Mcrouter Operator powered by Ansible Operator
GitHub Project Board

What is an Operator?
Operators are a design pattern made public in a 2016 CoreOS blog post. The goal of an Operator is to put operational knowledge into software. Operators implement and automate activities in a piece of software running inside your Kubernetes cluster, by integrating natively with Kubernetes concepts and APIs. We call this a Kubernetes-native application.
Webinar: Building Kubernetes Operators in an Ansible-native way
Tim Appnel and Chris Short walk through Kubernetes, Operators, and how using Ansible Operators lowers the barrier to entry to automating with Kubernetes.

Ansible Operators bringing existing automation to Kubernetes
In general, the Operator Framework, Kubernetes, and Ansible are a match made in heaven. They use YAML, they play a distinct part in the automation landscape of the future, and are all open source.
What Ansible Operators enable is the ability to take an Ansible Role (even one from Ansible Galaxy) that targets Kubernetes, and with almost no extra effort turn it into a Kubernetes Operator. This means that your Ansible content can be deployed into a Kubernetes cluster, and will trigger in reaction to events that Kubernetes emits.
In general, one thing I've noticed with Ansible Operators is that a lot of people have a good understanding of Kubernetes primitives and how they work together. But few of those people are Go developers. Operators, like most of the cloud native ecosystem, are traditionally written in Go. The Operator SDK's inclusion of Ansible helps a much larger group write Operators; it seems more people can write Ansible than Go.
Building Kubernetes Operators with Ansible Hands-on Workshop
I mentioned most of this in my AnsibleFest Atlanta 2019 Trip Report but here are the big takeaways from the presentation:

Sold out! All 100 registrations taken.
138 seats in room; about a dozen or so folks filled in
Workshop content is public now: http://workshop.coreostrain.me

But the most important part of this section was thanking the team of folks involved in getting an Ansible Operators workshop to AnsibleFest Atlanta. Huge thank you to Michael Hrivnak, Shawn Hurley, Fabian von Feilitzsch, Melvin Hillsman, Jeff Geerling, Tim Appnel, and Matt Dorn.
Mcrouter Operator powered by Ansible Operator
The hardest problem we have with Ansible Operators is how to demonstrate a very powerful tool for distributed systems in a time-boxed scenario. Trying to explain Kubernetes and Operators to a room full of Ansible folks with varying degrees of Kubernetes experience was the challenge laid out to the team before AnsibleFest Atlanta.
Tim Appnel found a good use case to demonstrate the capabilities of an Ansible Operator that's not too complex but does allow for data to enter a system instantiated by an Ansible Operator: Mcrouter

Mcrouter is a memcached protocol router for scaling memcached deployments. It’s a core component of cache infrastructure at Facebook and Instagram where mcrouter handles almost 5 billion requests per second at peak. Mcrouter is developed and maintained by Facebook.

Matt Dorn, Jeff Geerling (who did a marvelous Mcrouter Operator write-up), and Melvin Hillsman got it working for AnsibleFest. It is now part of the learn.openshift.com Katacoda site for you to get up and running live in a web browser: Mcrouter Operator powered by Ansible Operator
Ansible Operator Community Project Board
The team is going to be using a GitHub Project Board (that I still need to populate) to track work that the group is doing. The goal is for asynchronous work from distributed groups to be able to occur and be tracked from one place.
If you're working on an Ansible Operator and are interested in getting help, publishing it on OperatorHub.io, or just want me to talk about it, please feel free to drop me a line. We're always interested in hearing how folks are building things.
Want to Learn More?
Join the Operator Framework Special Interest Group to get up and running in the Operator Framework community.
OperatorHub.io is a home for the Kubernetes community to share Operators. Find an existing Operator
The post Ansible Operators October 2019 Update appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Recap: OpenShift Commons Gathering on AI and ML – San Francisco [Slides and Videos]

It's a wrap! The OpenShift Commons Gathering on AI and Machine Learning took place on Oct 28th in San Francisco, co-located with ODSC/West.
The OpenShift Commons Gathering on AI & ML at ODSC/West featured production AI/ML workload case study talks from Discover.com and ExxonMobil, deep dives into Red Hat's OpenDataHub.io initiative and much more!
The OpenShift Commons Gathering on Artificial Intelligence and Machine Learning in San Francisco at ODSC/West  brought together data scientists and Kubernetes experts from all over the world to discuss the container technologies, operators, the operator framework, best practices for cloud-native application developers and the open source software projects that underpin the OpenShift ecosystem to help take us all to the next level in delivering cloud-native computing resources for AI & ML workloads. This gathering featured data scientists, developers, project leads, cloud architects, operator builders, sysadmins, and cloud-native practitioners coming together to explore the next steps in making container technologies successful and secure at scale.
Here are the slides and videos from the proceedings:

Welcome & Cross-Community Collaboration in Action
Diane Mueller (Red Hat)
Slides
Video

Keynote: Baking AI Ethics into Your AI Infrastructure Today
Daniel Jeffries (Pachyderm)
Slides
Video

Open Data Hub: Machine-Learning-As-A-Service Platform Deep Dive
Sherard Griffin (Red Hat)
Slides
Video

Delivering On-Demand Analytics Environments for Data Scientists at Discover
Brandon Harris and Anirudh Pathe (Discover Financial Services)
Slides
Video

Deep Learning Workloads with NVIDIA GPUs on OpenShift
Mehnaz Mahbub (Super Micro) | Mayur Shetty (Red Hat)
Slides
Video

ML/AI and Operators Case Study: Databricks, Azure and Kubernetes
Azadeh Khojandi and Jordan Night (Microsoft)
Slides
Video

Panel: Building Kubernetes Operators for AI & ML Workloads – moderator: Sherard Griffin (Red Hat)
Azadeh Khojandi (Microsoft), Pramod Ramarao (NVIDIA), Kamil Bajda-Pawlikowski (Starburst Data), Sunny Siu (ProphetStor), Ryan Dawson (Seldon), Gopal Krishnan (CognitiveScale)
N/A
Video

Lightning Talk: Diamanti HCI for OpenShift
Hiral Patel (Diamanti)
Slides
Video

Lightning Talk: Hybrid/Multi Cluster Storage
Erin Boyd (Red Hat)
Slides
Video

Lightning Talk: Exascale Schoal Architecture in Ceph
Kyle Bader (Red Hat)
Slides
Video

OpenShift and Machine Learning at ExxonMobil
Cory Latschkowski (ExxonMobil)
Slides
Video

Big Fast SQL with Presto on OpenShift
Kyle Bader (Red Hat),  Kamil Bajda-Pawlikowski (Starburst Data)
Slides
Video

AIOps on OpenShift
Sunny Siu (ProphetStor), Tushar Katarki (Red Hat)
Slides
Video

Panel: The Future and Ethics of AI – moderator Erin Boyd (Red Hat)
Alex Housely (Seldon), Daniel Riek (Red Hat), Frederick Kautz (Doc.ai)
N/A
Video

AMA Panel – Red Hat Project Leads, PMs, Data Scientists & Engineers – moderated by Julio Tapia (Red Hat)
Multiple Panelists – Live Q/A with Audience
N/A
Video

Road Ahead for AI/ML on OpenShift
Diane Mueller (Red Hat)
Slides
Video

 
To stay abreast of all the latest releases and events, please join the OpenShift Commons and join our mailing lists & slack channel.
This OpenShift Commons Gathering took place concurrently with ODSC/West, 2019 in San Francisco. It featured some of the brightest technical minds in cloud technology discussing the future of OpenShift and its related upstream open source projects. With OpenShift Container Platform quickly gaining adoption around the world, the OpenShift Commons Gathering offered talks from upstream project leads, and case studies from users across multiple industries and use cases. This event also included face-to-face meetings for all the OpenShift Commons Special Interest Groups and allowed ample time for peer-to-peer networking.
Want to learn more about AI and Machine Learning on OpenShift?
OpenShift Commons Machine Learning SIG monthly meetings are public and open to all.  Our next meeting will be on November 1, 2019; meeting details and agenda are here!
To join the OpenShift Commons ML SIG and get on the mailing list for future events, briefings and SIG meetings, sign up here: https://commons.openshift.org/sig/OpenshiftMachineLearning.html
What is OpenShift Commons?
Commons builds connections and collaboration across OpenShift communities, projects, and stakeholders. In doing so we’ll enable the success of customers, users, partners, and contributors as we deepen our knowledge and experiences together.
Our goals go beyond code contributions. Commons is a place for companies using OpenShift to accelerate its success and adoption. To do this we’ll act as resources for each other, share best practices and provide a forum for peer-to-peer communication.
Join OpenShift Commons today!
The post Recap: OpenShift Commons Gathering on AI and ML – San Francisco [Slides and Videos] appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift