Mirantis to Democratize Connectivity with Magma, a Converged Access Gateway Developed by Facebook

The post Mirantis to Democratize Connectivity with Magma, a Converged Access Gateway Developed by Facebook appeared first on Mirantis | Pure Play Open Cloud.
Open-source Magma is integrated with Mirantis Cloud Platform to give network operators an open, cost effective foundation for next generation mobile networks 
Magma Developer Conference, Menlo Park, CA, September 9, 2019 — Today, Mirantis announced that it is helping to bring the open-source converged access gateway software platform Magma, developed by Facebook, to mobile operators around the world. Mirantis has worked over the last six months to integrate, test and certify Magma with Mirantis’ Kubernetes-based infrastructure edge offering, called MCP Edge.

Unlike the core infrastructure that is generally uniform and centralized, edge infrastructure consists of many points of presence with architecture varying as a function of proximity to the end user and the type of application. MCP Edge integrates OpenStack, Kubernetes and Mirantis’ flexible infrastructure manager, DriveTrain, empowering operators to deploy a combination of container, VM and bare metal points of presence (POPs) connected by a unified management plane.

Magma is an open-source software platform designed to integrate seamlessly with a mobile network operator’s existing evolved packet core (EPC) back end and extend it with new capabilities at the network edge, such as carrier Wi-Fi or EPC-as-a-service. Operators can run Magma’s centralized, cloud-based controller on a public or private cloud environment and start with just a single site.

“New entrants into the mobile operator space like Reliance Jio have been able to quickly capture huge market share by building networks at 20% of the cost of traditional players and then passing these savings on to consumers,” said Boris Renski, co-founder and CMO, Mirantis. “Network virtualization and open source building blocks are key to achieving these savings. Mirantis has already helped some of the biggest operators virtualize their network infrastructure using open standards. Offering services and support for Magma will help us take this further and help mobile operators launch new edge and 5G services cost effectively.”

The certified combination of MCP Edge and Magma converged access gateway will:

    Enable operators to manage their networks more efficiently with more automation, less downtime, better predictability, and more agility to add new services and applications
    Enable federation between existing mobile network operators and new infrastructure providers for expanding rural infrastructure
    Allow operators who are constrained by licensed spectrum to add capacity and reach by using Wi-Fi and CBRS

“In order to bring affordable internet access to underserved areas of the world, we work to empower telecom companies and vendors with carrier grade, open source software for building next generation mobile networks,” said Amar Padmanabhan, Magma team lead at Facebook. “Mirantis’ experience and capabilities in building open source-based carrier networks make them an excellent partner to collaborate with on this journey.”

This news comes on the heels of Mirantis’ recent collaboration with core contributors of the Airship community by integrating much of the code into Mirantis Cloud Platform (MCP). Airship takes advantage of Kubernetes to define a unified, declarative and cloud-native way for operators to manage containerized software delivery of cloud infrastructure services.

If you are interested in learning more about Mirantis’ involvement in the Magma project, Mirantis will be hosting a webinar on September 24th at 10am. You can register here.
About Mirantis
Mirantis helps enterprises and telcos address key challenges with running Kubernetes on-premises with pure open source software. The company employs a unique build-operate-transfer delivery model to bring its flagship product, Mirantis Cloud Platform (MCP), to customers. MCP features full-stack enterprise support for Kubernetes and OpenStack and helps companies run optimized hybrid environments supporting traditional and distributed microservices-based applications in production at scale.

To date, Mirantis has helped more than 200 enterprises and service providers build and operate some of the largest open clouds in the world. Its customers include iconic brands such as Adobe, Comcast, Reliance Jio, State Farm, STC, Vodafone, Volkswagen, and Wells Fargo. Learn more at www.mirantis.com.
Contact information:
Joseph Eckert for Mirantis

jeckertflak@gmail.com
Source: Mirantis

OpenStack vs AWS Total Cost of Ownership: Assumptions behind the TCO Calculator

The post OpenStack vs AWS Total Cost of Ownership: Assumptions behind the TCO Calculator appeared first on Mirantis | Pure Play Open Cloud.
It’s easy to think that the least expensive way to spin up cloud servers is to use a service such as Amazon Web Services. After all, you don’t need to buy hardware or electricity, or even hire people to manage it, right? Well … not so fast. It’s not as simple as that. While there are lots of situations in which AWS is the least expensive option, there are times when it’s less costly to bring your workloads on-prem with an OpenStack deployment.
Here at Mirantis, we’ve created a Total Cost of Ownership calculator that compares the two.  For example, if you have 300 virtual machines, AWS can be more cost-effective, but once you get to 400, OpenStack is the way to go.  But before relying on any TCO calculator, it’s important to know the assumptions behind it so that you can understand how those determinations are made.
In this article, we’ll explain how the TCO calculator comes to its conclusions, and you can feel free to download the full spreadsheet yourself to take a look at how it all fits together.
Common assumptions
Whether you’re using AWS or OpenStack, there are certain things you need to take into consideration.  For example, just because you’re using a cloud doesn’t mean that you don’t need staff, and it doesn’t mean that VMs will always be 100% occupied — and all of that will impact your costs. 
In our case, we’re making the following general assumptions:

Average data-out from the cloud per VM is 1 TB/month, or 385,802 bytes/sec
Each compute node hosts an average of 28 VMs
Each VM needs an average of 20 GB of block storage (or 560 GB per server)
Storage is provided with a ratio of raw:usable space of 3.3, allowing for redundancy, and so on
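These figures hang together arithmetically; here is a quick sanity check in Python (illustrative only, not the official calculator spreadsheet):

```python
# Reproduce the arithmetic behind the common assumptions above.
TB = 10**12                      # decimal terabyte, in bytes
SECONDS_PER_MONTH = 30 * 24 * 3600

# 1 TB/month of data-out per VM, expressed as bytes per second
data_out_bps = TB / SECONDS_PER_MONTH
print(round(data_out_bps))       # ~385,802 bytes/sec

# Per-server block storage: 28 VMs x 20 GB each
vms_per_node = 28
gb_per_vm = 20
usable_gb_per_server = vms_per_node * gb_per_vm
print(usable_gb_per_server)      # 560 GB usable

# With a raw:usable ratio of 3.3, raw capacity needed per server
raw_gb_per_server = usable_gb_per_server * 3.3
print(round(raw_gb_per_server))  # 1,848 GB raw
```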

Efficiencies
It’s important to understand that whether you run in the private cloud/on-prem or in the public cloud, there is going to be some level of inefficiency.  For example, we’re assuming that at any given time, only 60% of your OpenStack cloud hardware is in use, but because it’s private cloud/on-premise, you’re still paying for it.  
You don’t have this problem with AWS, because you can always just stop instances you’re not using, but that doesn’t mean you will always run at 100% efficiency. The TCO calculator assumes 80% efficiency for AWS instances, taking into account that there will be times instances are simply too big for the workloads running on them (but you still pay full price for the VM).
Personnel
Staffing may not seem like an issue common to both on- and off-prem clouds, but it is.
For OpenStack clouds, we’re assuming that you will need an OpenStack admin team of at least two people, because your cloud has to be up 24/7. We’re also assuming that one admin can handle up to 50 individual servers. Once you get over 200 servers, we’re assuming that allowing for “fractional” staff is acceptable.
For AWS, you’ll still need at least one AWS internal administrator to support users who are in the cloud.
We’re assuming a full-time admin costs $140,000/year. Now let’s look at some assumptions that are specific to each platform.  
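As an aside, the staffing rules above can be sketched as a simple cost function. This is our reading of the assumptions, not the spreadsheet’s exact formula:

```python
import math

ADMIN_SALARY = 140_000        # $/year, per the assumption above
SERVERS_PER_ADMIN = 50
FRACTIONAL_THRESHOLD = 200    # above this, fractional staff allowed

def openstack_admin_cost(servers: int) -> float:
    """Annual OpenStack admin cost: one admin per 50 servers,
    whole headcounts below 200 servers, fractional above,
    and never fewer than two admins for 24/7 coverage."""
    admins = servers / SERVERS_PER_ADMIN
    if servers <= FRACTIONAL_THRESHOLD:
        admins = math.ceil(admins)      # whole people only
    admins = max(2, admins)             # 24/7 on-call minimum
    return admins * ADMIN_SALARY

print(openstack_admin_cost(80))    # 2 admins -> 280000
print(openstack_admin_cost(180))   # 4 admins -> 560000
print(openstack_admin_cost(300))   # 6.0 fractional -> 840000
```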
OpenStack-specific assumptions
Let’s start with OpenStack.  If you’re running an on-prem OpenStack cloud, you’ll have expenses that won’t exist for the public cloud, such as hardware and electricity.
Hardware and software:
The TCO calculator assumes that each physical server consists of:

System: Intel 2U R2312WT Wildcat-pass Server System
Dual Intel® Xeon® E5-2690v4 14 Core 2.60GHz 35MB Cache
256GB LRDIMM DDR4 2400
2.5TB SSD (5 x Micron M600 512GB SATA 2.5″ SSD)
2 x GE
3-year warranty

For storage we’re assuming each server consists of the following:

SuperStorage Server 6028R-E1CR12L
Single Intel® Xeon® processor E5-2620 v3, 6-Core 2.4G 20M 8GT/s QPI
64GB RAM
Storage: 12 x 3.5″ 6TB SATA3 HDDs + 1 x 800GB NVMe SSD, plus rear 2.5″ hot-swap OS drives (mirrored 80GB SSDs)
Dual-port 10G SFP+ via AOC-STGN-I2S
2U w/ redundant Hot-swap 920W power supplies
3-year warranty

We’re also assuming a 30% discount off hardware list prices. We’ll depreciate the hardware over 3 years, and assume it needs to be replaced after 4 years.
We also need to take into consideration a number of other factors, including:

Bandwidth costs in and out of the datacenter
$19.00 per Mbps/year

H/W & bandwidth price degradation/decline year over year
10%

Admin cost reduction due to automation year over year
5%

Ratio of compute nodes to OpenStack Controller nodes
25

Power cost ($/kWh)
$0.10

Networking overhead (End of Row switches, WAN connectivity) as a % of server hardware acquisition cost
10%

Networking annual maintenance/support as a % of network hardware and software costs
15%

Networking admin cost as a % of total IT admin effort
8%

Rack annual maintenance/support as % of Rack costs
15%

Average power/rack
10.0 kW

2 x 10 GbE Top of Rack (ToR) switches + 2 x power distribution units (PDU) + rack chassis + 6 x fiber optic cables + 42 x Cat5 cables
(https://www.fs.com/c/10g-switches-3256 has cheaper switches)
$16,698

Monthly cost to operate a rack
$1,500

Install cost per rack
$2,000

Lab overhead per node (for testing/staging) / year
$50

Storage server raw storage
72,000 GB (12 x 6 TB)

 
AWS-specific assumptions
Similarly, there are costs that are specific to an AWS implementation. Some, such as Enterprise Support, have corresponding costs on the OpenStack side. Others, such as cost differentials for reserved instances, are specific to AWS. The TCO calculator assumes the following:

AWS machine selected (2 vCPU, 8 GB memory)
m4.large

AWS price  & inter-datacenter pipe costs degradation/decline year over year
3%

Number of LBaaS
1

% of AWS VMs using reserved instances
30%

% Discount on reserved instances (typical range 31-60%)
47%

Direct links to AWS
$24,000 per year

Inter-region data transfer/month in terms of the data xfer out
1

Enterprise support as a % of spend on AWS services
4%

AWS support team (internal)
1.00 Full-Time Equivalent

AWS Discount Tier
14%
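To see how the reserved-instance assumptions combine, here is an illustrative sketch. The $0.10/hour on-demand rate is a placeholder for the sake of the example, not a quoted AWS price:

```python
def blended_hourly_rate(on_demand_rate: float,
                        reserved_share: float = 0.30,
                        reserved_discount: float = 0.47,
                        tier_discount: float = 0.14) -> float:
    """Blend on-demand and reserved pricing (30% of VMs reserved at a
    47% discount), then apply the 14% AWS discount tier, mirroring the
    assumptions listed above."""
    reserved_rate = on_demand_rate * (1 - reserved_discount)
    blended = (reserved_share * reserved_rate
               + (1 - reserved_share) * on_demand_rate)
    return blended * (1 - tier_discount)

# Hypothetical on-demand rate of $0.10/hr for an m4.large-class VM:
print(round(blended_hourly_rate(0.10), 4))   # ~0.0739, i.e. ~26% below list
```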

 
So what’s best?
OK, so what does the TCO analysis tell us?  Which is a better choice from a financial standpoint? As with so many things, it depends on your situation.  For example, AWS is generally less expensive if you need multiple data centers — unless you have a large number of VMs, in which case OpenStack is less expensive. So do yourself a favor and plug your information into the TCO calculator and find out. Please let us know what you find!
Source: Mirantis

IBM Aspera helps media and entertainment companies push the limits of the cloud

As the growth in video production, content generation and media storage continues to explode throughout the world, advanced cloud-native transfer technologies have become absolutely essential to the daily operations of many teams in the media industry. IBM Aspera is responding with product updates designed to enable next generation cloud media workflows. This year at the International Broadcasting Convention (IBC), IBM Aspera will be highlighting its new containerized software architecture optimized to run on Red Hat OpenShift.
Rapid adoption of the IBM Aspera on Cloud service continues with well over 500 percent annual growth in data transferred through the platform since its 2018 launch. In anticipation of continued growth and more advanced workflows migrating to the service, we are demonstrating new automation functionality and a newly released open source framework that offers developers the ability to add an additional layer of security to media asset exchange.
Read on to learn more about these updates and stop by our booth in Hall 7, Booth B.25, for a demonstration and to learn how customers benefit from IBM Aspera across a wide variety of use cases, including how our streaming technology powered 100 percent remote editing of the 2019 Women’s World Cup. For a more detailed review, schedule a meeting with one of our team members.
IBM Aspera highlights for the IBC Show 2019
During IBC Show 2019, IBM Aspera will showcase its latest advancements across the following areas:
1. New hybrid multicloud workflow automation
Organizations across media and entertainment need to enable and expand cloud-based workflows that seamlessly connect hybrid environments. With the new automation app now available to early access clients of IBM Aspera on Cloud, users can streamline content delivery with recurring or event-based transfer workflows using an easy-to-use graphical workflow designer tool.
Additionally, organizations using IBM Aspera Orchestrator can now use a new plug-in for IBM Watson Language Translator that works in collaboration with other available Watson plug-ins for speech-to-text, text-to-speech and video enrichment services. This translation plug-in can translate text in many languages using Watson neural machine translation capabilities to improve the speed and accuracy of text translation.
2. More secure exchange of media assets
In order to facilitate secure asset movement through multicloud architectures, IBM Aspera is partnering with other industry-leading technology providers to build a reference implementation of a blockchain-based distributed ledger and establish a digital asset trust network (DATN). Together with BeBop Technologies, Breaker.io, Irdeto, Linius and NECF, IBM Aspera is releasing an open source project that allows developers to establish an immutable chain of custody for media assets. When implemented, this digital asset trust framework (DATF) will add an additional layer of security to content exchange and enable smart contract execution logic that can trigger automated file transfers based upon business requirements. The integration of DATF tools results in mitigated risks and increased levels of collaboration and productivity, enabling all partners to co-create value faster in modern, highly distributed cloud-based environments.
Additionally, IBM Aspera on Cloud customers can request early access to an integration with the Irdeto forensic watermarking service, which streamlines tracking and protection of proprietary content sent with IBM Aspera. Irdeto is an established leader in rapid identification and disruption of piracy attempts, using the reach and scale of public cloud in order to track down content breaches and halt unauthorized distribution supply chains.
3. Cloud-native architecture
IBM Aspera is also continuing to innovate and modernize our core technology. With the latest release of IBM Aspera High-speed Transfer Server, customers can now transfer and share their content using IBM Aspera containerized, scalable software certified to run on Red Hat OpenShift and available as part of the IBM Cloud Pak for Integration.

4. Additional IBM Aspera portfolio enhancements
Additional enhancements have been implemented across the IBM Aspera application suite.
IBM Aspera Orchestrator. The latest release of Aspera Orchestrator includes security, API and workflow performance advancements as well as several new third-party plug-ins. We will also be previewing designer usability enhancements at the IBC conference.
IBM Aspera Faspex. The latest release of IBM Aspera Faspex includes new functionality designed to enhance security, performance and administration. The new Out of Transfer File Validation (OTFV) feature introduces significant performance gains for large file transfers. Sender Quotas are another new administrative tool that provides the ability to limit transfer volumes by user. Additionally, we will demonstrate a new stand-alone HTTP Gateway for deployment in restrictive environments and two-factor login authentication using SMS integration.
IBM Aspera Streaming for Video. Finally, a recent beta of IBM Aspera Streaming for Video includes new capabilities that enable full bi-directional communication and flexible substitution for TCP (transmission control protocol) across an even wider variety of deployment environments to support additional streaming use cases. The team is also showcasing an easy-to-use web-based management application for IBM Aspera Streaming for Video that provides device auto-discovery and monitoring.
Connect with IBM Aspera
IBM Aspera is bringing new containerized software architecture optimized to run on Red Hat OpenShift, new automation functionality and a newly released open source framework to help media and entertainment companies bring new experiences to consumers.
If you are able to attend the show, you can schedule a meeting with IBM Aspera. Or, learn more about IBM Aspera.
The post IBM Aspera helps media and entertainment companies push the limits of the cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

Technology evolution and market transformation

It’s no secret that the pace of innovation and change has accelerated rapidly over the last few years. From the explosion of social media to online shopping and apps to track your driver, digital transformation has quickly become an integral part of daily life. In the last year alone, DataReportal reports there are over 4 billion internet users, up nine percent from 2018 to 2019.
The IT industry is also undergoing a rapid transformation. The integration of digital technology into all areas of business has quickly created new business models and ways of working to challenge traditional systems. New business models are forcing companies to review their strategies and processes in order to remain competitive and deliver a higher level of service to their customers.
IBM values our customers and understands that today’s business challenges have never been more complex. We know that businesses need to get the most from their software investments. As a result, we offer a set of support and subscription offerings that empowers businesses to maintain, manage and deliver software at the highest possible service level through IBM Software Subscription and Support (IBM S&S).
IBM Software Subscription and Support puts client business needs first, offering security, expertise and unrivaled technical support. More importantly, it offers a higher return on IBM software investment, through incremental enhancement, fixes and security patches. As well as allowing customers to capitalize on new features and functions that will increase performance and productivity, IBM Software Subscription and Support also reduces overall software acquisition costs, saving clients time and money while vastly reducing risks.
Maximizing ROI with IBM Software Subscription and Support
Software company, Lodestar Solutions is helping companies build successful analytics strategies. Heather Cole, Lodestar Solutions CEO, lays out winning strategies that all IBM Cognos Analytics and Cognos Planning Analytics users should take to maximize return on their analytics investments.
Her tips include taking four strategic actions: roadmapping, annual license reviews, upgrading and renewing IBM Software Subscription and Support. Together these four basic strategies help users optimize performance and add new AI capabilities. They also enhance core reporting and dashboarding functionality and improve integration to extract better insights.
With IBM Software Subscription and Support, Lodestar Solutions is able to take advantage of individual support assets and deploy IBM software faster, with less effort, saving time and money for users and IT. Faster software deployment with ease of use means an accelerated business ROI.  
Putting business needs first
IBM software support offerings address the most challenging business needs of clients across all industries, large and small, helping them to spend less time talking to us, and more time concentrating on their business. Our support includes preventive care from product fixes, security patches and updates, as well as aggressive problem resolution through direct access to our deeply skilled, industry-leading technical support professionals.
Industry-leading security updates are built into IBM products and updated regularly with each new release and version. This helps businesses avoid downtime and security breaches. And with embedded cognitive features from IBM Watson, we offer target response times starting at two hours, depending on the severity level.
IBM and our valued authorized Business Partners are the only suppliers in the market that can provide critical version upgrades or security patches to IBM programs. Alternative offerings often focus on other support activities and can never substitute for the critical program upgrades that are part of IBM Software Subscription and Support.
Protecting technology investments
IBM helps provide clients with genuine investment protection through professional, expert software support. All customers who have active in-house IBM Software Subscription and Support have access to this service. Clients also benefit from access to trusted IBM partners who offer level one to level three support depending on the solution and provide best-in-class solutions for highly customized offerings. Performance enhancements reduce the cost to build, run and maintain IBM software by securing fewer hardware purchases needed for more capacity.
Don’t let short-term cost savings lead to higher costs in the long run. Using IBM Subscription & Support helps clients avoid vulnerability and ensures they have access to the latest program updates and critical security patches for optimal success.
Innovating to improve performance
IBM Software Subscription and Support offers the best protection for your IBM software in this digitally transforming world. It delivers innovation with new and improved features engineered to improve performance and functionality. Patent leadership and best-in-class innovation provide you with significant new capabilities that will take your business into the future and beyond.
Want to remain competitive and deliver a higher level of service to your customers? Learn how IBM Software Subscription and Support can benefit your business today.
The post Technology evolution and market transformation appeared first on Cloud computing news.
Source: Thoughts on Cloud

A Survey on Storage

Storage is a big part of any application design strategy, but containers throw something of a monkey wrench into the traditional storage models. You’ve likely noticed that Red Hat has a few dogs in the storage race, most notably Red Hat Container Storage. We’re curious to see how much our readers care about this topic, so we’ve whipped up yet another survey, and we’d be tickled pink if you could fill it out for us. We’re not sending any emails, marketing materials or salespeople out in response to this survey; we just want to check the oil, as it were. So if you’ve got a little extra time, would you mind answering a few quick questions for us? As always, our goal is to better serve our readers, not to better sell to them. We’re trying to treat our blog like a magazine, and before we start covering new topics and adding how-tos on storage, we’d like to see if those are topics that would interest you. Thanks for your time!


The post A Survey on Storage appeared first on Red Hat OpenShift Blog.
Source: OpenShift

How to Deploy and Manage PostgreSQL on OpenShift using the ROBIN Operator

This is a guest post by Ankur Desai, director of product at Robin.io.
After successfully deploying and running stateless applications, a number of developers are exploring the possibility of running stateful workloads, such as PostgreSQL, on OpenShift. If you are considering extending OpenShift for stateful workloads, this tutorial will help you experiment on your existing OpenShift environment by providing step-by-step instructions.
This tutorial will walk you through:

How to deploy a PostgreSQL database on OpenShift using the ROBIN Operator
How to create a point-in-time snapshot of the PostgreSQL database
How to simulate a user error and roll back to a stable state using the snapshot
How to clone the database for the purpose of collaboration
How to back up the database to the cloud using an AWS S3 bucket
How to simulate data loss/corruption and use the backup to restore the database

Install the ROBIN Operator from OperatorHub
Before we deploy PostgreSQL on OpenShift, let’s first install the ROBIN operator from the OperatorHub and use the operator to install ROBIN Storage on your existing OpenShift environment. You can find the Red Hat certified ROBIN operator here. Use the “Install” button and follow the instructions to install the ROBIN operator. Once the operator is installed you can use the “ROBIN Cluster” Custom Resource Definition at the bottom of the webpage to create a ROBIN cluster.
ROBIN Storage is an application-aware container storage that offers advanced data management capabilities and runs natively on OpenShift. ROBIN Storage delivers bare-metal performance and enables you to protect (via snapshots and backups), encrypt, collaborate (via clones and git like push/pull workflows) and make portable (via Cloud-sync) stateful applications that are deployed using Helm Charts or Operators.
Create a PostgreSQL Database
After you have installed ROBIN, let’s install the PostgreSQL client as the first step, so that we can use PostgreSQL once it is deployed.
yum install -y https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-redhat10-10-2.noarch.rpm
yum install -y postgresql10
Let’s confirm that OpenShift cluster is up and running.
oc get nodes
You should see an output similar to below, with the list of nodes and their status as “Ready.”

Let’s confirm that ROBIN is up and running. Run the following command to verify that ROBIN is ready.
oc get robincluster -n robinio

Let’s set up Helm now. ROBIN has helper utilities to initialize Helm.
robin k8s deploy-tiller-objects
robin k8s helm-setup
helm repo add stable https://kubernetes-charts.storage.googleapis.com
Let’s create a PostgreSQL database using Helm and ROBIN Storage. Before continuing, it’s important to note that the process shown, using Helm and Tiller, is provided as an example only.  The supported method of using Helm charts with OpenShift is via the Helm operator.
Using the below Helm command, we will install a PostgreSQL instance. When we installed the ROBIN operator and created a “ROBIN cluster” custom resource definition, we created and registered a StorageClass named “robin-0-3” with OpenShift. We can now use this StorageClass to create PersistentVolumes and PersistentVolumeClaims for the pods in OpenShift. Using this StorageClass allows us to access the data management capabilities (such as snapshot, clone, backup) provided by ROBIN Storage. For our PostgreSQL database, we will set the StorageClass to robin-0-3 to benefit from data management capabilities ROBIN Storage brings.
helm install stable/postgresql --name movies --tls --set persistence.storageClass=robin-0-3 --namespace default --tiller-namespace default
Run the following command to verify our database called “movies” is deployed and all relevant Kubernetes resources are ready.
helm list -c movies --tls --tiller-namespace default
You should be able to see an output showing the status of your Postgres database.

You would also want to make sure Postgres database services are running before proceeding further. Run the following command to verify the services are running.
oc get service | grep movies

Now that we know the PostgreSQL services are up and running, let’s get the Service IP address of our database.
export IP_ADDRESS=$(oc get service movies-postgresql -o jsonpath={.spec.clusterIP})
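For readers unfamiliar with jsonpath, the same extraction can be done on the Service’s JSON with a few lines of Python. The manifest below is a trimmed, hypothetical sample, not output from a real cluster:

```python
import json

# Stand-in for `oc get service movies-postgresql -o json`
# (a real Service object has many more fields).
service_json = '''
{
  "kind": "Service",
  "metadata": {"name": "movies-postgresql"},
  "spec": {"clusterIP": "172.30.21.97", "ports": [{"port": 5432}]}
}
'''

service = json.loads(service_json)
# Same path the jsonpath expression {.spec.clusterIP} walks:
ip_address = service["spec"]["clusterIP"]
print(ip_address)   # 172.30.21.97
```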
Let’s get the password of our PostgreSQL database from the Kubernetes Secret.
export POSTGRES_PASSWORD=$(oc get secret --namespace default movies-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
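Kubernetes Secrets store values base64-encoded, which is why the command pipes through base64 --decode. The same step in Python, using a made-up password purely for illustration:

```python
import base64

# Encode a sample password the way it would appear inside the Secret,
# then decode it back, mirroring `base64 --decode` in the shell above.
# "s3cretPassw0rd" is a placeholder, not a real Secret value.
encoded = base64.b64encode(b"s3cretPassw0rd").decode()
postgres_password = base64.b64decode(encoded).decode()
print(postgres_password)   # s3cretPassw0rd
```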
Add data to the PostgreSQL database
We’ll use movie data to load data into our Postgres database.
Let’s create a database “testdb” and connect to “testdb”.

PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -c "CREATE DATABASE testdb;"

For the purpose of this tutorial, let’s create a table named “movies”.
PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "CREATE TABLE movies (movieid TEXT, year INT, title TEXT, genre TEXT);"
We need some sample data to perform operations on. Let’s add 9 movies to the “movies” table.
PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "INSERT INTO movies (movieid, year, title, genre) VALUES
  ('tt0360556', 2018, 'Fahrenheit 451', 'Drama'),
  ('tt0365545', 2018, 'Nappily Ever After', 'Comedy'),
  ('tt0427543', 2018, 'A Million Little Pieces', 'Drama'),
  ('tt0432010', 2018, 'The Queen of Sheba Meets the Atom Man', 'Comedy'),
  ('tt0825334', 2018, 'Caravaggio and My Mother the Pope', 'Comedy'),
  ('tt0859635', 2018, 'Super Troopers 2', 'Comedy'),
  ('tt0862930', 2018, 'Dukun', 'Horror'),
  ('tt0891581', 2018, 'RxCannabis: A Freedom Tale', 'Documentary'),
  ('tt0933876', 2018, 'June 9', 'Horror');"
Let’s verify data was added to the “movies” table by running the following command.
PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "SELECT * from movies;"
You should see an output with the “movies” table and the nine rows in it as follows:

We now have a PostgreSQL database with a table and some sample data. Now, let’s take a look at the data management capabilities ROBIN brings, such as taking snapshots, making clones, and creating backups.
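If you’d like to experiment with the same queries without a cluster, the table can be reproduced locally. Here is a sketch using Python’s sqlite3 purely as a stand-in for the PostgreSQL instance:

```python
import sqlite3

# Recreate the tutorial's "movies" table and its nine rows in an
# in-memory SQLite database (a local stand-in, not PostgreSQL).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movies (movieid TEXT, year INT, title TEXT, genre TEXT)")
rows = [
    ('tt0360556', 2018, 'Fahrenheit 451', 'Drama'),
    ('tt0365545', 2018, 'Nappily Ever After', 'Comedy'),
    ('tt0427543', 2018, 'A Million Little Pieces', 'Drama'),
    ('tt0432010', 2018, 'The Queen of Sheba Meets the Atom Man', 'Comedy'),
    ('tt0825334', 2018, 'Caravaggio and My Mother the Pope', 'Comedy'),
    ('tt0859635', 2018, 'Super Troopers 2', 'Comedy'),
    ('tt0862930', 2018, 'Dukun', 'Horror'),
    ('tt0891581', 2018, 'RxCannabis: A Freedom Tale', 'Documentary'),
    ('tt0933876', 2018, 'June 9', 'Horror'),
]
conn.executemany("INSERT INTO movies VALUES (?, ?, ?, ?)", rows)
count = conn.execute("SELECT count(*) FROM movies").fetchone()[0]
print(count)   # 9
```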
Register the PostgreSQL Helm release as an application
To benefit from the data management capabilities, we’ll register our PostgreSQL database with ROBIN. Doing so will let ROBIN map and track all resources associated with the Helm release for this PostgreSQL database. 
Let’s first get the ‘robin’ client utility and set it up to work with this OpenShift cluster.
To get the link to download ROBIN client do:
oc describe robinclusters -n robinio
You should see an output similar to below:

Find the field ‘Get _ Robin _ Client’ and run the corresponding command to get the ROBIN client.
curl -k https://10.9.40.125:29451/api/v3/robin_server/client/linux -o robin
In the same output, note the field ‘Master _ Ip’ and use it to set up your ROBIN client to work with your OpenShift cluster by running the following command.
export ROBIN_SERVER=10.9.40.125
Now you can register the Helm release as an application with ROBIN. To do so, run the following command:
robin app register movies --app helm/movies

Let’s verify ROBIN is now tracking our PostgreSQL Helm release as a single entity (app).
robin app status --app movies
You should see an output similar to this:

We have successfully registered our Helm release as an app called “movies”.
Snapshot and Rollback a PostgreSQL Database on OpenShift
If you make a mistake, such as unintentionally deleting important data, you may be able to undo it by restoring a snapshot. Snapshots allow you to restore the state of your application to a point-in-time. 
ROBIN lets you snapshot not just the storage volumes (PVCs) but the entire database application including all its resources such as Pods, StatefulSets, PVCs, Services, ConfigMaps etc. with a single command. To create a snapshot, run the following command.
robin snapshot create snap9movies movies --desc "contains 9 movies" --wait
Let’s verify we have successfully created the snapshot.
robin snapshot list --app movies
You should see an output similar to this:

We now have a snapshot of our entire database with information of all 9 movies.
Rolling back to a point-in-time using snapshot
We have 9 rows in our “movies” table. To test the snapshot and rollback functionality, let’s simulate a user error by deleting a movie from the “movies” table.
PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "DELETE from movies where title = 'June 9';"
Let's verify the movie titled "June 9" has been deleted.
PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "SELECT * from movies;"
You should see the row with the movie “June 9” does not exist in the table anymore.

Let’s run the following command to see the available snapshots:
robin app info movies
You should see an output similar to the following. Note the snapshot id, as we will use it in the next command.

Now, let’s rollback to the point where we had 9 movies, including “June 9”, using the snapshot id displayed above.
robin app rollback movies Your_Snapshot_ID --wait

To verify we have rolled back to 9 movies in the “movies” table, run the following command.
PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "SELECT * from movies;"
You should see an output similar to the following:

We have successfully rolled back to our original state with 9 movies! 
Clone a PostgreSQL Database Running on OpenShift
ROBIN lets you clone not just the storage volumes (PVCs) but the entire database application including all its resources such as Pods, StatefulSets, PVCs, Services, ConfigMaps, etc. with a single command. 
Application cloning improves the collaboration across Dev/Test/Ops teams. Teams can share applications and data quickly, reducing the procedural delays involved in re-creating environments. Each team can work on their clone without affecting other teams. Clones are useful when you want to run a report on a database without affecting the source database application, or for performing UAT tests or for validating patches before applying them to the production database, etc.
ROBIN clones are ready-to-use “thin copies” of the entire app/database, not just storage volumes. Thin-copy means that data from the snapshot is NOT physically copied, therefore clones can be made very quickly. ROBIN clones are fully-writable and any modifications made to the clone are not visible to the source app/database.
To create a clone from the existing snapshot created above, run the following command. Use the snapshot id we retrieved above.
robin clone create movies-clone Your_Snapshot_ID --wait

Let’s verify ROBIN has cloned all relevant Kubernetes resources.
oc get all | grep "movies-clone"
You should see an output similar to below.

Notice that ROBIN automatically clones the required Kubernetes resources, not just storage volumes (PVCs), that are required to stand up a fully-functional clone of our database. After the clone is complete, the cloned database is ready for use.
Get the Service IP address of our PostgreSQL database clone, and note the IP address:
export IP_ADDRESS=$(oc get service movies-clone-movies-postgresql -o jsonpath={.spec.clusterIP})
Get the password of our PostgreSQL database clone from the Kubernetes Secret:
export POSTGRES_PASSWORD=$(oc get secret movies-clone-movies-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To verify we have successfully created a clone of our PostgreSQL database, run the following command.
PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "SELECT * from movies;"
You should see an output similar to the following:

We have successfully created a clone of our original PostgreSQL database, and the cloned database also has a table called “movies” with 9 rows, just like the original.
Now, let’s make changes to the clone and verify the original database remains unaffected by changes to the clone. Let’s delete the movie called “Super Troopers 2”.
PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "DELETE from movies where title = 'Super Troopers 2';"
Let’s verify the movie has been deleted.
PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "SELECT * from movies;"
You should see an output similar to the following with 8 movies.

Now, let’s connect to our original PostgreSQL database and verify it is unaffected.
Get the Service IP address of our original PostgreSQL database:
export IP_ADDRESS=$(oc get service movies-postgresql -o jsonpath={.spec.clusterIP})
Get the password of our original PostgreSQL database from the Kubernetes Secret:
export POSTGRES_PASSWORD=$(oc get secret --namespace default movies-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To verify that our PostgreSQL database is unaffected by changes to the clone, connect to "testdb" and check the records:

PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "SELECT * from movies;"

You should see an output similar to the following, with all 9  movies present:

This means we can work on the original PostgreSQL database and the cloned database simultaneously without affecting each other. This is valuable for collaboration across teams, where each team needs to perform a unique set of operations.
To see a list of all clones created by ROBIN run the following command:
robin app list
Now let's delete the clone. A clone is just like any other ROBIN app, so it can be deleted using the ‘robin app delete’ command.
robin app delete movies-clone --wait

Backup a PostgreSQL Database from OpenShift to AWS S3
ROBIN elevates the experience from backing up just storage volumes (PVCs) to backing up entire applications/databases, including their metadata, configuration, and data.                                                                                               
A backup is a full copy of the application snapshot that resides on completely different storage media than the application's data. Backups are therefore useful for restoring an entire application in the event of catastrophic failures, such as disk errors, server failures, or an entire data center going offline. (This assumes your backup doesn't reside in the data center that is offline, of course.)
Let's now back up our database to an external secondary storage repository (repo). Snapshots (metadata + configuration + data) are backed up into the repo.
ROBIN enables you to back up your Kubernetes applications to AWS S3 or Google Cloud Storage (GCS). In this demo we will use AWS S3 to create the backup.
Before we proceed, we need to create an S3 bucket and get access parameters for it. Follow the documentation here. 
Let’s first register an AWS repo with ROBIN:
robin repo register pgsqlbackups s3://robin-pgsql/pgsqlbackups awstier.json readwrite --wait

Let’s confirm that our secondary storage repository is successfully registered:
robin repo list

You should see an output similar to the following:
Let’s attach this repo to our app so that we can backup its snapshots there:
robin repo attach pgsqlbackups movies --wait
Let's confirm that our secondary storage repository is successfully attached to the app:
robin app info movies
You should see an output similar to the following:

Let's back up our snapshot to the registered secondary storage repository:
robin backup create bkp-of-my-movies Your_Snapshot_ID pgsqlbackups --wait
Let’s confirm that the snapshot has been backed up in S3:
robin app info movies
You should see an output similar to the following:

Let's also confirm that the backup has been copied to the remote S3 repo:
robin repo contents pgsqlbackups
You should see an output similar to the following:

The snapshot has now been backed up into our AWS S3 bucket.
Now that we have backed up our application snapshot to the cloud, let's delete the local snapshot.
robin snapshot delete Your_Snapshot_ID --wait

Now let’s simulate a data loss situation by deleting all data from the “movies” table.                 
PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "DELETE from movies;"
Let’s verify all data is lost.
PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "SELECT * from movies;"

We will now use our backed-up snapshot on S3 to restore the data we just lost.
Let's pull the snapshot from the backup in the cloud so we can roll back our application to it.
robin snapshot pull movies Your_Backup_ID --wait

Remember, we had deleted the local snapshot of our data. Let’s verify the above command has pulled the snapshot stored in the cloud. Run the following command:
robin snapshot list --app movies

Now we can roll back to the snapshot to get our data back and restore the desired state.
robin app rollback movies Your_Snapshot_ID --wait
Let’s verify all 9 rows are restored to the “movies” table by running the following command:

PGPASSWORD="$POSTGRES_PASSWORD" psql -h $IP_ADDRESS -U postgres -d testdb -c "SELECT * from movies;"

As you can see, we can restore the database to a desired state in the event of data corruption. We simply pull the backup from the cloud and use it to restore the database.
Running databases on OpenShift can improve developer productivity, reduce infrastructure cost, and provide multi-cloud portability. To learn more about using ROBIN Storage on OpenShift, visit the ROBIN Storage for OpenShift solution page.
The post How to Deploy and Manage PostgreSQL on OpenShift using the ROBIN Operator appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

OpenStack vs AWS Total Cost of Ownership: Assumptions behind the TCO Calculator

The post OpenStack vs AWS Total Cost of Ownership: Assumptions behind the TCO Calculator appeared first on Mirantis | Pure Play Open Cloud.
It’s easy to think that the least expensive way to spin up cloud servers is to use a service such as Amazon Web Services. After all, you don’t need to buy hardware or electricity, or even hire people to manage it, right?  Well … not so fast. It’s not as simple as that. While there are lots of situations in which AWS is the least expensive, there are times when it’s less costly to bring your workloads on-prem with an OpenStack deployment.
Here at Mirantis, we’ve created a Total Cost of Ownership calculator that compares the two.  For example, if you have 300 virtual machines, AWS can be more cost-effective, but once you get to 400, OpenStack is the way to go.  But before relying on any TCO calculator, it’s important to know the assumptions behind it so that you can understand how those determinations are made.
In this article, we’ll explain how the TCO calculator comes to its conclusions, and you can feel free to download the full spreadsheet yourself to take a look at how it all fits together.
Common assumptions
Whether you’re using AWS or OpenStack, there are certain things you need to take into consideration.  For example, just because you’re using a cloud doesn’t mean that you don’t need staff, and it doesn’t mean that VMs will always be 100% occupied — and all of that will impact your costs. 
In our case, we’re making the following general assumptions:

Average data-out from the cloud per VM is 1 TB/month, or 385,802 bytes/sec
Each compute node hosts an average of 28 VMs
Each VM needs an average of 20 GB of block storage (or 560 GB per server)
Storage is provided with a ratio of raw:usable space of 3.3, allowing for redundancy, and so on
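
Put together, these assumptions are enough for a quick back-of-the-envelope sizing. The sketch below is illustrative only; the 400-VM input is hypothetical, while the per-node density, per-VM storage, and raw:usable ratio come straight from the list above.

```shell
# Rough sizing sketch from the assumptions above (hypothetical 400-VM cloud).
VMS=400            # hypothetical number of VMs to host
VMS_PER_NODE=28    # average VMs per compute node (from the list above)
GB_PER_VM=20       # average block storage per VM (from the list above)

# ceil(VMS / VMS_PER_NODE) compute nodes are needed
NODES=$(( (VMS + VMS_PER_NODE - 1) / VMS_PER_NODE ))

# usable storage, then raw storage at the 3.3 raw:usable ratio
USABLE_GB=$(( VMS * GB_PER_VM ))
RAW_GB=$(( USABLE_GB * 33 / 10 ))

# cross-check the data-out figure: 1 TB/month over a 30-day month
BYTES_PER_SEC=$(( 1000000000000 / (30 * 86400) ))

echo "compute nodes: $NODES"                        # 15
echo "block storage: ${USABLE_GB} GB usable, ${RAW_GB} GB raw"
echo "data-out per VM: ${BYTES_PER_SEC} bytes/sec"  # 385802
```

The last line reproduces the 385,802 bytes/sec figure quoted above.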

Efficiencies
It’s important to understand that whether you run in the private cloud/on-prem or in the public cloud, there is going to be some level of inefficiency.  For example, we’re assuming that at any given time, only 60% of your OpenStack cloud hardware is in use, but because it’s private cloud/on-premise, you’re still paying for it.  
You don’t have this problem with AWS, because you can always just stop instances you’re not using, but that doesn’t mean you will always run at 100% efficiency. The TCO calculator assumes 80% efficiency for AWS instances, taking into account that there will be times instances are simply too big for the workloads running on them (but you still pay full price for the VM).
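
To see what those utilization figures imply for cost, divide the sticker price of capacity by the efficiency. The $1,000/month figure below is a hypothetical capacity cost, not a number from the calculator; only the 60% and 80% efficiencies come from the text.

```shell
# Effective cost per unit of useful work at the assumed utilization levels.
awk 'BEGIN {
  sticker = 1000   # hypothetical monthly sticker price for a block of capacity
  printf "OpenStack at 60%% utilization: $%.2f per useful unit\n", sticker / 0.60
  printf "AWS at 80%% utilization:       $%.2f per useful unit\n", sticker / 0.80
}'
```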
Personnel
Staffing may not seem like an issue common to both on- and off-prem clouds, but it is.
For OpenStack clouds, we’re assuming that you will need an OpenStack admin team with a minimum of two persons, because your cloud has to be up 24/7.  We’re also assuming that one admin can handle up to 50 individual servers. Once you get over 200 servers, we’re assuming that allowing for “fractional” staff is acceptable.
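
The OpenStack staffing rule above can be sketched as a quick calculation. The 120-server input is hypothetical; the one-admin-per-50-servers ratio, the two-person minimum, fractional staffing above 200 servers, and the $140,000/year admin cost assumed just below come from the article.

```shell
# OpenStack admin staffing under the assumptions above (hypothetical 120-server cloud).
SERVERS=120
awk -v s="$SERVERS" 'BEGIN {
  a = s / 50                              # one admin per 50 servers
  if (s <= 200) {                         # whole-person staffing up to 200 servers
    a = (a == int(a)) ? a : int(a) + 1    # round up to a full admin
    if (a < 2) a = 2                      # minimum of two for 24/7 coverage
  }                                       # above 200 servers, fractional staff is acceptable
  printf "admins: %g, annual cost: $%d\n", a, a * 140000
}'
```

For 120 servers this yields 3 admins at $420,000/year.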
For AWS, you’ll still need at least one AWS internal administrator to support users who are in the cloud.
We’re assuming a full-time admin costs $140,000/year. Now let’s look at some assumptions that are specific to each platform.  
OpenStack-specific assumptions
Let’s start with OpenStack.  If you’re running an on-prem OpenStack cloud, you’ll have expenses that won’t exist for the public cloud, such as hardware and electricity.
Hardware and software:
The TCO calculator assumes that each physical server consists of:

System: Intel 2U R2312WT Wildcat-pass Server System
Dual Intel® Xeon® E5-2690v4 14 Core 2.60GHz 35MB Cache
256GB LRDIMM DDR4 2400
2.5TB SSD (5 x Micron M600 512GB SATA 2.5″ SSD)
2 x GE
3-year warranty

For storage we’re assuming each server consists of the following:

SuperStorage Server 6028R-E1CR12L
Single Intel® Xeon® processor E5-2620 v3, 6-Core 2.4G 20M 8GT/s QPI
64GB RAM
Storage: Chipset 12x 3.5″ 6TB SATA3 HDDs + 1x 800GB NVMe SSD (in rear 2.5″ Hot-swap OS drives (mirrored 80GB SSD))
Dual-port 10G SFP+ via AOC-STGN-I2S
2U w/ redundant Hot-swap 920W power supplies
3-year warranty

We’re also assuming a 30% discount off hardware list prices. We’ll depreciate the hardware over 3 years, and assume it needs to be replaced after 4 years.
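
As an illustration of how the discount and depreciation assumptions combine, take a hypothetical $10,000 list-price server; only the 30% discount and the 3-year straight-line depreciation are from the text.

```shell
# Annualized hardware cost for a hypothetical $10,000 list-price server.
LIST_PRICE=10000                   # hypothetical list price
NET=$(( LIST_PRICE * 70 / 100 ))   # after the assumed 30% discount
PER_YEAR=$(( NET / 3 ))            # straight-line depreciation over 3 years
echo "net price: \$$NET, depreciation: \$$PER_YEAR/year"   # net price: $7000, depreciation: $2333/year
```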
We also need to take into consideration a number of other factors, including:

Bandwidth costs in and out of the datacenter: $19.00 per Mbps/year
H/W & bandwidth price degradation/decline year over year: 10%
Admin cost reduction due to automation year over year: 5%
Ratio of compute nodes to OpenStack Controller nodes: 25
Power cost: $0.10/kWh
Networking overhead (End of Row switches, WAN connectivity) as a % of server hardware acquisition cost: 10%
Networking annual maintenance/support as a % of network hardware and software costs: 15%
Networking admin cost as a % of total IT admin effort: 8%
Rack annual maintenance/support as a % of rack costs: 15%
Average power/rack: 10.0 kW
2 x 10 GB Top of Rack (ToR) switches + 2 x power distribution units (PDUs) + rack chassis + 6 x fiber optic cables + 42 x Cat5 cables: $16,698 (https://www.fs.com/c/10g-switches-3256 has cheaper switches)
Monthly cost to operate a rack: $1,500
Install cost per rack: $2,000
Lab overhead per node (for testing/staging) per year: $50
Storage server raw storage: 72,000 GB (12 x 6 TB)

AWS-specific assumptions
Similarly, there are costs that are specific to an AWS implementation. Some, such as Enterprise Support, have corresponding costs on the OpenStack side. Others, such as cost differentials for reserved instances, are specific to AWS. The TCO calculator assumes the following:

AWS machine selected (2 vCPU, 8 GB memory): m4.large
AWS price & inter-datacenter pipe costs degradation/decline year over year: 3%
Number of LBaaS: 1
% of AWS VMs using reserved instances: 30%
% discount on reserved instances (typical range 31–60%): 47%
Direct links to AWS: $24,000 per year
Inter-region data transfer/month in terms of the data xfer out: 1
Enterprise support as a % of spend on AWS services: 4%
AWS support team (internal): 1.00 Full-Time Equivalent
AWS Discount Tier: 14%

So what’s best?
OK, so what does the TCO analysis tell us?  Which is a better choice from a financial standpoint? As with so many things, it depends on your situation.  For example, AWS is generally less expensive if you need multiple data centers — unless you have a large number of VMs, in which case OpenStack is less expensive. So do yourself a favor and plug your information into the TCO calculator and find out. Please let us know what you find!
Quelle: Mirantis

OpenShift 4.1 UPI environment deployment on Microsoft Azure Cloud

Red Hat released Red Hat OpenShift Container Platform 4.1 (OCP4) earlier this year, introducing installer provisioned infrastructure and user provisioned infrastructure approaches on Amazon Web Services (AWS). The installer provisioned infrastructure method is quick and only requires AWS credentials, access to Red Hat telemetry, and a domain name. Red Hat also released a user provisioned approach, where OCP4 can be deployed by leveraging CloudFormation templates and using the same installer to generate Ignition configuration files.
Since Microsoft Azure is getting more and more business attention, a natural question is: when will OCP4 be released on Azure Cloud? At the time of writing, OCP4 on Azure using the installer is in developer preview.
One of the main challenges with running OCP4 on Azure with the installer provisioned infrastructure method is setting up custom Ingress infrastructure (e.g. custom Network Security Groups or a custom Load Balancer for routers), because the Cluster Ingress Operator creates a public-facing Azure Load Balancer to serve routers by default, and once the cluster is deployed, the Ingress Controller type cannot be changed.
If it is deleted, or the OpenShift router Service type is changed, the Cluster Ingress Operator will reconcile and recreate the default controller object.
Trying to alter Network Security Groups by whitelisting allowed IP ranges will cause Kubernetes to reconcile the configuration back to its desired state.
One way around this is to deploy OCP4 on Azure Cloud by creating the objects manually with the user provisioned infrastructure approach, and then recreating the default Ingress Controller object just after the control plane is deployed.
Openshift Container Platform 4.1 components
Our cluster consists of 3 master and 2 compute nodes. Master nodes are fronted with 2 Load Balancers, 1 Public facing for external API calls, and 1 Private for internal cluster communication. Compute nodes are using the same Public facing Load Balancer as the masters, but if needed they can each have their own Load Balancer.

Figure 1. OCP 4.1 design diagram with user provisioned infrastructure on Azure Cloud
Instances sizes
The OpenShift Container Platform 4.1 environment has some minimum hardware requirements.

Instance type   Bootstrap   Control plane   Compute nodes
D2s_v3          -           -               X
D4s_v3          X           X               X

The above VM sizes might change once OpenShift Container Platform 4.1 is officially released for Azure.
Azure Cloud preparation for OCP 4.1 installation
The preparation steps here are the same as for Installer Provisioned Infrastructure. You need to complete these steps:

DNS Zone.
Credentials.
Cluster Installation (Follow the guide until cluster deployment section).

NOTE: A free trial account is not sufficient; a Pay As You Go account with an increased vCPU quota is recommended.
 
User Provisioned Infrastructure based OCP 4.1 installation
When using this method, you can:

Specify the number of masters and workers you want to provision
Change Network Security Group rules in order to lock down the ingress access to the cluster 
Change Infrastructure component names
Add tags

This Terraform-based approach will split VMs across 3 Azure Availability Zones and will use 2 zone-redundant Load Balancers (1 public facing to serve OCP routers and the API, and 1 private to serve api-int).
Deployment can be split into 4 steps:

Create the Control Plane (masters) and Surrounding Infrastructure (LB,DNS,VNET etc.) 
Set the default Ingress controller to type “HostNetwork”
Destroy Bootstrap VM
Create Compute (worker) nodes

This method uses the following tools:

terraform >= 0.12
openshift-cli
git
jq (optional)

Prerequisites
We will deploy Red Hat Openshift Container Platform v4.1 on Microsoft Azure Cloud by using Terraform, since it is one of the most popular Infrastructure-as-Code tools.
Download Git repository content containing terraform scripts:
git clone https://github.com/JuozasA/ocp4-azure-upi.git
cd ocp4-azure-upi
Download the openshift-install binary and the pull secret by following this link.
 
Copy openshift-install binary to /usr/local/bin directory:
 cp openshift-install /usr/local/bin/
 
Generate install config files:
./openshift-install create install-config --dir=ignition-files
? SSH Public Key /home/user_id/.ssh/id_rsa.pub
? Platform azure
? azure subscription id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? azure tenant id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? azure service principal client id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? azure service principal client secret xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? Region <Azure region>
? Base Domain example.com
? Cluster Name <cluster name. this will be used to create subdomain, e.g. test.example.com>
? Pull Secret [? for help]
Edit the install-config.yaml file to set the number of compute, or worker, replicas to 0:
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0
Generate the Kubernetes manifests, which define the objects the bootstrap node will have to create initially:
openshift-install create manifests --dir=ignition-files
Remove the files that define the control plane machines and worker machinesets:
rm -f ignition-files/openshift/99_openshift-cluster-api_master-machines-*
rm -f ignition-files/openshift/99_openshift-cluster-api_worker-machineset-*
Because you create and manage the worker machines yourself, you do not need to initialize these machines.
Obtain the Ignition config files. More about Ignition utility here. 
openshift-install create ignition-configs --dir=ignition-files
Extract the infrastructure name from the Ignition config file metadata:
 jq -r .infraID ignition-files/metadata.json

Open terraform.tfvars file and fill in the variables:
azure_subscription_id = ""
azure_client_id = ""
azure_client_secret = ""
azure_tenant_id = ""
azure_bootstrap_vm_type = "Standard_D4s_v3" <- Size of the bootstrap VM
azure_master_vm_type = "Standard_D4s_v3" <- Size of the master VMs
azure_master_root_volume_size = 64 <- Disk size for master VMs
azure_image_id = "/resourceGroups/rhcos_images/providers/Microsoft.Compute/images/rhcostestimage" <- Location of the CoreOS image
azure_region = "uksouth" <- Azure region (the one you selected when creating install-config)
azure_base_domain_resource_group_name = "ocp-cluster" <- Resource group for the base domain and rhcos vhd blob
cluster_id = "openshift-lnkh2" <- infraID parameter extracted from metadata.json
base_domain = "example.com"
machine_cidr = "10.0.0.0/16" <- Address range which will be used for VMs
master_count = 3 <- Number of masters
Open worker/terraform.tfvars and fill in information there as well. 
Start OCP v4.1 Deployment
Initialize Terraform directory:
terraform init
Run Terraform Plan and check what resources will be provisioned:
terraform plan
Once ready, run Terraform apply to provision Control plane resources:
terraform apply
Once the Terraform job is finished, run openshift-install, which will wait until bootstrapping is finished:
openshift-install wait-for bootstrap-complete --dir=ignition-files
Once the bootstrapping is finished, export the kubeconfig environment variable and replace the default Ingress Controller object with the one with the endpointPublishingStrategy of type HostNetwork. This will disable the creation of the Public facing Azure Load Balancer and will allow you to have custom Network Security Rules which won’t be overwritten by Kubernetes.
export KUBECONFIG=$(pwd)/ignition-files/auth/kubeconfig

oc delete ingresscontroller default -n openshift-ingress-operator

oc create -f ingresscontroller-default.yaml
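
The ingresscontroller-default.yaml file comes with the cloned repository and is not shown in this post. As a rough sketch of what such a manifest looks like (the API group and field names below are from the OpenShift 4.1 IngressController API, but treat the exact contents as an assumption rather than the repository's file):

```shell
# Sketch of an IngressController manifest equivalent to ingresscontroller-default.yaml.
# The repository's actual file may differ; this is an assumed minimal version.
cat > ingresscontroller-default.yaml <<'EOF'
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: HostNetwork    # publish routers on node host networking; no public Azure LB
EOF
```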
Since we don’t need the bootstrap VM anymore, we can remove it:
terraform destroy -target=module.bootstrap
Now we can continue with provisioning the Compute nodes:
cd worker
terraform init
terraform plan
terraform apply
cd ../
Since we are provisioning Compute nodes manually, we need to approve kubelet CSRs:
worker_count=`cat worker/terraform.tfvars | grep worker_count | awk '{print $3}'`
while [ $(oc get csr | grep worker | grep Approved | wc -l) != $worker_count ]; do
  oc get csr -o json | jq -r '.items[] | select(.status == {}) | .metadata.name' | xargs oc adm certificate approve
  sleep 3
done
Check openshift-ingress service type (it should be type: ClusterIP):
oc get svc -n openshift-ingress
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                   AGE
router-internal-default   ClusterIP   172.30.72.53   <none>        80/TCP,443/TCP,1936/TCP   37m
Wait for installation to be completed. Run the openshift-install command: 
openshift-install wait-for install-complete --dir=ignition-files
The last command will output the cluster console URL and the kubeadmin username/password.
Scale Up
In order to add additional worker nodes, use the Terraform scripts in the scaleup directory. Fill in the variables in terraform.tfvars:
azure_subscription_id = ""
azure_client_id = ""
azure_client_secret = ""
azure_tenant_id = ""
azure_worker_vm_type = "Standard_D2s_v3"
azure_worker_root_volume_size = 64
azure_image_id = "/resourceGroups/rhcos_images/providers/Microsoft.Compute/images/rhcostestimage"
azure_region = "uksouth"
cluster_id = "openshift-lnkh2"
Run terraform init and the script:
cd scaleup
terraform init
terraform apply
It will ask you to provide the Azure Availability Zone number where you would like to deploy the new node, and the worker node number (if it is the 4th node, the number is 3, since indexing starts from 0 rather than 1).
Approving server certificates for nodes
To allow API server to communicate with the kubelet running on nodes, you need to approve the CSR generated by each kubelet.
You can approve all Pending CSR requests using:
oc get csr -o json | jq -r '.items[] | select(.status == {}) | .metadata.name' | xargs oc adm certificate approve
Conclusion
OpenShift Container Platform 4.1 Internet ingress access can be restricted by changing Network Security Groups on Azure Cloud if we inform the Ingress controller not to create a public facing Load balancer. Since we are using Terraform to provision infrastructure, multiple infrastructure elements are changeable and the whole OpenShift Container Platform 4.1 infrastructure provisioning can be added to the wider infrastructure provisioning pipeline, e.g. Azure DevOps.
It is worth mentioning that, at the time of writing, Red Hat OpenShift Container Platform 4.1 deployed on user provisioned infrastructure is not yet supported on Microsoft Azure Cloud, and some features might not work as you expect; e.g. the internal image registry is ephemeral, and all images will be gone if the image registry pod gets restarted.
 
The post OpenShift 4.1 UPI environment deployment on Microsoft Azure Cloud appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

How To Capture Database Changes

We really couldn’t add any information to Sadhana Nandakumar’s excellent walkthrough for capturing and propagating database changes using Red Hat AMQ Streams and Red Hat Fuse. She’s done a terrific job of laying out the steps and guiding users through the process of setting up such a system on Red Hat OpenShift. From Sadhana’s text:
The idea is to enable applications to respond almost immediately whenever there is a data change. We capture the changes as they occur using Debezium and stream it using Red Hat AMQ Streams. We then filter and transform the data using Red Hat Fuse and send it to Elasticsearch, where the data can be further analyzed or used by downstream systems.
Head on over to the Red Hat Developer Blog to read the full how-to, with instructions and links to everything you’ll need.
The post How To Capture Database Changes appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift