Introducing BigQuery Flex Slots for unparalleled flexibility and control

Organizations of all sizes look to BigQuery to meet their growing analytics needs. We hear that customers value BigQuery's radically innovative architecture, serverless delivery model, and integrated advanced capabilities in machine learning, real-time analytics, and business intelligence. To help you balance explosive demand for analytics with the need for predictable spend, central control, and powerful workload management, we recently launched BigQuery Reservations.

Today we are introducing Flex Slots, a new way to purchase BigQuery slots for short durations, as little as 60 seconds at a time. A slot is the unit of BigQuery analytics capacity. Flex Slots let you quickly respond to rapid demand for analytics and prepare for business events such as retail holidays and app launches. Flex Slots are rolling out to all BigQuery Reservations customers in the coming days!

Flex Slots give BigQuery Reservations users immense flexibility without sacrificing cost predictability or control. Flex Slots are priced at $30 per slot per month and are available in increments of 500 slots. It takes only seconds to deploy Flex Slots in BigQuery Reservations. You can cancel after just 60 seconds, and you will only be billed for the seconds Flex Slots are deployed.

Benefits of Flex Slots

You can seamlessly combine Flex Slots with existing annual and monthly commitments to supplement steady-state workloads with bursty analytics capability. You may find Flex Slots especially helpful for short-term uses, including:

Planning for major calendar events, such as tax season, Black Friday, popular media events, and video game launches.
Meeting cyclical periods of high demand for analytics, like Monday mornings.
Completing your data warehouse evaluations and dialing in the optimal number of slots to use.

Major calendar events. For many businesses, specific days or weeks of the year are crucial. Retailers care about Black Friday and Cyber Monday, gaming studios focus on the first few days after launching new titles, and financial services companies worry about quarterly reporting and tax season. Flex Slots enable such organizations to scale up their analytics capacity for the few days needed to sustain the business event and scale down afterwards, paying only for what they consumed.

Payment technology provider Global Payments plans to add even more flexibility to their usage with this feature. "BigQuery has been a steady engine driving our Merchant Portal Platform and analytics use cases. As a complex multinational organization, we were anxious to leverage BigQuery Reservations to manage BigQuery cost and resources. We had been able to manage our resources effectively in most areas but were missing a few," says Mark Kubik, VP BI, data and analytics, application delivery at Global Payments. "With Flex Slots, we can now better plan for automated test suites, load testing, and seasonal events and respond to rapid growth in our business. We are eager to implement this new feature in our workloads to drive efficiency, customer experience, and improved testing."

Cyclical demand. If the majority of your users log into company systems at nine every Monday morning to check their business dashboards, you can spin up Flex Slots to rapidly respond to the increased demand on your data warehouse. This is something the team at Forbes has found helpful. "Moving to BigQuery Reservations enabled us to self-manage our BigQuery costs," says David Johnson, vice president, business intelligence, Forbes. "Flex Slots will give us an additional layer of flexibility—we can now bring up slots whenever we have a large processing job to complete, and only pay for the few minutes they were needed."

Evaluations. Whether you're deciding on BigQuery as your cloud data warehouse or trying to understand the right number of BigQuery slots to purchase, Flex Slots provide the flexibility to quickly experiment with your environment.

The BigQuery advantage

Flex Slots are especially powerful given BigQuery's unique architecture and true separation of storage and compute. Because BigQuery is serverless, provisioning Flex Slots doesn't require instantiating virtual machines. It's a simple back-end configuration change, so acquiring Flex Slots happens very quickly. And because BigQuery doesn't rely on local disk for performance, there is no warm-up period with poor and unpredictable performance. Flex Slots perform optimally from the moment they're provisioned.

Flex Slots are an essential part of our BigQuery Reservations platform. BigQuery Reservations give intelligence-hungry enterprises the control necessary to enable their organizations with a powerful tool like BigQuery while minimizing fiscal and security risks:

With Reservations, administrators can centrally decide who in their organization can make purchasing decisions, neutralizing the fear of shadow IT.
Users can manage and predict their organizations' BigQuery spend and conformance to fixed budgets.
Administrators can optionally manage how their departments, teams, and workloads get access to BigQuery in order to meet their specific analytics needs.

Flex Slots offer BigQuery users an unparalleled level of flexibility: purchase slots for short bursts to complement your steady-state workloads.

Getting started with Flex Slots

Flex Slots are rolling out as we speak and will be available in the coming days in the BigQuery Reservations UI. You can purchase Flex Slots alongside monthly and annual commitment types, with the added benefit of being able to cancel them at any time after the first 60 seconds. To get started right away, try the BigQuery sandbox. If you are thinking about migrating to BigQuery from other data warehouses, check out our data warehouse migration offer.

Learn more about:

Flex Slots documentation
BigQuery flat-rate pricing documentation
Reservations Quickstart guide
Reservations documentation
What is a BigQuery slot? Documentation
Choosing between on-demand and flat-rate pricing models
Estimating the number of slots to purchase
Guide to workload management with Reservations
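If you prefer the command line to the Reservations UI, the lifecycle of a short-lived commitment could look roughly like the sketch below. This is a hypothetical sketch: it assumes the bq command-line tool's capacity commitment flags are available in your environment, and the project ID, location, slot count, and commitment ID are placeholders; check the Flex Slots documentation listed above for the exact commands.

# purchase a Flex Slots commitment (billed per second, cancellable after 60 seconds)
bq mk --project_id=my-project --location=US --capacity_commitment --plan=FLEX --slots=500

# list active commitments to find the commitment ID
bq ls --project_id=my-project --location=US --capacity_commitment

# cancel the commitment once the burst workload is done
bq rm --project_id=my-project --location=US --capacity_commitment COMMITMENT_ID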
Source: Google Cloud Platform

BigQuery leads the way toward modern data analytics

At Google, we think you should have the right tools and support to let you embrace data growth. Enterprises and digital-native organizations are generating incredible value from their data using Google Cloud's smart analytics platform. At the heart of the platform is BigQuery, a cloud-native enterprise data warehouse. BigQuery helps organizations develop and operationalize massively scalable, data-driven intelligent applications for digital transformation.

Enterprises are modernizing with BigQuery to unlock blazing-fast business insights

Businesses are using BigQuery to run mission-critical applications at scale in order to optimize operations, improve customer experiences, and lower total cost of ownership (TCO). We have customers running queries on massive datasets, as large as 100 trillion rows, and others running more than 10,000 concurrent queries across their organization. We're seeing adoption across regions and industry verticals, including retail, telecommunications, financial services, and more.

Wayfair is one example of a retailer that was looking to scale its growing $8 billion global business while providing a richer experience for its 19 million active customers, 6,000 employees, and 11,000 suppliers. By moving to BigQuery, Wayfair can now make real-time decisions, from merchandising and personalized customer experiences to marketing and promotional campaigns. Wayfair's data-driven approach provides the company with valuable and actionable insights across every part of the business. And they're now able to seamlessly fulfill millions of transactions during peak shopping seasons.

Financial services company KeyBank is migrating to BigQuery for scalability and reduced costs compared to its on-prem data warehouse. "We are modernizing our data analytics strategy by migrating from an on-premises data warehouse to Google's cloud-native data warehouse, BigQuery," says Michael Onders, chief data officer at KeyBank. "This transformation will help us scale our compute and storage needs seamlessly and lower our overall total cost of ownership. Google Cloud's smart analytics platform will give us access to a broad ecosystem of data transformation tools and advanced machine learning tools so that we can easily generate predictive insights and unlock new findings from our data."

Other customers finding success with our smart analytics tools are Lowe's, Sabre, and Lufthansa. They're all modernizing their data analytics strategies and transforming their businesses to remain competitive in a changing data landscape.

Product innovation is simplifying migrations and improving price predictability

We are continuing to make it easy to modernize your data warehouse with BigQuery. The new product capabilities we're announcing are helping customers democratize advanced analytics, be assured of price predictability, and simplify migrations at scale.

Simplifying migrations at scale: We're helping customers fast-track data warehouse migrations to BigQuery with the general availability of Redshift and S3 migration tools. Customers can now seamlessly migrate from Amazon Redshift and Amazon S3 right into BigQuery with BigQuery Data Transfer Service. Customers such as John Lewis Partnership, Home Depot, Reddit, and Discord have all accelerated business insights with BigQuery by freeing themselves of the performance and analytics limitations of their Teradata and Redshift environments. "Migrating from Redshift to BigQuery has been game-changing for our organization," says Spencer Aiello, tech lead and manager, machine learning at Discord. "We've been able to overcome performance bottlenecks and capacity constraints as well as fearlessly unlock actionable insights for our business."

Offering enterprise readiness and price predictability: Enterprise customers need price predictability to do accurate forecasting and planning. We recently launched Reservations for workload management, and today we're pre-announcing beta availability of BigQuery Flex Slots, which enable customers to instantly scale their BigQuery data warehouse up and down to meet analytics demands without sacrificing price predictability. With Flex Slots, you can now purchase BigQuery commitments for short durations—as little as seconds at a time. This lets organizations instantly respond to rapid demand for analytics and plan for major business events, such as retail holidays and game launches. Learn more about Flex Slots here. We're also pre-announcing the beta availability of column-level access controls in BigQuery. With BigQuery column-level security, access policies can be applied not just at the data container level, but also to the meaning and content of the data in your columns across your enterprise data warehouse. Finally, we now support unlimited DML/DDL statements on a table in BigQuery—find more details here.

Democratizing advanced analytics: We're making advanced analytics even more accessible to users across an organization. We're excited to announce that BigQuery BI Engine is becoming generally available. Customers can analyze large and complex datasets interactively with sub-second query response time and high concurrency for interactive dashboarding and reporting. One Fortune 500 global media outlet using BI Engine summarized it well: "To deliver timely insights to our editors, journalists and management, it's important we answer questions quickly. Once we started using BigQuery BI Engine, we saw an immediate performance boost with our existing Data Studio dashboards—everyone's drilling down, filtering, and exploring data at so much faster a pace." Learn more here.

All these product innovations and more are helping customers jump-start their digital transformation journeys with ease.

Our cohesive partner ecosystem creates a strong foundation

We're making deep investments in our partner ecosystem and working with global and regional system integrators (GSIs) and other tech partners to simplify migrations across the planning phase, offer expertise, and make go-to-market delivery easier. GSI partners such as Wipro, Infosys, Accenture, Deloitte, Capgemini, Cognizant, and more have dedicated centers of excellence and Google Cloud partner teams. These teams are committed to defining and executing on a joint business plan, and have built end-to-end migration programs, accelerators, and services that are streamlining the modernization path to BigQuery. The Accenture Data Studio, Infosys Migration Workbench (MWB), and Wipro's GCP Data and Insights Migration Studio are all examples of partner solutions that can help modernize your analytics landscape by supporting migrations at scale.

Partners are essential for many cloud migration journeys. "Enterprises today are seeking to be data-driven as they navigate their digital journey," says Satish H.C., EVP, data analytics at Infosys. "For our clients, we enable this transformation with our solutions like Digital Brain, Information Grid, Data Marketplace and Next Gen Analytics platform, powered by Google-native technologies like BigQuery, BigQuery ML, AI Platform and Cloud Functions."

"We are excited to be partnering with Google Cloud to help streamline data warehouse migrations to BigQuery so that organizations can unlock the full potential of their data," says Sriram Anand, managing director, North America lead for data engineering at Accenture. "As our clients are managing increasingly fast-changing business needs, they are looking for ways to scale up to petabytes of data on-demand without performance disruptions and run blazing-fast queries to drive business innovation."

Tech partners are also core to our data warehouse modernization solution. With Informatica, customers can easily and securely migrate data and its schema from their on-prem applications and systems into BigQuery. Datometry and CompilerWorks both help customers migrate workloads without having to rewrite queries. Datometry eliminates the need to rewrite queries by converting the incoming request into the target dialect on the fly, while CompilerWorks converts queries' source-dialect SQL into target-dialect SQL. Along with their core offerings, these tech partners have also developed additional migration accelerators.

We're also happy to announce that SADA, a leading global business and technology consultancy and managed service provider, has signed a multi-year agreement with Google Cloud. They will be introducing new solutions to help organizations modernize data analytics and data warehousing with Google Cloud, including support for Netezza, Teradata, and Hadoop migrations to BigQuery. These solutions offer a shorter time to value on new releases, expedite decision making with data-driven insights, and allow customers to focus more on innovation. Learn more here.

Making the BigQuery move

We're seeing this momentum across our smart analytics portfolio as industry analysts such as Gartner and Forrester have recognized Google Cloud as a Leader in five new analyst reports over the past year, including the new Forrester Wave™: Data Management for Analytics, Q1 2020. These launches, updates, and new migration options are all designed to help businesses digitally transform their operations. Try the BigQuery sandbox to get started with BigQuery right away. Jumpstart your modernization journey with the data warehouse migration offer, and get expert design guidance and tools, partner solutions, and funding support to expedite your cloud migration.
Source: Google Cloud Platform

Government websites: Coronavirus information loads only slowly

Many citizens currently want to get information from government agencies about the latest coronavirus cases. But anyone trying to consult the health ministry, the Robert-Koch-Institut, or other responsible bodies finds that their websites load only slowly, and sometimes not at all. (Coronavirus, Internet)
Source: Golem

Enterprise Kubernetes with OpenShift (Part one)

The question “What’s the difference between Kubernetes and OpenShift?” comes up every now and then, and it is quite like asking: “What’s the difference between an engine and a car?”
To answer the latter: a car is a product that immediately makes you productive; it is ready to get you where you want to go. The engine, on the other hand, won't get you anywhere on its own unless you assemble it with other essential components that, in the end, form a … car.
As for the first question, in essence, you can think of Kubernetes as the engine that drives OpenShift, and OpenShift as the complete car (hence, platform) that gets you where you want to go.
This question comes up every now and then, so the goal of this blog post is to remind you that:

at the heart of OpenShift IS Kubernetes, and that it is a 100% certified Kubernetes, fully open source and non-proprietary, which means:

The API to the OpenShift cluster is 100% Kubernetes.
Nothing changes between a container running on any other Kubernetes and running on OpenShift. No changes to the application.

OpenShift brings added-value features to complement Kubernetes; that is what makes it a turnkey platform, readily usable in production, that significantly improves the developer experience, as will be shown throughout this post. That is what makes it both the successful enterprise Platform-as-a-Service (PaaS) everyone knows from a developer perspective, and a very reliable Container-as-a-Service from a production standpoint.

OpenShift IS Kubernetes, 100% Certified by the CNCF
 
Certified Kubernetes is at the core of OpenShift. Users of `kubectl` love its power once they are done with the learning curve. Users transitioning from an existing Kubernetes cluster to OpenShift frequently point out how much they love redirecting their kubeconfig to an OpenShift cluster and having all of their existing scripts work perfectly.
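As a minimal sketch of that workflow (the API URL and token below are placeholders): log in once with oc, and your existing kubectl scripts keep working against the same kubeconfig.

# oc login writes the cluster, credentials, and context into your kubeconfig
$> oc login https://api.cluster.example.com:6443 --token=<token>

# plain kubectl now talks to the OpenShift cluster
$> kubectl get nodes
$> kubectl get pods --all-namespaces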
 
You may have heard of the OpenShift CLI tool called `oc`. This tool is command-compatible with `kubectl`, but adds a few extra helpers to get your job done. But first, let's see how oc is just kubectl:

kubectl commands                        oc commands
kubectl get pods                        oc get pods
kubectl get namespaces                  oc get namespaces
kubectl create -f deployment.yaml       oc create -f deployment.yaml
 
Here are the results of using kubectl commands against an OpenShift API:

kubectl get pods => well, it returns… pods
kubectl get namespaces => well, it returns… namespaces
kubectl create -f mydeployment.yaml => it creates the Kubernetes resources, just like it would on any other Kubernetes platform
In other words, the Kubernetes API is fully exposed in OpenShift and is 100% compliant with upstream. That's why OpenShift is a CNCF-certified Kubernetes distribution.
 
OpenShift brings added-value features to complement Kubernetes
 
While the Kubernetes API is 100% accessible within OpenShift, the kubectl command line lacks many features that could make it more user-friendly. That's why Red Hat complements Kubernetes with a set of features and command-line tools such as oc (the OpenShift client) and odo (OpenShift Do, targeting developers).
1 – “oc” complements “kubectl” with extra power and simplicity
oc, for instance, is the OpenShift command line. It adds several features over kubectl, such as the ability to create new namespaces and easily switch contexts, as well as commands for developers, such as building container images and deploying applications directly from source code or binaries (the Source-to-Image process, or s2i).
Let’s take a look at a few instances of where oc has built-in helpers and additional functionality to make your day to day life easier.
First example: namespace management. Every Kubernetes cluster has multiple namespaces, usually to provide environments from development to production, but also to give each developer a sandbox environment, for instance. This means you're going to switch between them frequently, since kubectl commands are contextual to your namespace. If you're using kubectl, you will frequently see folks use helper scripts to do this, but with oc you just run oc project foobar to switch to the foobar namespace.
Can't remember your namespace name? Just list your namespaces with oc get projects. What if you only have access to a subset of the namespaces on the cluster? That command should still list them, right? Not so with kubectl, unless you have RBAC permission to list all namespaces on the cluster, which is rarely granted on larger clusters. With oc, however, you easily get a list of your namespaces. It's a small way in which OpenShift is enterprise-ready and designed to scale with both your human users and your applications.
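A minimal sketch of the difference (the namespace name foobar is just an example):

# kubectl: switching namespaces typically means rewriting the current context
$> kubectl config set-context --current --namespace=foobar
# and listing namespaces needs cluster-wide list permission
$> kubectl get namespaces

# oc: one short command to switch, and listing is scoped to the projects you can actually see
$> oc project foobar
$> oc get projects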
 
2 – ODO improves the developer experience over kubectl
Another tool that Red Hat ships with OpenShift is odo, a command line that streamlines the developer experience. It lets developers quickly deploy local code to a remote OpenShift cluster and provides an efficient inner loop in which code changes are instantly synced to the running container on the remote cluster, avoiding the burden of rebuilding the image, pushing it to a registry, and deploying it again.
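Before the fuller comparison below, here is a minimal sketch of that inner loop (the component and URL names are illustrative, and flags may differ between odo releases):

# create a Node.js component from the code in the current directory and deploy it
$> odo create nodejs myapp
$> odo push

# expose it, then keep local changes continuously synced into the running container
$> odo url create myapp-url --port 3000
$> odo watch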
Here are a few examples where the oc or odo command makes life easier when working with containers and Kubernetes.
In the following section, let’s compare a kubectl-based workflow to using oc or odo.

Deploying code to OpenShift without being a YAML native-speaker:

Kubernetes / kubectl

$> git clone https://github.com/sclorg/nodejs-ex.git
1- Create a Dockerfile that builds the image from code
--------------
FROM node
WORKDIR /usr/src/app
COPY package*.json ./
COPY index.js ./
COPY ./app ./app
RUN npm install
EXPOSE 3000
CMD [ "npm", "start" ]
--------------
2- Build the image
$> podman build …
3- Log in to a registry
$> podman login …
4- Push the image to a registry
$> podman push
5- Create the YAML files that will help deploy the app (deployment.yaml, service.yaml, and ingress.yaml are the bare minimum; see the note after this comparison)
6- Deploy the manifest files:
$> kubectl apply -f .

OpenShift / oc

$> oc new-app https://github.com/sclorg/nodejs-ex.git --name myapp

OpenShift / odo

$> git clone https://github.com/sclorg/nodejs-ex.git
$> odo create component nodejs myapp
$> odo push
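If you do go the kubectl route, step 5 above does not have to be written entirely by hand. As a rough sketch (the image reference and port are placeholders, and --dry-run=client assumes a reasonably recent kubectl), starter manifests can be generated and then edited:

# generate a starting deployment.yaml instead of writing it from scratch
$> kubectl create deployment myapp --image=registry.example.com/myapp:latest --dry-run=client -o yaml > deployment.yaml

# derive a matching service.yaml from that deployment definition
$> kubectl expose -f deployment.yaml --port=3000 --dry-run=client -o yaml > service.yaml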

 

Switching contexts: changing working namespace or working cluster

Kubernetes / kubectl

1- Create a context in kubeconfig for "myproject"
2- kubectl config use-context …

OpenShift / oc

oc project "myproject"
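For comparison, a rough sketch of what the kubeconfig route typically involves versus the oc equivalent (server URL, token, and names below are placeholders):

# kubectl: describe the cluster, the credentials, and a context, then switch to it
$> kubectl config set-cluster mycluster --server=https://api.cluster.example.com:6443
$> kubectl config set-credentials myuser --token=<token>
$> kubectl config set-context myproject-ctx --cluster=mycluster --user=myuser --namespace=myproject
$> kubectl config use-context myproject-ctx

# oc: log in once, then switch projects (namespaces) with a single command
$> oc login https://api.cluster.example.com:6443 --token=<token>
$> oc project myproject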

 
 
Quality assurance process: “I have coded a new alpha feature, should we ship it to production?”
When you try a prototype car and the guy says: “I’ve put in some new types of brakes, honestly I’m not sure if they’re safe yet… but GO AHEAD and try it!”, do you blindly do so? I guess NO, and we feel the same way at Red Hat :)
That's why we may hold back alpha features until they mature, until we have battle-tested them during our qualification process and feel they are safe to use. Usually a feature goes through a Dev Preview phase, then a Tech Preview phase, then a General Availability phase once it is stable enough for production.
Why is that? Because, as in any other software craft, some initial concepts in Kubernetes might never make it into a final release, or they might eventually land with a very different implementation than what was initially delivered as an alpha feature. Because Red Hat supports more than a thousand customers running business-critical workloads on OpenShift, we believe in delivering a stable and long-supported platform.
Red Hat actively delivers frequent OpenShift releases and updates the Kubernetes version within OpenShift. For instance, OpenShift 4.3, the current GA release, embeds Kubernetes 1.16, just one version behind upstream Kubernetes 1.17; this is on purpose, in order to deliver production-grade Kubernetes and do extra quality assurance within the OpenShift release cycle.
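A quick way to check this on your own cluster: oc version reports the oc client version, the OpenShift server version, and the embedded Kubernetes version.

$> oc version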
 
The Kubernetes escalation flaw: “There is a critical Kubernetes bug in Production, do I need to upgrade all my production up to 3 releases to get the fix?”
In the Kubernetes upstream project, fixes are usually delivered in the next release, and sometimes backported to the previous one or two minor releases, spanning a time frame of roughly six months.
Red Hat has a proven track record of fixing critical bugs earlier than others, and over a much longer time frame. Take the Kubernetes privilege escalation flaw (CVE-2018-1002105), which was discovered in Kubernetes 1.11 and fixed upstream only as far back as 1.10.11, leaving all earlier Kubernetes 1.x releases, from 1.9 back, exposed to the flaw.
By contrast, Red Hat patched OpenShift all the way back to version 3.2 (based on Kubernetes 1.2), spanning nine OpenShift releases, showing that it actively supports its customers in these difficult situations. (See this blog for further information.)
 
Kubernetes upstream benefits from OpenShift and Red Hat’s contributions to code
Red Hat is the second largest code contributor to Kubernetes, behind Google, and currently employs three of the top five Kubernetes code contributors. What is less well known is that many critical features of upstream Kubernetes were contributed by Red Hat. Some major examples are:

RBAC: for some time, Kubernetes didn't implement RBAC features (ClusterRole, ClusterRoleBinding), until Red Hat engineers decided to implement them in Kubernetes itself rather than as an added-value feature of OpenShift (see the minimal RBAC sketch just below). Is Red Hat afraid of improving Kubernetes? Of course not; that's what makes it Red Hat and not just any other open-core software provider. Improvements made in the upstream communities mean more sustainability and broader adoption, which ultimately is the goal: making these open source projects drive benefits for customers.
Pod Security Policies: Initially, these concepts, which allow secure execution of applications within pods, existed in OpenShift as Security Context Constraints (SCC). Again, Red Hat decided to contribute them upstream, and now everyone using Kubernetes or OpenShift benefits from them.
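Here is that minimal RBAC sketch: the two objects mentioned above, created imperatively with kubectl (role, binding, and user names are illustrative):

# a cluster-wide role that can only read pods
$> kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods

# bind that role to a user across the whole cluster
$> kubectl create clusterrolebinding read-pods --clusterrole=pod-reader --user=alice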

There are many more examples, but these are simple illustrations that Red Hat is committed to make Kubernetes an even more successful project.
 
So now the real question: what is the difference between OpenShift and Kubernetes? :)
 
By now, I hope you understand that Kubernetes is a core component of OpenShift, but nonetheless ONE component among MANY others. Just installing Kubernetes is not enough to have a production-grade platform: you'll need to add authentication, networking, security, monitoring, logs management, and more. That means you will also have to pick your tools from everything available (see the CNCF landscape to get an idea of the complexity of the ecosystem), maintain the cohesion of all of them as a whole, and do updates and regression tests whenever there is a new version of one of these components. In the end, you are turning into a software vendor, except that you are spending effort on building and maintaining a platform rather than investing in the business value that will differentiate you from your competitors.
With OpenShift, Red Hat has decided to shield you from this complexity and deliver a comprehensive platform, including not only Kubernetes at its core, but also all the essential open source tools that make it an enterprise-ready solution to confidently run your production. Of course, if you already have your own stack, you can opt out and plug in your existing solutions.
OpenShift – a smarter Kubernetes Platform
 
Let’s look at Figure 1: surrounding Kubernetes are all the areas where Red Hat adds features that are not in Kubernetes by design, among which:
1- A trusted OS foundation: RHEL CoreOS or RHEL
Red Hat has been the leading provider of Linux for business-critical applications for over 20 years and puts that experience into delivering a SOLID and TRUSTED foundation for running containers in production. RHEL CoreOS shares the same kernel as RHEL but is optimized for running containers and managing Kubernetes clusters at scale: it has a smaller footprint, and its immutable nature makes it easier to install clusters and adds auto-scaling and auto-remediation for workers, among other things. All these features make it the perfect foundation for delivering the same OpenShift experience anywhere, from bare metal to private and public clouds.
2- Automated operations
Automated installation and day-2 operations are key OpenShift features that make the platform easier to administer and upgrade, providing a first-class container platform. The use of operators at the core of the OpenShift 4 architecture is a strong foundation that makes this possible.
OpenShift 4 also includes an extremely rich ecosystem of operator-based solutions, developed by Red Hat and by third-party partners (see the operator catalog for Red Hat hosted operators, or operatorhub.io, a public marketplace created by Red Hat, for community operators too).
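One way to browse the integrated catalog from the command line on a running OpenShift 4 cluster (assuming the default Operator Lifecycle Manager and marketplace setup):

# list the operators exposed by the catalog sources installed in the cluster
$> oc get packagemanifests -n openshift-marketplace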

OpenShift 4 gives you access to over 180 operators from the integrated catalog
 
3- Developer services
Since 2011, OpenShift has been a PaaS, or Platform-as-a-Service, meaning its goal is to make developers' daily lives easier, allowing them to focus on delivering code, with out-of-the-box support for languages such as Java, Node.js, PHP, Ruby, Python, and Go, and for services like CI/CD, databases, and more. OpenShift 4 offers a rich catalog of over 100 services delivered through operators, either by Red Hat or by our strong ecosystem of partners.
OpenShift 4 also adds a graphical UI (the developer console) dedicated to developers, allowing them to easily deploy applications to their namespaces from different sources (git repositories, external registries, Dockerfiles…) and providing a visual representation of the application components that shows how they interact.
The developer console shows the components of your application and eases interaction with Kubernetes
 
In addition, OpenShift provides the CodeReady family of developer tools, such as CodeReady Workspaces, a fully containerized web IDE that runs on top of OpenShift itself, providing an IDE-as-a-service experience. Developers who still want to run everything on their laptop can rely on CodeReady Containers, an all-in-one OpenShift 4 cluster running on the laptop.

The integrated webIDE-as-a-service allows you to efficiently develop on Kubernetes/OpenShift
 
OpenShift also offers advanced CI/CD features out of the box, such as a containerized Jenkins with a DSL to accelerate writing your pipelines, or Tekton (now in Tech Preview) for a more Kubernetes-native CI/CD experience. Both solutions integrate natively with the OpenShift console, allowing you to trigger pipelines, view deployments, check logs, and so on.
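As a small sketch, assuming the jenkins-ephemeral template that ships with OpenShift is available in your cluster, a containerized Jenkins can be stood up in the current project with a single command:

# deploy a throwaway Jenkins instance (ephemeral storage) into the current project
$> oc new-app jenkins-ephemeral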
4- Application services
OpenShift allows you to deploy traditional stateful applications alongside cutting-edge cloud-native applications, supporting modern architectures such as microservices and serverless. In fact, OpenShift Service Mesh provides Istio, Kiali, and Jaeger out of the box to support your adoption of microservices. OpenShift Serverless includes Knative, as well as joint initiatives with Microsoft such as KEDA to provide Azure Functions on top of OpenShift.

The integrated OpenShift ServiceMesh (Istio, Kiali, Jaeger) helps you with microservices development
 
To reduce the gap between legacy applications and containers, OpenShift now even allows you to migrate your legacy virtual machines to OpenShift itself using Container Native Virtualization (now in Tech Preview), making hybrid applications a reality and easing portability across clouds, both private and public.

A Windows 2019 Virtual Machine running natively on OpenShift with Container Native Virtualization (currently in Tech preview)
 
 
5- Cluster Services
Every enterprise-grade platform requires supporting services like monitoring, centralized logging, security mechanisms, authentication and authorization, and network management. OpenShift provides all of these out of the box, using open source solutions like Elasticsearch, Prometheus, and Grafana. These solutions come packed with pre-built dashboards, metrics, and alerts drawn from Red Hat's experience in monitoring clusters at scale, giving you the most important information about your production environment right away.
OpenShift also adds essential enterprise services such as authentication with a built-in OAuth provider and integration with identity providers such as LDAP, Active Directory, OpenID Connect, and so on.
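A quick way to see these cluster services on a running OpenShift 4 cluster (the namespace and resource names below assume the OpenShift 4 defaults):

# the out-of-the-box monitoring stack: Prometheus, Alertmanager, Grafana, and friends
$> oc -n openshift-monitoring get pods

# the cluster-wide OAuth configuration, where identity providers are declared
$> oc get oauth cluster -o yaml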

Out-of-the-box Grafana dashboards allow you to monitor your OpenShift cluster
 

Out-of-the-box Prometheus metrics and alerting rules (150+) allow you to monitor your OpenShift cluster
What’s next?
This rich set of features and the deep expertise Red Hat has in the Kubernetes ecosystem are the reason why OpenShift has a significant head start over other solutions in the market, as we can see in the following figure (see this article for more information).
 

“So far, Red Hat stands out as the market leader with 44 percent market share.
The company is reaping the fruit of its hands-on sales strategy, where they consult and train enterprise developers first and then monetize once the enterprise deploys containers in production.”
(source: https://www.lightreading.com/nfv/containers/ihs-red-hat-container-strategy-is-paying-off/d/d-id/753863)
 
I hope you enjoyed this first part of a series to come, where I will be discussing the benefits that OpenShift adds on top of Kubernetes in every one of these categories.
 
 
Source: OpenShift