Introducing Multi-Cloud Object Gateway for OpenShift

The Multi-Cloud Object Gateway is a new data federation service introduced in OpenShift Container Storage 4.2. The technology is based on the NooBaa project, which Red Hat acquired in November 2018 and recently open sourced. More information can be found at https://github.com/noobaa/noobaa-operator.
The Multi-Cloud Object Gateway exposes an object interface with an S3-compatible API. The service is deployed automatically as part of OpenShift Container Storage 4.2 and provides the same functionality regardless of its hosting environment.
Simplicity: A Single Experience Anywhere
In its default deployment, the Multi-Cloud Object Gateway provides a local object service backed by local storage, or by cloud-native storage when hosted in the cloud. Every data bucket on the Multi-Cloud Object Gateway uses this default backing store, so no additional configuration is required.
The Multi-Cloud Object Gateway’s object service API is always an S3 API, which means a single experience on-premises and in the cloud, for any cloud provider. This translates to a zero learning curve when moving to, or adding, a new cloud vendor, and into greater agility for your teams.
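Because the endpoint is S3-compatible, existing S3 tooling works unchanged. As a minimal sketch (the route hostname and bucket name are placeholders, and the credentials are assumed to come from an Object Bucket Claim secret as described later in this post), the AWS CLI can simply be pointed at the gateway’s endpoint:

$ aws --endpoint-url https://s3-openshift-storage.apps.example.com s3 ls s3://my-bucket

The same command works against the gateway wherever it runs, on-premises or in any cloud.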
Elasticity
The administrator can add multiple backing stores and apply mirroring policies to create hybrid and multi-cloud data buckets, using cloud-native storage providers, on-prem storage providers, or both. Each bucket can have its own data placement policy, which can be changed over time to support the changing needs of applications and environments.
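As a rough sketch of what such a placement policy can look like, the NooBaa operator exposes BucketClass resources that tie buckets to one or more backing stores. The example below mirrors data across two backing stores; the resource names are illustrative and field names may differ slightly between operator versions:

apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: mirror-bucket-class        # hypothetical name
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - placement: Mirror             # mirror data across all listed stores
      backingStores:
      - noobaa-default-backing-store
      - aws-s3-backing-store        # hypothetical cloud backing store

Buckets created against such a bucket class inherit its placement policy, so applications keep using plain S3 calls while the gateway handles the mirroring.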

Integrated Monitoring and Management
The Multi-Cloud Object Gateway leverages the power of Kubernetes Operators to automate complex workflows such as deployment, bootstrapping, configuration, provisioning, scaling, upgrading, monitoring, and resource management. It is integrated into the OpenShift storage dashboard to provide an instant view of current object usage, alerts, and resource allocations.

If object services are impacted, the Multi-Cloud Object Gateway Operator will actively perform healing and recovery as needed to ensure data is resilient and available to users. There is no need for the administrator to enable healing operations, set up jobs to rebalance or redistribute the data, or even upgrade the storage services. For administrators concerned with automatic upgrades, the OpenShift Container Storage Operator can also be configured for manual upgrades to meet organizational maintenance policies or considerations.
Object Provisioning Made Easy
OpenShift Container Storage supports persistent volume claims for block and file-based storage. In addition, it introduces the Object Bucket Claim (OBC) and Object Bucket (OB) concepts, which take inspiration from Persistent Volume Claims (PVC) and Persistent Volumes (PV).
This generic, dynamic bucket provisioning API is modeled on Persistent Volumes and Persistent Volume Claims, so users familiar with the PVC/PV model can handle bucket provisioning with a similar pattern.
Applications that require an object bucket will create an Object Bucket Claim (OBC) and refer to the object storage class name.
Example:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: obc-test
spec:
  generateBucketName: obc-test-noobaa
  storageClassName: openshift-storage.noobaa.io

You can use oc to create the Object Bucket Claim from this manifest. The object bucket claim creates an object bucket and an account with new credentials.

Use oc to confirm that the Object Bucket and accompanying Object Bucket Claim were created:
$ oc get objectbucket
NAME STORAGE-CLASS CLAIM-NAMESPACE CLAIM-NAME RECLAIM-POLICY PHASE AGE
obc-test-obc-test openshift-storage.noobaa.io obc-test Delete Bound 80s

After creating the Object Bucket Claim, the following Kubernetes resources are created:
An Object Bucket, which contains the bucket endpoint information, a reference to the Object Bucket Claim, and a reference to the storage class.
A ConfigMap in the same namespace as the Object Bucket Claim, which contains connection information such as the endpoint host, port, and bucket name, to be used by applications that consume the object service.
A Secret in the same namespace as the OBC, which contains the access key and secret key needed to access the bucket.
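You can inspect both resources directly; in the obc-test example, the ConfigMap and Secret carry the claim’s name (as the Job manifest below also assumes):

$ oc get configmap obc-test -o yaml
$ oc get secret obc-test -o yaml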
This information can be consumed through environment variables. The following YAML is an example of a Job that uses the Object Bucket Claim and reads the connection details from the ConfigMap and Secret into environment variables:

apiVersion: batch/v1
kind: Job
metadata:
  name: obc-test
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - image: quay.io/etamir/training:latest
        name: obc-test
        env:
        - name: BUCKET_NAME
          valueFrom:
            configMapKeyRef:
              name: obc-test
              key: BUCKET_NAME
        - name: BUCKET_HOST
          valueFrom:
            configMapKeyRef:
              name: obc-test
              key: BUCKET_HOST
        - name: BUCKET_PORT
          valueFrom:
            configMapKeyRef:
              name: obc-test
              key: BUCKET_PORT
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: obc-test
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: obc-test
              key: AWS_SECRET_ACCESS_KEY
        - name: AWS_DEFAULT_REGION
          value: "us-east-1"
        volumeMounts:
        - name: training-persistent-storage
          mountPath: /data
      volumes:
      - name: training-persistent-storage
        emptyDir: {}
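Inside the Job’s container, any S3 client can use these variables directly. As an illustrative sketch (it assumes the container image ships the AWS CLI and that a file exists under /data, neither of which is shown here), the injected values could be consumed like this:

$ aws --endpoint-url "http://${BUCKET_HOST}:${BUCKET_PORT}" s3 cp /data/report.csv "s3://${BUCKET_NAME}/report.csv"
$ aws --endpoint-url "http://${BUCKET_HOST}:${BUCKET_PORT}" s3 ls "s3://${BUCKET_NAME}/"

The AWS CLI picks up AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment automatically, so no additional credential configuration is required.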

Security First
The Multi-Cloud Object Gateway provides multiple solutions for security concerns out of the box. 

Data encryption by default – every write operation is split into multiple chunks, and each chunk is encrypted with a new key. 
Key management separation from data – all keys are managed in a centralized location, separated from the encrypted chunks of data, regardless of where the data lives: in the cloud, on-prem, or a mixture for hybrid and multi-cloud deployments. 
Data isolation – by default, every object bucket claim creates a new account with new credentials that can access only its own new bucket, and any buckets it creates are accessible only to that account. 

 
 
Resources and Feedback
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
The post Introducing Multi-Cloud Object Gateway for OpenShift appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

How a hybrid workforce can save up to 20 hours a month

How productive would your company employees be if they could save two hours a day on regular tasks?
With the growth and evolution of today’s digital economy, companies face the challenge of managing increasingly complex business processes that involve massive amounts of data. This has also led to repetitive work, such as employees manually performing data-intensive tasks even though technologies are available that could automate those tasks and free up their time.
According to a WorkMarket report, 53 percent of employees believe they could save up to two hours a day by automating tasks; that equates to roughly 20 hours a month. Working on tasks that could easily be automated is probably not the best use of employees’ time, especially if your business is trying to improve productivity or customer service.
How automation and RPA bots can help improve social welfare
Let’s look at Ana, who is a social worker focused on child welfare and is entrusted with the safety and well-being of children. Like most employees, Ana does whatever it takes to get the job done. Her dilemma is that she spends up to 80 percent of her time executing repetitive, administrative tasks, such as typing handwritten notes and forms into agency systems or manually requesting verifications or background checks from external systems. This leaves only around 20 percent for client-facing activities, which is too low to improve long-term client outcomes.
Can automation make an immediate impact on the well-being of our children and improve the efficiency of the child welfare workers charged with their safety? Simply put, the answer is yes.
Social workers can shift their focus back to the important work they do with the help of proven automation technologies. Combining automation capabilities or services, such as automating tasks with robotic process automation (RPA) bots, extracting and classifying data from documents, and automating decisions, can make a significant and positive impact across the entire social services industry. Watch the video below to see how automation creates more time for child welfare workers to focus on helping vulnerable children by automating repetitive administrative work.
 

 
As you can see from the above video, Ana is able to offload a number of her repetitive, routine and administrative tasks to a bot, freeing her to spend more time and effort towards improving the lives of children. The intent of bots is to augment human worker roles for optimal work-effort outcomes, not replace them.
How hybrid workforce solutions help bring freedom
In the future of work, a hybrid workforce will emerge. In this hybrid workforce, bots will work seamlessly alongside human counterparts to get work done more efficiently and deliver exceptional experiences to both customers and employees. The hybrid workforce of the future will allow human employees to focus on inherent human strengths (for example, strategy, judgment, creativity and empathy).
We’ve been enabling IBM Cloud Pak for Automation, our automation software platform for digital business, to interoperate with more RPA solutions. This interoperability gives clients greater freedom of choice to execute according to their business objectives. Our newest collaboration is with Blue Prism, a market-leading RPA vendor.
While our customers are increasingly seeking RPA capabilities to complement digital transformation efforts, Blue Prism customers are building out capabilities to surround their RPA initiatives — including artificial intelligence (AI), machine learning, natural language processing, intelligent document processing and business process management.
To enable greater interoperability between automation platforms, IBM and Blue Prism jointly developed API connectors, available on Blue Prism’s Digital Exchange (DX). These API connectors will help customers seamlessly integrate Blue Prism RPA task automation technology with three key IBM Digital Business Automation platform capabilities: Workflow, Data Capture and Decision Management.
This technical collaboration offers clients an automation solution for every style of work, from immediately automating small-scale processes for efficiency and rapid return on investment (ROI) all the way to achieving a larger digital labor outcome through multiple types of automation.
Read the no-hype RPA Buyer’s Guide to learn how you can extend the value of your RPA investment by using an automation platform to establish new ways of working, maximize the expertise of your employees, lower operational costs and improve experiences for your employees.
 
The post How a hybrid workforce can save up to 20 hours a month appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

Introducing Red Hat OpenShift 4.3 to Enhance Kubernetes Security

Today Red Hat announces the general availability of Red Hat OpenShift 4.3, the newest version of the industry’s most comprehensive enterprise Kubernetes platform. With security a paramount need for nearly every enterprise, particularly for organizations in the government, financial services and healthcare sectors, OpenShift 4.3 delivers FIPS (Federal Information Processing Standard) compliant encryption and additional security enhancements to enterprises across industries. Combined, these new and extended features can help protect sensitive customer data with stronger encryption controls and improve the oversight of access control across applications and the platform itself. 
This release also coincides with the general availability of Red Hat OpenShift Container Storage 4, which offers greater portability, simplicity and scale for data-centric Kubernetes workloads.
Encryption to strengthen the security of containerized applications on OpenShift
As a trusted enterprise Kubernetes platform, the latest release of Red Hat OpenShift brings stronger platform security that better meets the needs of enterprises and government organizations handling extremely sensitive data and workloads with FIPS (Federal Information Processing Standard) compliant encryption (FIPS 140-2 Level 1). FIPS validated cryptography is mandatory for US federal departments that encrypt sensitive data. When OpenShift runs on Red Hat Enterprise Linux booted in FIPS mode, OpenShift calls into the Red Hat Enterprise Linux FIPS validated cryptographic libraries. The go-toolset that enables this functionality is available to all Red Hat customers. 
OpenShift 4.3 brings support for encryption of etcd, which provides additional protection for secrets at rest. Customers have the option to encrypt sensitive data stored in etcd, providing better defense against malicious parties attempting to gain access to data such as secrets and config maps.
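As a minimal sketch of how this is enabled (based on the cluster-scoped APIServer configuration resource; consult the product documentation for the authoritative procedure), an administrator edits the APIServer resource and sets an encryption type:

$ oc edit apiserver
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: aescbc   # encrypt resources such as Secrets and ConfigMaps at rest in etcd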
NBDE (Network-Bound Disk Encryption) can be used to automate remote enablement of LUKS (Linux Unified Key Setup-on-disk-format) encrypted volumes, making it easier to protect against physical theft of host storage. 
Together, these capabilities enhance OpenShift’s defense-in-depth approach to security. 
Better access controls to comply with company security practices 
OpenShift is designed to deliver a cloud-like experience across all environments running on the hybrid cloud. 
OpenShift 4.3 adds new capabilities and platforms to the installer, helping customers to embrace their company’s best security practices and gain greater access control across hybrid cloud environments. Customers can deploy OpenShift clusters to customer-managed, pre-existing VPN / VPC (Virtual Private Network / Virtual Private Cloud) and subnets on AWS, Microsoft Azure and Google Cloud Platform. They can also install OpenShift clusters with private facing load balancer endpoints, not publicly accessible from the Internet, on AWS, Azure and GCP.
With “bring your own” VPN / VPC, as well as with support for disconnected installs, users can have more granular control of their OpenShift installations and take advantage of common best practices for security used within their organizations. 
In addition, OpenShift admins have access to a new configuration API that allows them to select the cipher suites that are used by the Ingress controller, API server and OAuth Operator for Transport Layer Security (TLS). This new API helps teams adhere to their company security and networking standards easily.
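As a hedged sketch of what this looks like for the Ingress Controller (the cipher names below are illustrative; check the 4.3 documentation for the supported profiles and values), the API takes the form of a tlsSecurityProfile stanza on the operator’s configuration resource:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tlsSecurityProfile:
    type: Custom
    custom:
      minTLSVersion: VersionTLS12
      ciphers:
      - ECDHE-ECDSA-AES256-GCM-SHA384
      - ECDHE-RSA-AES256-GCM-SHA384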
OpenShift Container Storage 4 across the cloud
Available alongside OpenShift 4.3 today is Red Hat OpenShift Container Storage 4, which is designed to deliver a comprehensive, multicloud storage experience to users of OpenShift Container Platform. Enhanced with multicloud gateway technology from Red Hat’s acquisition of NooBaa, OpenShift Container Storage 4 offers greater abstraction and flexibility. Customers can choose data services across multiple public clouds, while operating from a unified Kubernetes-based control plane for applications and storage.
To help drive security across disparate cloud environments, this release brings enhanced built-in data protection features, such as encryption, anonymization, key separation and erasure coding. Using the multicloud gateway, developers can more confidently share and access sensitive application data in a more secure, compliant manner across multiple geo-locations and platforms.
OpenShift Container Storage 4 is deployed and managed by Operators, bringing automated lifecycle management to the storage layer, and helping with easier day 2 management.
Automation to enhance day two operations with OpenShift
OpenShift helps customers maintain control for day two operations and beyond when it comes to managing Kubernetes via enhanced monitoring, visibility and alerting. OpenShift 4.3 extends this commitment to control by making it easier to manage the machines underpinning OpenShift deployments with automated health checking and remediation. This area of automated operations capabilities is especially helpful to monitor for drift in state between machines and nodes.
OpenShift 4 also enhances automation through Kubernetes Operators. Customers already have access to Certified and community Operators created by Red Hat and ISVs, but customers have also expressed interest in creating Operators for their specific internal needs. With this release, this need is addressed with the ability to register a private Operator catalog within OperatorHub. Customers with air-gapped installs can find this especially useful in order to take advantage of Operators for highly-secure or sensitive environments.
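For illustration (the registry and image names here are placeholders rather than values from the release), a private catalog is typically registered by creating a CatalogSource that points at a catalog image reachable from the cluster:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: internal-operators            # hypothetical catalog name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/olm/internal-catalog:v1
  displayName: Internal Operators
  publisher: Example Corp

Once the CatalogSource is available, its Operators show up in OperatorHub alongside the Certified and community catalogs.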
With this release the Container Security Operator for Red Hat Quay is generally available on OperatorHub.io and embedded into OperatorHub in Red Hat OpenShift. This brings Quay and Clair vulnerability scanning metadata to Kubernetes and OpenShift. Kubernetes cluster administrators can monitor known container image vulnerabilities in pods running on their Kubernetes cluster. If the container registry supports image scanning, such as Quay with Clair, then the Operator will expose any vulnerabilities found via the Kubernetes API.
OpenShift 4.3 is based on Kubernetes 1.16. Red Hat supports customer upgrades from OpenShift 4.2 to 4.3. Other notable features in OpenShift 4.3 include application monitoring with Prometheus (TP), forwarding logs off cluster based on log type (TP), Multus enhancements (IPAM), SR-IOV (GA), Node Topology Manager (TP), re-size of Persistent Volumes with CSI (TP), iSCSI raw block (GA) and new extensions and customizations for the OpenShift Console.
Test Drive Red Hat OpenShift 4
Red Hat OpenShift is trusted by enterprises around the globe. This release comes on the heels of Red Hat’s recent win of the Ford IT Innovation award, which recognized Red Hat’s leadership in enterprise Kubernetes innovation.
OpenShift 4.3 will be available in the coming days. We encourage current customers to check out these new capabilities through the Red Hat customer portal. New to Kubernetes and OpenShift? Try out OpenShift 4 in-browser, through either our hands-on lab (for operations) or learn.openshift.com (great for developers).
Learn more:

Get started with OpenShift 4
Transition from OpenShift 3 to 4
About OpenShift Container Storage 4 
About Multi-Cloud Object Gateway
View customer stories about Red Hat OpenShift

The post Introducing Red Hat OpenShift 4.3 to Enhance Kubernetes Security appeared first on Red Hat OpenShift Blog.
Quelle: OpenShift

Exploring container security: Announcing the CIS Google Kubernetes Engine Benchmark

If you’re serious about the security of your Kubernetes operating environment, you need to build on a strong foundation. The Center for Internet Security’s (CIS) Kubernetes Benchmark gives you just that: a set of Kubernetes security best practices that will help you build an operating environment that meets the approval of both regulators and customers. The CIS Kubernetes Benchmark v1.5.0 was recently released, covering environments up to Kubernetes v1.15. Written as a series of recommendations rather than as a must-do checklist, the Benchmark follows the upstream version of Kubernetes. But for users running managed distributions such as our own Google Kubernetes Engine (GKE), not all of its recommendations are applicable. To help, we’ve released, in conjunction with CIS, a new CIS Google Kubernetes Engine (GKE) Benchmark, available under the CIS Kubernetes Benchmark, which takes the guesswork out of figuring out which CIS Benchmark recommendations you need to implement, and which ones Google Cloud handles as part of the GKE shared responsibility model.

Read on to find out what’s new in the v1.5.0 CIS Kubernetes Benchmark, how to use the CIS GKE Benchmark, and how you can test whether you’re following recommended best practices.

Exploring the CIS Kubernetes Benchmark v1.5.0
The CIS Kubernetes Benchmark v1.5.0 was published in mid-October and has a significantly different structure than the previous version. Whereas the previous version split up master and worker node configurations at a high level, the new version separates controls by the components to which they apply: control plane components, etcd, control plane configuration, worker nodes, and policies. This should make it easier to apply the guidance to a particular distribution, since some components may be neither under your control nor your responsibility.

In terms of specific controls, you’ll see additional recommendations for:

Secret management. New recommendations include Minimize access to secrets (5.1.2), Prefer using secrets as files over secrets as environment variables (5.4.1), and Consider external secret storage (5.4.2).
Audit logging. In addition to an existing recommendation on how to ensure audit logging is configured properly with the control plane’s audit log flags, there are new recommendations to Ensure that a minimal audit policy is created (3.2.1) and Ensure that the audit policy covers key security concerns (3.2.2).
Preventing unnecessary access, by locking down permissions in Kubernetes following the principle of least privilege. Specifically, you should Minimize wildcard use in Roles and ClusterRoles (5.1.3).

Introducing the new CIS GKE Benchmark
What does this mean if you’re using a managed distribution like GKE? As we mentioned earlier, the CIS Kubernetes Benchmark is written for the open-source Kubernetes distribution. And while it’s intended to be as universally applicable as possible, it doesn’t fully apply to hosted distributions like GKE.

The new CIS GKE Benchmark is a child of the CIS Kubernetes Benchmark, specifically designed for the GKE distribution. This is the first distribution-specific CIS Benchmark to draw from the existing benchmark, removing items that can’t be configured or managed by the user. The CIS GKE Benchmark also includes additional controls that are Google Cloud-specific and that we recommend you apply to your clusters, for example, as defined in the GKE hardening guide.
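As an illustrative check of one such recommended setting (the cluster, node pool, and zone names below are placeholders), you could verify that node auto-upgrade is enabled from the command line:

$ gcloud container node-pools describe default-pool \
    --cluster my-cluster --zone us-central1-a \
    --format="value(management.autoUpgrade)"

A value of True indicates that auto-upgrade is turned on for that node pool.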
Altogether, it means that you have a single set of controls for security best practices on GKE.

There are two kinds of recommendations in the CIS GKE Benchmark. Level 1 recommendations are meant to be widely applicable—you should really be following these, for example enabling Stackdriver Kubernetes Logging and Monitoring. Level 2 recommendations, meanwhile, result in a more stringent security environment, but are not necessarily applicable to all cases. These recommendations should be implemented with more care to avoid potential conflicts in more complicated environments. For example, Level 2 recommendations may be more relevant to multi-tenant workloads than single-tenant ones, like using GKE Sandbox to run untrusted workloads.

CIS GKE Benchmark recommendations are listed as “Scored” when they can be easily tested using an automated method (like an API call or the gcloud CLI) and the setting has a value that can be definitively evaluated, for example, ensuring node auto-upgrade is enabled. Recommendations are listed as “Not Scored” when a setting cannot be easily assessed using automation, when the exact implementation is specific to your workload—for example, using firewall rules to restrict ingress and egress traffic to your nodes—or when they use a beta feature that you might not want to use in production.

If you want to suggest a new recommendation or a change to an existing one, please contribute directly to the CIS Benchmark in the CIS Workbench community.

Applying and testing the CIS Benchmarks
There are actually several CIS Benchmarks that are relevant to GKE, and there are tools available to help you test whether you’re following their recommendations. For the CIS Kubernetes Benchmark, you can use a tool like kube-bench to test your existing configuration; for the CIS GKE Benchmark, there’s Security Health Analytics, a security product that integrates into Security Command Center and has built-in checks for several CIS GCP and GKE Benchmark items. By enabling Security Health Analytics, you’ll be able to discover, review, and remediate any cluster configurations that aren’t up to par with best practices from the CIS Benchmarks, right in the Security Command Center vulnerabilities dashboard.

Figure: Security Health Analytics scan results for CIS Benchmarks

Documenting GKE control plane configurations
The new CIS GKE Benchmark should make it easier for you to implement and adhere to Kubernetes security best practices. For components that it doesn’t cover, we’ve documented where the GKE control plane implements the new Kubernetes CIS Benchmark, where we are working to improve our posture, and the existing mitigating controls we have in place. We hope this helps you make an informed decision on what controls to put in place yourself, and better understand your existing threat model.

Check out the new CIS GKE Benchmark, the updated CIS Kubernetes Benchmark, and understand how GKE performs according to the CIS Kubernetes Benchmark. If you’re already using the GKE hardening guide, we’ve added references to the corresponding CIS Benchmark recommendations so you can easily demonstrate that your hardened clusters meet your requirements.

The CIS GKE Benchmark was developed in concert with Control Plane and the Center for Internet Security (CIS) Kubernetes community.
Quelle: Google Cloud Platform

How Google Cloud helped Phoenix Labs meet demand spikes with ease for its hit multiplayer game Dauntless

In the role-playing video game Dauntless, players work in groups to battle monsters and protect the city-island of Ramsgate. Commitment reaps big rewards: with every beast slayed, you earn new weapons and armor made of the same materials as the Behemoth you took down, strengthening your arsenal for the next battle. And when creating Dauntless, game studio Phoenix Labs channeled these same values of resourcefulness, teamwork, and persistence. But instead of using war pikes and swords, it wielded the power of the cloud to achieve its goals.

Preparing for unknown battles with containers and the cloud
For the gaming industry, launches bring unique technological challenges. It’s impossible to predict if a game will go viral, and developers like Phoenix Labs need to plan for a number of scenarios without knowing exactly how many players will show up and how much server capacity will ultimately be needed. In addition, since Dauntless was the first game in the industry to launch cross-platform—available on PlayStation 4, Xbox One, and PCs—it would be critical for all the underlying cloud-based services to work together flawlessly and provide an uninterrupted, real-time and consistent experience for players around the globe.

As part of staying agile to meet player needs, Phoenix Labs runs all its game servers in containers on Google Cloud Platform (GCP). The studio has a custom Google Kubernetes Engine (GKE) cluster in each region where Dauntless is available, across five continents (North America, Australia, Europe and Asia). When a player loads the game, Dauntless matches him or her with up to three other players, forming a virtual team that is taken to a neighboring island to hunt a Behemoth monster together. Each “group hunt” runs on an ephemeral pod on GKE, lasting for about 15 minutes before the players complete their assignment and return to Ramsgate to polish their weapons and prepare for the next battle.

“Containerizing servers isn’t very common in the gaming industry, especially for larger games,” said Simon Beaumont, VP Technology at Phoenix Labs. “Google Cloud spearheaded this effort with their leadership and unique technology expertise, and their platform gave us the flexibility to use Kubernetes-as-a-service in production.”

Addressing player and customer needs at launch and beyond
When Dauntless launched out of beta earlier this year, the required amount of server capacity turned out to be a lot. Within the first week, player count quickly climbed to 4 million—rapid growth that was no small feat to accommodate.

Continuously addressing Reddit and Twitter feedback from players, Phoenix Labs’ lean team worked side by side with Google Cloud Professional Services to execute over 1,700 deployments to its production platform during the week of the launch alone. “Google Cloud’s laser focus on customers reaches a level I’ve never seen before,” said Jesse Houston, CEO and co-founder at Phoenix Labs. “They care just as much about our experience as a GCP customer as they do about our players. Without their ‘let’s go’ attitude, Dauntless would have been a giant game over.”

“Behemoth” growth, one platform at a time
Now that Dauntless has surpassed 16 million unique players and launched on Nintendo Switch, Phoenix Labs is preparing to expand to new regions such as Russia and Poland (they recently launched in Japan) and take advantage of other capabilities across Google.
For example, by leveraging Google Ads and YouTube as part of its digital strategy for Dauntless, Phoenix Labs onboarded 5 million new gamers in the first week of launch; using YouTube Masthead ads also increased exposure to its audience. Phoenix Labs has migrated to Google Cloud’s data warehouse BigQuery for its ease of use and speed, returning queries in seconds based on trillions of rows of data. They’re even beginning to use the Google Sheets data connector for BigQuery to simplify reporting and ensure every decision is data-informed.

At Google Cloud, we’re undaunted by behemoth monsters—and the task of making our platform a great place to launch and run your multiplayer game. Learn more about how game developers of all sizes work with Google Cloud to take their games to the next level here.
Quelle: Google Cloud Platform