Elastic Load Balancing: Network Load Balancers now support multiple TLS certificates via Server Name Indication (SNI)

We are very pleased to announce support for multiple TLS certificates on Network Load Balancers using Server Name Indication (SNI). You can now host multiple secure applications behind a single load balancer listener, each with its own TLS certificate. This makes it possible to run SaaS applications and hosting services behind the same load balancer, improving your service's security posture and simplifying management and operations.
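A hedged sketch of how this capability is used in practice: with the AWS CLI, additional certificates can be attached to an existing TLS listener via `aws elbv2 add-listener-certificates`. The ARNs below are placeholders, and the command is echoed as a dry run so it can be reviewed first; remove the `echo` to execute it against a real listener.

```shell
# Placeholder ARNs -- substitute your own TLS listener and ACM certificate.
LISTENER_ARN="arn:aws:elasticloadbalancing:eu-central-1:123456789012:listener/net/my-nlb/0123456789abcdef/0123456789abcdef"
CERT_ARN="arn:aws:acm:eu-central-1:123456789012:certificate/11111111-2222-3333-4444-555555555555"

# Attach an additional certificate to the listener; the NLB then selects
# the certificate matching the client's SNI hostname during the handshake.
# Echoed as a dry run here; remove `echo` to execute.
echo aws elbv2 add-listener-certificates \
  --listener-arn "$LISTENER_ARN" \
  --certificates CertificateArn="$CERT_ARN"
```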
Source: aws.amazon.com

4 steps to modernize and cloud-enable applications

Customers today are no longer satisfied with the traditional consumer-business relationship. Instead, they expect engaging and informative digital experiences. To match these expectations and stay ahead of the curve, organizations must lean into digital transformation. Businesses need to modernize both customer-facing and enterprise applications to support a customer-centric approach to business. Developing a strategy to cloud-enable applications is crucial to gaining and maintaining a competitive advantage, especially when, according to a Forrester study, by 2023, 90 percent of current applications will still be in use, but most won’t have received sufficient modernization investment. This means there’s an opportunity for businesses that invest now in application modernization.
Application modernization: Taking a phased approach
Cloud-enabling applications doesn’t have to be an all-or-nothing proposition. Application modernization is best achieved by taking a phased approach, one that’s tailored to business goals and application architecture.
Companies can simplify and extend functionality while still meeting business and IT requirements by carefully choosing which applications to prioritize when modernizing for a hybrid cloud environment. This allows organizations to capitalize on the benefits of the cloud without disrupting existing workloads in on-premises environments.
Prime applications for digital transformation with the following four steps to gain a competitive edge.
1. Simplify with containers
Putting existing applications into containers is the first step to simplifying application deployment and management. Containers encapsulate the application with minimal or no changes to the application itself, enabling consistent testing and deployment that reduces costs and simplifies operations.
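As a minimal sketch of this step: containerizing an existing application can be as small as a few-line Dockerfile. The base image, port, and entry point below are assumptions for illustration, not details from the article.

```dockerfile
# Hypothetical example: containerize an existing Python web application
# without changing the application code itself.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Port and entry point are assumptions for this sketch.
EXPOSE 8080
CMD ["python", "app.py"]
```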
2. Extend with APIs
Extend existing applications with APIs that securely expose their full capabilities to developers. The applications become reusable across clouds to easily access and build new capabilities. Beyond APIs, this approach relies on an agile integration strategy that supports the volume of connections and variety of architectures required.
3. Decompose with microservices
Use microservices to break down monolithic applications into deployable components, where each component performs a single function. Businesses can then further enhance development agility and efficiency by putting each microservice in its own container. Using Kubernetes, companies can then manage and deliver the microservices of existing applications.
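A minimal sketch of the result described above: one microservice in its own container, managed by Kubernetes as an independently deployable unit. The service name and image are hypothetical.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service   # one microservice, one deployable unit (hypothetical name)
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
      - name: orders-service
        image: registry.example.com/orders-service:1.0  # hypothetical image
        ports:
        - containerPort: 8080
```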
4. Refactor with new microservices
Refactoring involves building new microservices. In some instances, it may be easier to develop new applications utilizing cloud-native development practices instead of working with a current monolith. This provides teams with the ability to deliver innovation to users, encourage creative thinking and allow developers to experiment in a low-risk fashion.
Find your next step to modernize applications
Application modernization is a critical aspect of business modernization. Leading organizations that prioritize cloud-enabling their applications are breaking away from the competition by enhancing customer experiences and accelerating development and delivery.
Read the smart paper “Simplify and extend apps with an open, hybrid cloud” to learn more about application modernization and the unique approach, tools and solutions offered by IBM for application modernization.
Source: Thoughts on Cloud

A CIO’s guide to cloud success: decouple to shift your business into high gear

They say 80% of success is showing up—but unfortunately for enterprises moving to the cloud, this doesn’t always hold up. A recent McKinsey survey, for example, found that despite migrating to the cloud, many enterprises are nonetheless “falling short of their IT agility expectations.” Because CTOs and CIOs are struggling to increase IT agility, many organizations are unable to achieve their larger business goals. McKinsey notes that 95% of CIOs indicated that the majority of the C-Suite’s overall goals depend on them.

The disconnect between moving to the cloud and successful digital transformation can be traced back to the way most organizations adopt cloud: renting pooled resources from cloud vendors or investing in SaaS subscriptions. By adopting cloud in this cookie-cutter way, an enterprise basically keeps doing what it’s always done—perhaps just a little faster and a little more efficiently. But we’re entering a new age. Cloud services are increasingly about intelligence, automation, and velocity—not just the economies of scale offered by big providers renting out their infrastructure. As McKinsey notes, enterprises sometimes stumble because they use the cloud for scale, but do not take advantage of the agility and velocity benefits it provides.

At its core, achieving velocity and agility isn’t about where an application is hosted so much as how fast, freely, and efficiently enterprises can launch and adjust strategies, whether creating ways to interact with customers on new technology platforms, quickly adding requested features to apps, or monetizing data. This in turn relies on decoupling the dependencies between different systems and minimizing the amount of manual coordination that enterprise IT typically has to perform. The result is more loosely-coupled distributed systems that are far better equipped for today’s dynamic technology landscape.
This concept of decoupling, and how it can accelerate business results, drives much of what we do at Google—and it has strongly informed how we built Anthos, our open source-based multi-cloud platform that lets enterprises run apps anywhere while also achieving the elusive IT agility and velocity that enterprises crave.

Decoupling = agility: shift your development into high gear
Migrating to the cloud does not, by default, transform an enterprise, because digital transformation isn’t about the cloud itself. Rather, it’s about changing the way software is built and the consequent explosion in new business strategies that software can support—from selling products via voice assistants, to exposing proprietary data and functionality to partners at scale, to automating IT administration and security operations that used to require manual oversight. Specifically, modern software development eschews ‘monolithic’ application architectures whose design makes it difficult to update or reuse functionality without impacting the entire application. Instead, developers increasingly build applications by assembling small, reusable, independently deployable microservices. This shift not only makes software easier to reuse, combine, and modify (which can help an enterprise be more responsive to changing business needs), but also lets developers work in small parallel teams rather than large groups (which helps them create and deploy applications much faster). What’s more, microservices exposed as APIs can help developers leverage resources from a range of providers spread across many different clouds, giving them the tools to create richer applications and connected experiences. These decouplings of services from an application and of developers from one another are often done via containers.
By abstracting applications and libraries from the underlying operating system and hardware, containers make it easier for one team of developers to focus on its work without worrying about what any of the teams with which they’re collaborating are doing. Containers also represent another important form of decoupling that can dramatically change the relationship among an IT department, servers, and maintenance. Thanks to containers, for example, many applications can reside on the same server without impacting one another, which reduces the need for application-specific hardware deployments. Containers can also be ported from one machine to another, opening opportunities for developers to create applications on-premises and scale them via the cloud, or to move applications from one cloud to another based on changing needs. This abstraction from the hardware they run on is one reason containers are often referred to as “cloud-native.” This overview only scratches the surface, but the point is, by decoupling functionality and creating new architectures built around loosely-coupled distributed systems, enterprises can empower their developers to work faster in smaller, parallel teams and unlock the IT agility through which modern, software-driven business strategies are executed.

But doesn’t decoupling increase complexity?
Containers and distributed systems offer many advantages, but adoption isn’t as simple as flipping a switch. Decomposing fat applications into hundreds of smaller services can increase an enterprise’s agility, but orchestrating all those services can be tremendously complicated, as can authenticating their users and protecting against threats. When millions of microservices are communicating with one another, it becomes practically impossible to put a human being in the middle of those processes, requiring automated solutions.
Many enterprises consequently struggle with not only governance across these distributed environments, but also identifying the right solutions to put in place. Moreover, not everything within a large enterprise will evolve at the same pace. Running containers in the cloud can help an enterprise focus on building great applications while handing off infrastructure management to a vendor. In fact, teams in almost every large enterprise are already operating this way—but other teams accustomed to legacy approaches may require a more incremental transition. Additionally, enterprises may have a variety of reasons, whether strategic or regulatory, for keeping data on-prem—but they may still want ways to apply cloud-based analytics and machine learning services to that data and otherwise merge the cloud with their on-prem assets. Assembling the orchestration, management, and monitoring solutions for such deployments has historically been difficult.

Another significant challenge is that though containers are intrinsically portable, the various public clouds provide different platforms, which can make moving containers—let alone giving developers and administrators consistent experiences—quite difficult. Many open-source options are not the panacea they once seemed, because the open-source version of a solution and the managed deployment sold by a cloud provider may be meaningfully different. These challenges can be particularly vexing because enterprises want the flexibility to change cloud vendors, utilize multiple clouds, and otherwise avoid lock-in. Helping enterprises to enjoy the benefits of distributed systems while avoiding these challenges shaped our development of Anthos.

Anthos: Agility minus the complexity
Google runs multiple web services with billions of users and is an enormously complex organization whose IT systems connect tens of thousands of employees, contractors, and partners.
No surprise then that we’ve spent a lot of time solving the puzzle of distributed systems and their dynamic, loosely-coupled components. For example, we open-sourced Kubernetes, the de facto standard for container orchestration, and Istio, a leading service mesh for managing microservices—both are major components in Anthos, and both are based on internal best practices. Istio provides systematic centralized management for microservices and enables what is arguably the most important form of decoupling: policies from services. Developers supported by Istio are free to write code without encoding policies into their microservices, allowing administrators to change policies in a controlled rollout without redeploying individual services. This automates away the expensive, time-consuming coordination and bureaucracy traditionally required for IT governance and helps accelerate developer velocity.

Recognizing that enterprises demand choice and openness, Anthos launched with hybrid support and will soon include multi-cloud functionality as well, with all options offering simplified management via single-pane-of-glass views, policy-driven controls, and a consistent experience across all environments—whether on Google Cloud Platform, in a corporate data center with Anthos deployed on VMware, or, after our coming update, in a third-party cloud such as Azure or AWS. Because Anthos is software-based, on-prem deployments don’t require stack refreshes, letting enterprises utilize existing hardware investments and ensuring developers and administrators have a consistent experience, regardless of where workloads are located or whose hardware they run on. We’re already seeing fantastic momentum with customers using Anthos. For example, KeyBank, a superregional bank that’s been in business for almost 200 years, is adopting Anthos after using containers and Kubernetes for several years for customer-facing applications.
“The speed of innovation and competitive advantage of a container-based approach is unlike any technology we’ve used before,” said KeyBank’s CTO Keith Silvestri and Director of DevOps Practices Chris McFee in a recent blog post, adding that the technologies also helped the bank spin up infrastructure on demand when traffic spiked, such as during Black Friday or Cyber Monday. KeyBank chose Anthos to bring this agility and “burstability” to the rest of its IT operations, including internal-facing applications, while staying as close as possible to the open-source version of Kubernetes. “We deploy Anthos locally on our familiar and high-performance Cisco HyperFlex hyperconverged infrastructure,” Silvestri and McFee noted. “We manage the containerized workloads as if they’re all running in GCP, from the single source of truth, our GCP console.”

Anthos includes much more—such as Migrate for Anthos to auto-migrate virtual machines into containers in Google Kubernetes Engine (GKE) and an ecosystem of more than 40 hardware and software partners. But as the preceding attests, at the highest level, the platform helps enterprises to balance developer agility, operational efficiency, and platform governance by facilitating the decoupling central to successful digital transformation:

Infrastructure is decoupled from the applications

Teams are decoupled from one another

Development is decoupled from operations

Security is decoupled from development and operations

Successful decoupling minimizes the need for manual coordination, cuts costs, reduces complexity, and significantly increases developer velocity, operational efficiency, and business productivity.
Decoupling delivers a framework, implementation, and operating model to ensure consistency across an open, hybrid, and multi-cloud future—a future Anthos has been built to serve.

Check out McKinsey’s report “Unlocking Business Acceleration in a Hybrid Cloud World” for more about how hybrid technologies can accelerate digital transformation, and tune in to our “Cloud OnAir with Anthos” session to learn even more about how Anthos is helping enterprises digitally transform—including special appearances by KeyBank and OpenText!
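The policies-from-services decoupling described for Istio above can be made concrete with a small sketch. In Istio, access policy lives in its own resource, so operators can change it with a config rollout instead of redeploying the services it governs. The names and namespace below are hypothetical.

```yaml
# Hypothetical Istio AuthorizationPolicy: only the "frontend" service
# account may call the "payments" workload. Changing this policy is a
# configuration change; no service is rebuilt or redeployed.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-frontend
  namespace: prod
spec:
  selector:
    matchLabels:
      app: payments
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/prod/sa/frontend"]
```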
Source: Google Cloud Platform

Anthos simplifies application modernization with managed service mesh and serverless for your hybrid cloud

For decades, organizations built and ran applications in their own on-premises data centers. Then, they started deploying and running applications in the cloud. But, for most enterprises, the thought of moving all-in to the cloud was too daunting. They worried they would need different developers and tools for each environment, and that they wouldn’t have a consistent management interface to ensure the environments were compliant with their security policies. To address these challenges, we introduced Anthos, a services platform that brings applications into the 21st century, with the flexibility to run in any environment—whether it’s cloud-native or based on virtual machines. Today, we’re announcing new Anthos capabilities to further simplify your application modernization journey:

Anthos Service Mesh, which connects, manages, and secures microservices

Cloud Run for Anthos, which enables you to easily run stateless workloads on a fully managed Anthos environment

In addition, Anthos Config Management now includes capabilities to help your teams automate and enforce org-specific policies. Binary Authorization, meanwhile, helps to ensure that only validated, verified images are integrated into your managed build-and-release process.

Tame microservices with Anthos Service Mesh
Increasingly, many organizations consider microservices architectures to be an essential way to modernize their applications. But moving from monolithic applications to large numbers of microservices increases operational complexity. To address this, you can use a service mesh—an abstraction layer that provides a uniform way to connect, secure, monitor, and manage microservices. A service mesh uses high-performance and lightweight proxies to bring security, resiliency, and visibility to service communications, freeing your developers to do what they do best: build great applications.
A service mesh helps you manage the lifecycle and policies for this intelligent data plane and gives you secure and easy-to-manage microservices-based applications. As a managed offering, Anthos Service Mesh, now in beta, makes it easy to add this abstraction layer to your environment. Built on Istio’s open APIs, it lets you easily manage and secure inter-service traffic from a unified administrative interface, with uniform traffic controls that span both cloud and on-premises environments. In addition, Anthos Service Mesh gives you deep visibility into your application traffic, thereby improving your development experience and making it easier to troubleshoot these complex environments. Deep visibility helps keep your applications running smoothly.

Serverless flexibility and velocity across on-prem and cloud
Serverless computing provides you with a number of benefits: the ability to run workloads without having to worry about the underlying infrastructure, to execute code only when needed, and to autoscale from zero to n depending on traffic, all wrapped in a simple developer experience. Today, we are excited to bring this experience to Anthos through Cloud Run for Anthos, now in beta. Based on Knative, an open API and runtime environment, Cloud Run for Anthos enables you to be more agile by letting you write code like you always do—without having to learn advanced Kubernetes concepts. It enforces best practices and provides deep integration with Anthos by offering advanced networking support and enabling cloud accelerators, which means your workloads can all run in the same cluster. Cloud Run for Anthos delivers portability with consistency, so you can flexibly run your workloads on Google Cloud or on-premises – all with the same consistent experience. It helps you adopt cloud on your own terms by letting you adopt serverless wherever you are – even on-premises.
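Since Cloud Run for Anthos is based on Knative, the developer-facing artifact is a short Knative Service manifest; the platform handles routing and autoscaling, including scale-to-zero. A minimal sketch, with a hypothetical service name and image:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                 # hypothetical service name
spec:
  template:
    spec:
      containers:
      - image: gcr.io/my-project/hello:latest  # hypothetical image
        env:
        - name: TARGET
          value: "world"
```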
Modernize application security to increase organizational agility
In addition to simplifying the development and operations of modern applications, Anthos includes guardrails that provide security by default. Enterprises can automate their security operations by enforcing consistent policy across environments, isolating workloads with different risk profiles, and deploying only trusted workloads.

With Anthos Service Mesh, you have uniform policies for enforcing service-aware network security, including encryption in transit, mutual authentication, and powerful access controls. This allows your IT teams to implement zero-trust security that moves across environments with your application without making application code changes, allowing you to focus on delivering critical business functions faster.

Binary Authorization helps you build defined security checks into the development process earlier, making sure you deploy only trusted workloads in your environments. By ensuring workloads are assessed and validated before they are deployed, enterprises can have the confidence that these workloads can be trusted.

Finally, using the new Policy Controller and Config Connector features of Anthos Config Management, you can enforce consistent security policies and controls continuously across your cloud environments, including Google Cloud, on-prem, and other clouds. Learn more about how Anthos helps organizations modernize their approach to application security in our Anthos Security white paper.

Expanding the Anthos partner ecosystem
Anthos launched with more than 30 hardware, software, and system integration partners ready to help customers adopt Anthos right out of the gate. Today, that number stands at more than 40, and partners report exceptional momentum for the platform. Atos, Cognizant, Deloitte, HCL, Infosys, TCS, and Wipro are some of the global systems integrators who are helping deliver Anthos to their clients, and they are doubling down on their efforts.
“Deloitte has been working with Google since long before the formal announcement of Anthos at Google Cloud Next in April,” said Tim O’Connor, Principal, Deloitte Consulting LLP. “Since then we’ve supercharged our investments and have been extending existing Anthos assets and building teams to bring this powerful and game-changing technology to the marketplace” through a dedicated group of practitioners focused on hybrid enablement through Anthos.

A complete platform for modernizing organizations
With its comprehensive capabilities for container management, service mesh, security, monitoring and logging, as well as developer productivity, Anthos helps your entire organization benefit from application modernization. For developers, Anthos simplifies application deployment with access to services like GCP Marketplace and Cloud Run. Operations teams benefit from improved resource utilization and reuse, and visibility into all available services—all from a single management plane. Meanwhile, Anthos lets security professionals roll out consistent policies across their deployments, encrypt sensitive traffic, and ensure that only trusted binaries are running in the environment. All the while, Anthos puts your organization on the path to the cloud, in the configuration and at the pace that works for you.

For a technical deep dive into service mesh, download our new ebook, The Service Mesh Era: Architecting, Securing and Managing Microservices with Istio. And to understand how Anthos can take your cloud environment to the next level, check out A CIO’s guide to cloud success: decouple to shift your business into high gear.
Source: Google Cloud Platform

Enabling OpenShift 4 Clusters to Stop and Resume Cluster VMs

There are a lot of reasons to stop and resume OpenShift Cluster VMs:

Save money on cloud hosting costs

Use a cluster only during daytime hours – for example for
exploratory or development work. A cluster that serves just one
person does not need to be running while that user is away.

Deploy a few clusters for students ahead of time when teaching a
workshop or class, making sure that the clusters have the
prerequisites installed ahead of time.

Background
When an OpenShift 4 cluster is installed, a bootstrap certificate is
created. The master nodes use it to create certificate signing requests
(CSRs) for the kubelet client certificates (one for each kubelet) that
identify each kubelet to the cluster.
Because certificates cannot be revoked, the bootstrap certificate is
deliberately short-lived: it expires 24 hours after cluster
installation, and certificates are subsequently rotated every 30 days.
All nodes other than the master nodes use a service account token
instead, which is revocable.
If the master kubelets do not have a 30-day client certificate (the
first one only lasts 24 hours), then missing the kubelet client
certificate refresh window renders the cluster unusable, because the
bootstrap credential cannot be used when the cluster is woken back up.
In practice, this requires an OpenShift 4 cluster to be running for at
least 25 hours after installation before it can be shut down.
The following process enables cluster shutdown right after installation
and cluster resume at any time within the next 30 days.
Note that this process only works up until the 30-day certificate
rotation. For most test, development, or classroom clusters this is a
usable approach, because these types of clusters are usually rather
short-lived.
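To see where a node stands in this rotation, the kubelet's current client certificate can be inspected with openssl. The path in the comment below is the usual location on cluster nodes (verify on yours); the runnable part generates a throwaway short-lived certificate and prints its expiry the same way, purely as a self-contained illustration.

```shell
# On a cluster node you would inspect the live certificate, e.g.:
#   openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem
# (usual location; verify the path on your nodes)

# Self-contained illustration: create a throwaway 1-day certificate
# and print its expiry timestamp in the same way.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=system:node:demo" \
  -keyout "$tmpdir/key.pem" -out "$tmpdir/cert.pem" 2>/dev/null
openssl x509 -noout -enddate -in "$tmpdir/cert.pem"
rm -rf "$tmpdir"
```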
Preparing the Cluster to support stopping of VMs
These steps will enable a successful restart of a cluster after its VMs
have been stopped. This process has been tested on OpenShift 4.1.11 and
higher – including developer preview builds of OpenShift 4.2.

From the VM that you ran the OpenShift installer on, create the
following DaemonSet manifest. This DaemonSet pulls down the same
service account token bootstrap credential that is used on all the
non-master nodes in the cluster.
cat << 'EOF' >$HOME/kubelet-bootstrap-cred-manager-ds.yaml.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubelet-bootstrap-cred-manager
  namespace: openshift-machine-config-operator
  labels:
    k8s-app: kubelet-bootstrap-cred-manager
spec:
  selector:
    matchLabels:
      k8s-app: kubelet-bootstrap-cred-manager
  template:
    metadata:
      labels:
        k8s-app: kubelet-bootstrap-cred-manager
    spec:
      containers:
      - name: kubelet-bootstrap-cred-manager
        image: quay.io/openshift/origin-cli:v4.0
        command: ['/bin/bash', '-ec']
        args:
          - |
            #!/bin/bash

            set -euxo pipefail

            while true; do
              unset KUBECONFIG

              echo "----------------------------------------------------------------------"
              echo "Gather info..."
              echo "----------------------------------------------------------------------"
              # context
              intapi=$(oc get infrastructures.config.openshift.io cluster -o "jsonpath={.status.apiServerURL}")
              context="$(oc --config=/etc/kubernetes/kubeconfig config current-context)"
              # cluster
              cluster="$(oc --config=/etc/kubernetes/kubeconfig config view -o "jsonpath={.contexts[?(@.name==\"$context\")].context.cluster}")"
              server="$(oc --config=/etc/kubernetes/kubeconfig config view -o "jsonpath={.clusters[?(@.name==\"$cluster\")].cluster.server}")"
              # token
              ca_crt_data="$(oc get secret -n openshift-machine-config-operator node-bootstrapper-token -o "jsonpath={.data.ca.crt}" | base64 --decode)"
              namespace="$(oc get secret -n openshift-machine-config-operator node-bootstrapper-token -o "jsonpath={.data.namespace}" | base64 --decode)"
              token="$(oc get secret -n openshift-machine-config-operator node-bootstrapper-token -o "jsonpath={.data.token}" | base64 --decode)"

              echo "----------------------------------------------------------------------"
              echo "Generate kubeconfig"
              echo "----------------------------------------------------------------------"

              export KUBECONFIG="$(mktemp)"
              kubectl config set-credentials "kubelet" --token="$token" >/dev/null
              ca_crt="$(mktemp)"; echo "$ca_crt_data" > $ca_crt
              kubectl config set-cluster $cluster --server="$intapi" --certificate-authority="$ca_crt" --embed-certs >/dev/null
              kubectl config set-context kubelet --cluster="$cluster" --user="kubelet" >/dev/null
              kubectl config use-context kubelet >/dev/null

              echo "----------------------------------------------------------------------"
              echo "Print kubeconfig"
              echo "----------------------------------------------------------------------"
              cat "$KUBECONFIG"

              echo "----------------------------------------------------------------------"
              echo "Whoami?"
              echo "----------------------------------------------------------------------"
              oc whoami
              whoami

              echo "----------------------------------------------------------------------"
              echo "Moving to real kubeconfig"
              echo "----------------------------------------------------------------------"
              cp /etc/kubernetes/kubeconfig /etc/kubernetes/kubeconfig.prev
              chown root:root ${KUBECONFIG}
              chmod 0644 ${KUBECONFIG}
              mv "${KUBECONFIG}" /etc/kubernetes/kubeconfig

              echo "----------------------------------------------------------------------"
              echo "Sleep 60 seconds..."
              echo "----------------------------------------------------------------------"
              sleep 60
            done
        securityContext:
          privileged: true
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/kubernetes/
          name: kubelet-dir
      nodeSelector:
        node-role.kubernetes.io/master: ""
      priorityClassName: "system-cluster-critical"
      restartPolicy: Always
      securityContext:
        runAsUser: 0
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 120
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 120
      volumes:
      - hostPath:
          path: /etc/kubernetes/
          type: Directory
        name: kubelet-dir
EOF

Deploy the DaemonSet to your cluster.
oc apply -f $HOME/kubelet-bootstrap-cred-manager-ds.yaml.yaml

Delete the secrets csr-signer-signer and csr-signer from the
openshift-kube-controller-manager-operator namespace
oc delete secrets/csr-signer-signer secrets/csr-signer -n openshift-kube-controller-manager-operator

This will trigger the Cluster Operators to re-create the CSR signer
secrets which are used when the cluster starts back up to sign the
kubelet client certificate CSRs. You can watch as various operators
switch from Progressing=False to Progressing=True and back to
Progressing=False. The operators that will cycle are
kube-apiserver, openshift-controller-manager,
kube-controller-manager and monitoring.
watch oc get clusteroperators

Sample Output.
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
authentication 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
cloud-credential 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
cluster-autoscaler 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
console 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
dns 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
image-registry 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
ingress 4.2.0-0.nightly-2019-08-27-072819 True False False 3h46m
insights 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
kube-apiserver 4.2.0-0.nightly-2019-08-27-072819 True True False 18h
kube-controller-manager 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
kube-scheduler 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
machine-api 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
machine-config 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
marketplace 4.2.0-0.nightly-2019-08-27-072819 True False False 3h46m
monitoring 4.2.0-0.nightly-2019-08-27-072819 True False False 3h45m
network 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
node-tuning 4.2.0-0.nightly-2019-08-27-072819 True False False 3h46m
openshift-apiserver 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
openshift-controller-manager 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
openshift-samples 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
operator-lifecycle-manager 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
operator-lifecycle-manager-catalog 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
operator-lifecycle-manager-packageserver 4.2.0-0.nightly-2019-08-27-072819 True False False 3h46m
service-ca 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
service-catalog-apiserver 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
service-catalog-controller-manager 4.2.0-0.nightly-2019-08-27-072819 True False False 18h
storage 4.2.0-0.nightly-2019-08-27-072819 True False False 18h

Once all Cluster Operators show Available=True,
Progressing=False, and Degraded=False, the cluster is ready
for shutdown.
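This check can be scripted rather than read by eye. The sketch below is an assumption, not part of the original post: it filters output in the shape of oc get clusteroperators --no-headers, where AVAILABLE, PROGRESSING, and DEGRADED are the third, fourth, and fifth columns, and is demonstrated on two sample lines from the listing above.

```shell
# Succeeds (exit 0) only if every ClusterOperator line shows
# AVAILABLE=True, PROGRESSING=False and DEGRADED=False.
# Feed it the output of: oc get clusteroperators --no-headers
co_ready() {
  awk '$3 != "True" || $4 != "False" || $5 != "False" { bad = 1 }
       END { exit bad }'
}

# Two sample lines from the listing above; kube-apiserver is still
# Progressing, so the cluster is not yet safe to shut down.
printf '%s\n' \
  'dns 4.2.0-0.nightly-2019-08-27-072819 True False False 18h' \
  'kube-apiserver 4.2.0-0.nightly-2019-08-27-072819 True True False 18h' \
  | co_ready && echo "ready for shutdown" || echo "not ready yet"
```

Against a live cluster you would pipe the real command output instead of the sample printf: oc get clusteroperators --no-headers | co_ready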

Stopping the cluster VMs
Use the tools native to the cloud environment that your cluster is
running on to shut down the VMs.
The following command will shut down the VMs that make up a cluster on
Amazon Web Services.
Prerequisites:

The Amazon Web Services Command Line Interface, awscli, is
installed.

$HOME/.aws/credentials has the proper AWS credentials available to
execute the command.

REGION points to the region your VMs are deployed in.

CLUSTERNAME is set to the Cluster Name you used during
installation. For example cluster-${GUID}.

export REGION=us-east-2
export CLUSTERNAME=cluster-${GUID}

aws ec2 stop-instances --region ${REGION} --instance-ids $(aws ec2 describe-instances --filters "Name=tag:Name,Values=${CLUSTERNAME}-*" "Name=instance-state-name,Values=running" --query Reservations[*].Instances[*].InstanceId --region ${REGION} --output text)

Starting the cluster VMs
Use the tools native to the cloud environment that your cluster is
running on to start the VMs.
The following commands will start the cluster VMs in Amazon Web
Services.
export REGION=us-east-2
export CLUSTERNAME=cluster-${GUID}

aws ec2 start-instances --region ${REGION} --instance-ids $(aws ec2 describe-instances --filters "Name=tag:Name,Values=${CLUSTERNAME}-*" "Name=instance-state-name,Values=stopped" --query Reservations[*].Instances[*].InstanceId --region ${REGION} --output text)

Recovering the cluster
If the cluster missed the initial 24 hour certificate rotation, some nodes
in the cluster may be in NotReady state. Validate whether any nodes are
NotReady. Note that immediately after waking up the cluster the nodes
may briefly show Ready, but they will switch to NotReady within a few seconds.
oc get nodes

Sample Output.
NAME STATUS ROLES AGE VERSION
ip-10-0-132-82.us-east-2.compute.internal NotReady worker 18h v1.14.0+b985ea310
ip-10-0-134-223.us-east-2.compute.internal NotReady master 19h v1.14.0+b985ea310
ip-10-0-147-233.us-east-2.compute.internal NotReady master 19h v1.14.0+b985ea310
ip-10-0-154-126.us-east-2.compute.internal NotReady worker 18h v1.14.0+b985ea310
ip-10-0-162-210.us-east-2.compute.internal NotReady master 19h v1.14.0+b985ea310
ip-10-0-172-133.us-east-2.compute.internal NotReady worker 18h v1.14.0+b985ea310
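To pick out the NotReady nodes non-interactively, one option (a sketch, not from the original post) is to filter the STATUS column of oc get nodes --no-headers. It is shown here against two sample lines in the same shape as the output above.

```shell
# Print the names of all nodes whose STATUS (second column) is NotReady.
# Feed it the output of: oc get nodes --no-headers
notready_nodes() {
  awk '$2 == "NotReady" { print $1 }'
}

# Sample lines in the shape oc prints: one NotReady worker, one Ready master.
printf '%s\n' \
  'ip-10-0-132-82.us-east-2.compute.internal NotReady worker 18h v1.14.0+b985ea310' \
  'ip-10-0-134-223.us-east-2.compute.internal Ready master 19h v1.14.0+b985ea310' \
  | notready_nodes
# prints: ip-10-0-132-82.us-east-2.compute.internal
```

Against a live cluster: oc get nodes --no-headers | notready_nodes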

If some nodes show NotReady, they will start issuing Certificate
Signing Requests (CSRs). Repeat the following command until you see a
CSR with Pending in the Condition column for each NotReady node in
the cluster.
oc get csr
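Instead of rerunning the command by hand, the number of Pending CSRs can be extracted and compared against the number of NotReady nodes. The helper below is a sketch under that assumption, demonstrated on sample lines in the shape oc get csr prints (NAME AGE REQUESTOR CONDITION); the CSR names are hypothetical.

```shell
# Count the CSRs whose last (CONDITION) column is exactly Pending.
# Feed it the output of: oc get csr --no-headers
pending_csrs() {
  awk '$NF == "Pending"' | wc -l
}

# One of the two sample CSRs below is still Pending, so the count is 1.
printf '%s\n' \
  'csr-2s6xp 3m system:node:ip-10-0-132-82.us-east-2.compute.internal Pending' \
  'csr-9wvgt 17h system:admin Approved,Issued' \
  | pending_csrs
```

Against a live cluster you would repeat oc get csr --no-headers | pending_csrs until the count matches the number of NotReady nodes.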

Once you see the CSRs, they need to be approved. The following command
approves all outstanding CSRs.
oc get csr -oname | xargs oc adm certificate approve

When you recheck the CSRs (using oc get csr) you should see
that they have been Approved and Issued (again in the
Condition column).
Double check that all nodes now show Ready. Note that this may take a
few seconds after approving the CSRs.
oc get nodes

Sample Output.
NAME STATUS ROLES AGE VERSION
ip-10-0-132-82.us-east-2.compute.internal Ready worker 18h v1.14.0+b985ea310
ip-10-0-134-223.us-east-2.compute.internal Ready master 19h v1.14.0+b985ea310
ip-10-0-147-233.us-east-2.compute.internal Ready master 19h v1.14.0+b985ea310
ip-10-0-154-126.us-east-2.compute.internal Ready worker 18h v1.14.0+b985ea310
ip-10-0-162-210.us-east-2.compute.internal Ready master 19h v1.14.0+b985ea310
ip-10-0-172-133.us-east-2.compute.internal Ready worker 18h v1.14.0+b985ea310

Your cluster is now fully ready to be used again.
Ansible Playbook to recover cluster
The following Ansible Playbook should recover a cluster after wake-up.
Note the 5-minute pause, which gives the nodes enough time to settle,
start all pods, and issue CSRs.
Prerequisites:

Ansible installed

Current user either has a .kube/config that grants cluster-admin
permissions or a KUBECONFIG environment variable set that points
to a kube config file with cluster-admin permissions.

OpenShift Command Line interface (oc) in the current user’s PATH.

- name: Run cluster recover actions
  hosts: localhost
  connection: local
  gather_facts: False
  become: no
  tasks:
  - name: Wait 5 minutes for Nodes to settle and pods to start
    pause:
      minutes: 5

  - name: Get CSRs that need to be approved
    command: oc get csr -oname
    register: r_csrs
    changed_when: false

  - name: Approve all Pending CSRs
    when: r_csrs.stdout_lines | length > 0
    command: "oc adm certificate approve {{ item }}"
    loop: "{{ r_csrs.stdout_lines }}"

Summary
Following this process enables you to stop OpenShift 4 cluster VMs right
after installation without having to wait for the 24-hour certificate
rotation to occur.
It also enables you to resume cluster VMs that were stopped while
the 24-hour certificate rotation would have taken place.
The post Enabling OpenShift 4 Clusters to Stop and Resume Cluster VMs appeared first on Red Hat OpenShift Blog.
Source: OpenShift

AWS Elemental MediaConnect now supports the RIST protocol

AWS Elemental MediaConnect now supports the Reliable Internet Stream Transport (RIST) standard. Using the RIST protocol, you can transport live video with low latency and high resilience to packet loss. This additional transport option for MediaConnect gives you more flexibility for contribution streams to be processed in the AWS Cloud and for returning streams to your site via any device that supports RIST.
Source: aws.amazon.com