k0s – Zero Friction Kubernetes, now in General Release

The lightweight, single-binary, open source Kubernetes for IoT and datacenter has earned its first major version designation. Recent features include kube-router as default CNI, arm32 support, backups, and a simple but powerful new lifecycle manager! “k0s was engineered to be modern and simple to run anywhere,” says Miska Kaipiainen, project founder. “What people call ‘lightweight’ …”
Source: Mirantis

How Anthos supports your multicloud needs from day one

Most enterprises that run in the cloud have already spent significant effort automating, operationalizing, and securing their environments, and many have spent years investing in a single cloud provider. Yet today, the ability to run workloads on multiple cloud providers is becoming increasingly important.

Why? There are multiple reasons. Some organizations want application teams to be able to take advantage of the best service for a given application. Others have acquired a company that runs on another cloud. And still others want the ability to spread risk across multiple vendors.

At the same time, there are challenges associated with multicloud. Having multiple cloud providers means you must accept different APIs, operational tools, security standards, and ways of working. Resource sprawl becomes significantly worse when you have multiple cloud platforms: without a single place to see all of your resources, inventorying, monitoring, and keeping these systems up to date can become difficult. Creating secure environments within multiple cloud platforms is another challenge. With each platform having different security capabilities, identity systems, and risks, supporting an additional cloud platform often means doubling your security efforts.

Finally, the biggest challenge with multicloud can be the inconsistency between platforms. In an ideal world, application teams would not need to worry about platform-specific details: they would build their application and deploy it to any cloud platform, or move it between platforms, even though platform-specific details such as storage, load balancing, networking, workload identity, and security make each platform quite different.

Is multicloud worth it? Yes. For many organizations, though, multicloud is only worth it if they can find a smart way to address these challenges. For a growing number of companies, the solution is Anthos. Rather than relying on your application or cloud infrastructure teams to build tools that operate across multiple platforms, Anthos provides these capabilities out of the box, delivering the core capabilities for deploying and operating infrastructure and services across Google Cloud, AWS, and soon Azure.
For infrastructure teams, Anthos provides a single way to provision, view, and manage distributed infrastructure, characterized by:

- A simple, API-driven deployment mechanism
- Reliable one-step cluster upgrades that update the Google-managed Kubernetes distribution, the server OS, and all of the supporting system pods and services
- A single web console for viewing cluster state, nodes, and attached volumes
- A powerful configuration and policy management system that can be used to enforce security policies, RBAC rules, network policies, and any other Kubernetes objects (see the sketch below)
- A fully managed logging system with powerful search, log-based metrics, and custom retention rules
- Software supply-chain security to ensure only trusted code runs in your environment
- A hybrid identity service compatible with Active Directory, AWS IAM, and other OIDC-based identity providers

And for application teams, Anthos offers a consistent deployment target regardless of which cloud you’re targeting, characterized by:

- A familiar Kubernetes API that provides a consistent way of provisioning storage, load balancers, and ingress rules
- A single web console for deploying, updating, and monitoring workloads and services
- An open source, serverless application framework
- A consistent way to securely access the cloud-managed services your app depends on
- A single Kubernetes API endpoint for connecting to multiple clusters across multiple platforms, even without direct network connectivity (especially useful for letting your CI/CD pipeline run anywhere)
- A fully managed logging system with strong multi-tenancy support, so you only see logs for your application
- The option to auto-discover and collect custom application metrics, with dashboarding, alerting, and incident management

Plays nice with others

There are a lot of Kubernetes clusters already deployed in the wild, and building a multicloud environment should not require replacing every one of them. The promise of Kubernetes is a single API and a common set of objects. Because of this, Anthos can bring a subset of its capabilities to existing Kubernetes clusters: existing Amazon EKS, Microsoft AKS, or Red Hat OpenShift clusters can be attached to the Anthos management plane. These attached clusters can take advantage of the same operational management capabilities as Anthos-native clusters, including:

- The web console for deploying, updating, and monitoring workloads and services
- Policy and configuration management
- Logging, monitoring, dashboarding, and alerting
- The single Kubernetes API endpoint, which is always available so you don’t need to worry about network connectivity

Anthos-attached clusters provide an easy onboarding path that lets users connect existing clusters to Google Cloud in minutes and start managing them through a single pane of glass. The connection does not require special networking capabilities and is simple to set up, so users can manage any Kubernetes cluster alongside their Google Kubernetes Engine (GKE) clusters. Of course, you still use the native tools for EKS, AKS, and OpenShift for cluster creation, upgrades, and deletion; but once a cluster is attached to Anthos, you can manage it just like any other Anthos cluster.
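The configuration and policy management mentioned above is driven from a Git repository of declarative config. As a minimal sketch (the repository URL and directory are hypothetical, and the exact fields should be checked against the Anthos Config Management reference), a ConfigManagement resource pointing a cluster at a shared policy repo looks roughly like this:

```yaml
# Minimal sketch: sync a cluster from a shared policy repository.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  git:
    syncRepo: https://github.com/example/policy-repo  # hypothetical repo
    syncBranch: main
    secretType: none
    policyDir: config
```

Applied across the fleet, the same repository then enforces namespaces, RBAC rules, and network policies consistently on every attached or Anthos-native cluster.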
Check out the recent blog post on the Anthos 1.7 release, where we announced new features that our customers tell us will drive business agility and efficiency in the multicloud world. The Anthos Sample Deployment on Google Cloud is a great place to start with an actual application. You can also deploy Anthos to your AWS account or try out attaching your existing clusters to Anthos.

Related article: 3 keys to multicloud success you’ll find in Anthos 1.7. The new Anthos 1.7 lets you do a whole lot more than just run in multiple clouds.
Source: Google Cloud Platform

Introducing Open Saves: Open-source cloud-native storage for games

Many of today’s games are rich, immersive worlds that engage the audience in ways that make a gamer part of a continuing storyline. To create these persistent experiences, numerous storage technologies are required to ensure game data can scale to the standards of gamers’ demands. Not only do game developers need to store different types of data—such as saves, inventory, patches, replays, and more—but they also must keep the storage system high-performing, available, scalable, and cost-effective.

Enter Open Saves, a brand-new, purpose-built single interface for multiple storage back ends that’s powered by Google Cloud and developed in partnership with 2K. Now, development teams can store game data without having to decide which storage solution to use, whether that’s Cloud Storage, Memorystore, or Firestore. “Open Saves demonstrates our commitment to partnering with top developers on gaming solutions that require a combination of deep industry expertise and Google scale,” said Joe Garfola, Vice President of IT and Security at 2K. “We look forward to continued collaboration with Google Cloud.”

Game development teams can save game data to Open Saves without having to worry about the optimal back-end storage solution, while operations teams can focus on the scalability and storage options they need. With Open Saves, game developers can run a cloud-native game storage system that is:

- Simple: Open Saves provides a unified, well-defined gRPC endpoint for all operations on metadata, structured, and unstructured objects.
- Fast: With a built-in caching system, Open Saves optimizes data placement based on access frequency and data size to achieve both low latency for smaller binary objects and high throughput for big objects.
- Scalable: The Open Saves API server can run on either Google Kubernetes Engine or Cloud Run. Both platforms can scale out to handle hundreds of thousands of requests per second. Open Saves also stores data in Firestore and Cloud Storage, and can handle hundreds of gigabytes of data and up to millions of requests per second.

Open Saves is designed with extensibility in mind and can be integrated into any game—whether mobile or console, multiplayer or single player—running on any infrastructure, from on-prem to cloud or a hybrid. The server is written in Go, but because the API is defined in gRPC, you can connect from client or server code in many programming languages. Writing to and reading from Open Saves takes just a few gRPC calls; see the API reference linked below for details.

We are actively developing Open Saves in partnership with 2K Games, and would love for you to come join us on GitHub. There are a few ways to get involved:

- Install and deploy your Open Saves service
- Check out the API reference
- Read the development guide and start contributing
- Join the open-saves-discuss mailing list

Related article: Open Match, a flexible and extensible open source matchmaking solution for games, jointly announced by Google Cloud and Unity.
Source: Google Cloud Platform

6 more reasons why GKE is the best Kubernetes service

It’s that time of the year again: time to get excited about all things cloud-native, as we gear up to connect, share, and learn from fellow developers and technologists around the world at KubeCon EU 2021 next week. Cloud-native technologies are mainstream, and our creation, Kubernetes, is core to building and operating modern software. We’re working hard to create industry standards and services that make this technology easy for everyone to use. Let’s take a look at what’s new in the world of Kubernetes at Google Cloud since the last KubeCon, and how we’re making it easier for everyone to use and benefit from this foundational technology.

1. Run production-ready k8s like a pro

Google Kubernetes Engine (GKE), our managed Kubernetes service, has always been about making it easy for you to run your containerized applications while still giving you the control you need. With GKE Autopilot, a new mode of operation for GKE, you get an automated Kubernetes experience that optimizes your clusters for production, reduces the operational cost of managing clusters, and delivers higher workload availability.

“Reducing the complexity while getting the most out of Kubernetes is key for us and GKE Autopilot does exactly that!” – STRABAG BRVZ

Customers who want advanced configuration flexibility continue to use GKE in the standard mode of operation. As customers scale up their production environments, application requirements for availability, reducing blast radius, or distributing different types of services have grown to necessitate deployment across multiple clusters. With the recently introduced GKE multi-cluster services, the Kubernetes Service object can now span multiple clusters in a zone, across multiple zones, or across multiple regions, with minimal configuration or overhead for managing the interconnection between clusters. GKE multi-cluster services let you focus on the needs of your application while GKE manages your multi-cluster topology; a sketch of exporting a Service across a fleet appears below.

“We have been running all our microservices in a single multi-tenant GKE cluster. For our next-generation Kubernetes infrastructure, we are designing multi-region homogeneous and heterogeneous clusters. Seamless inter-cluster east-west communication is a prerequisite and multi-cluster Services promise to deliver. Developers will not need to think about where the service is running. We are very excited at the prospect.” – Mercari
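To make a Service available across the fleet, you export it from the cluster where it runs. As a minimal sketch (the Service name and namespace are hypothetical), a ServiceExport that matches an existing Service makes it reachable from the other clusters:

```yaml
# Exports the existing Service "checkout-api" in namespace "store"
# to the other clusters registered to the same fleet.
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  namespace: store
  name: checkout-api
```

Consuming clusters can then resolve the service at a fleet-wide DNS name of the form checkout-api.store.svc.clusterset.local.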
2. Create and scale CI/CD pipelines for GKE

Scaling continuous integration and continuous delivery (CI/CD) pipelines can be a time-consuming process involving multiple manual steps: setting up the CI/CD server, ensuring configuration files are updated, and deploying images with the correct credentials to a Kubernetes cluster. Cloud Build, our serverless CI/CD platform, comes with a variety of features to make the process as easy for you as possible. For starters, Cloud Build natively supports buildpacks, which let you build containers without a Dockerfile. As part of your build steps, you can bring your own container images or choose from pre-built images to save time. Additionally, since Cloud Build is serverless, there is no need to pre-provision servers or pay in advance for capacity. And with its built-in vulnerability scanning, you can perform deep security scans within the CI/CD pipeline to ensure only trusted artifacts make it to production. Finally, Cloud Build lets you create continuous delivery pipelines for GKE in a few clicks. These pipelines implement out-of-the-box best practices that we’ve developed at Google for handling Kubernetes deployments, further reducing the overhead of setting up and managing pipelines; a sketch of such a pipeline appears below.

“Before moving to Google Cloud, the idea that we could take a customer’s feature request and put it into production in less than 24 hours was man-on-the-moon stuff. Now, we do it all the time.” —Craig Van Arendonk, Director of IT – Customer and Sales, Gordon Food Service
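As a rough sketch of such a pipeline, the cloudbuild.yaml below builds an image with buildpacks and rolls it out to a cluster. The image, cluster, and location names are hypothetical, and the builder images and flags should be verified against the Cloud Build documentation:

```yaml
steps:
# Build the container with Cloud Native Buildpacks (no Dockerfile required)
# and publish it to Container Registry.
- name: 'gcr.io/k8s-skaffold/pack'
  entrypoint: 'pack'
  args:
  - 'build'
  - 'gcr.io/$PROJECT_ID/my-app'
  - '--builder=gcr.io/buildpacks/builder'
  - '--publish'
# Deploy the freshly built image to a GKE cluster.
- name: 'gcr.io/cloud-builders/gke-deploy'
  args:
  - 'run'
  - '--image=gcr.io/$PROJECT_ID/my-app'
  - '--cluster=my-cluster'
  - '--location=us-central1'
```

Both steps use builder images Google publishes for Cloud Build; the gke-deploy step applies Google’s recommended practices for Kubernetes deployments.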
3. Manage security and compliance on k8s with ease

Upstream Kubernetes, the open source version that you get from a GitHub repository, isn’t a locked-down environment out of the box. Rather than solving for security, it’s designed to be very extensible, solving for flexibility and usability. As such, Kubernetes security relies on extension points to integrate with other systems, such as identity and authorization. And that’s okay! It means Kubernetes can fit lots of use cases. But it also means that you can’t assume the defaults of upstream Kubernetes are correct for you. If you want to deploy Kubernetes with a “secure by default” mindset, there are several core components to keep in mind. As we discuss in the fundamentals of container security whitepaper, here are some of the GKE networking capabilities that help you make Kubernetes more secure:

- Secure Pod networking: With Dataplane V2 (in GA), we enable Kubernetes Network Policy when you create a cluster. In addition, Network Policy logging (in GA) provides visibility into your cluster’s network so that you can see who is talking to whom. (A sketch of a Network Policy appears after the quote below.)
- Secure Service networking: The GKE Gateway controller (in preview) offers centralized control and security without sacrificing flexibility and developer autonomy, all through standard, declarative Kubernetes interfaces.

“Implementing Network Policy in k8s can be a daunting task, fraught with guesswork and trial and error, as you work to understand how your applications behave on the wire. Additionally, many compliance and regulatory frameworks require evidence of a defensive posture in the form of control configuration and logging of violations. With GKE Network Policy Logging you have the ability to quickly isolate and resolve issues, as well as provide the evidence required during audits. This greatly simplifies the implementation and operation of enforcing Network Policies.” – Credit Karma
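For reference, the policies that Dataplane V2 enforces (and that Network Policy logging reports on) are standard Kubernetes NetworkPolicy objects; the namespace and labels below are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: prod
spec:
  # Applies to all backend pods in the namespace...
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  # ...and only admits TCP/8080 traffic from frontend pods;
  # everything else is denied and shows up in the policy logs.
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```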
4. Get an integrated view of alerts, SLOs, metrics, and logs

Deep visibility into both applications and infrastructure is essential for troubleshooting and optimizing your production offerings. With GKE, your app is monitored automatically when you deploy it. GKE clusters come with pre-installed agents that collect telemetry data and automatically route it to observability services such as Cloud Logging and Cloud Monitoring. These services are integrated with one another as well as with GKE, so you get better insights and can act on them faster. For example, the GKE dashboard offers a summary of metrics for clusters, namespaces, nodes, workloads, services, pods, and containers, as well as an integrated view of Kubernetes events and alerts across all of those entities. From alerts or dashboards you can go directly to the logs for a given resource, and navigate to the resource itself, without hopping between unconnected tools from multiple vendors.

Likewise, since telemetry data is automatically routed to Google Cloud’s observability suite, you can immediately take advantage of tools based on Google’s Site Reliability Engineering (SRE) principles. For example, SLO Monitoring helps you drive greater accountability across your development and operations teams by creating error budgets and monitoring your service against those objectives. Ongoing investments in integrating OpenTelemetry will improve both platform and application telemetry collection.

“[With GKE] There’s zero setup required and the integration works across the board to find errors…. Having the direct integration with the cloud-native aspects gets us the information in a timely fashion.” —Erik Rogneby, Director of Platform Engineering, Gannett Media Corp.

5. Manage and run Kubernetes anywhere with Anthos

You can take advantage of GKE capabilities in your own datacenter or on other clouds through Anthos. With Anthos you can bring GKE, along with other key frameworks and services, to any infrastructure, while managing everything centrally from Google Cloud. Creating secure environments within multiple cloud platforms is a challenge: with each platform having different security capabilities, identity systems, and risks, supporting an additional cloud platform often means doubling your security efforts. Anthos solves many of these challenges. Rather than relying on your application or cloud infrastructure teams to build tools that operate across multiple platforms, Anthos provides these capabilities out of the box, delivering the core capabilities for deploying and operating infrastructure and services across Google Cloud, AWS, and soon Azure. We recently released Anthos 1.7, delivering an array of capabilities that make multicloud more accessible and sustainable. Take a look at how our latest Anthos release tracks to a successful multicloud deployment.

“Using Anthos, we’ll be able to speed up our development and deliver new services faster” – PKO Bank Polski

6. ML at scale made simple

GKE brings flexibility, autoscaling, and management simplicity, while GPUs bring superior processing power. With the launch of support for multi-instance NVIDIA GPUs on GKE (in preview), you can now partition a single NVIDIA A100 GPU into as many as seven instances, each with its own high-bandwidth memory, cache, and compute cores. Each instance can be allocated to one container, supporting up to seven containers per NVIDIA A100 GPU. Further, multi-instance GPUs provide hardware isolation between containers, and consistent and predictable QoS for all containers running on the GPU. A sketch of a workload requesting a GPU partition appears below.

“By reducing the number of configuration hoops one has to jump through to attach a GPU to a resource, Google Cloud and NVIDIA have taken a needed leap to lower the barrier to deploying machine learning at scale. Alongside reduced configuration complexity, NVIDIA’s sheer GPU inference performance with the A100 is blazing fast. Partnering with Google Cloud has given us many exceptional options to deploy AI in the way that works best for us.” – Betterview
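As a minimal sketch of how a workload consumes one of these partitions, the Pod below selects nodes whose A100s are split into 1g.5gb instances and requests a single one. The Pod name and image are hypothetical, and the selector label should be checked against the GKE multi-instance GPU documentation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker
spec:
  nodeSelector:
    # Land on nodes whose A100s are partitioned into 1g.5gb MIG instances.
    cloud.google.com/gke-gpu-partition-size: 1g.5gb
  containers:
  - name: worker
    image: gcr.io/my-project/inference:latest
    resources:
      limits:
        # One MIG instance, not a whole A100.
        nvidia.com/gpu: 1
```

Because each MIG instance has dedicated memory and compute, up to seven such Pods can share one physical A100 without interfering with each other.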
See you at KubeCon EU

On Monday, May 3, join us at Build with GKE + Anthos, co-located with KubeCon EU, to kickstart or accelerate your Kubernetes development journey. We’ll cover everything from how to code, build, and run a containerized application, to how to operate, manage, and secure it. You’ll get access to technical demos that go deep into our Kubernetes services, developer tools, operations suite, and security solutions.

We look forward to partnering with you on your Kubernetes journey!
Source: Google Cloud Platform

The evolution of Kubernetes networking with the GKE Gateway controller

Last week the Kubernetes community announced the Gateway API as an evolution of the Kubernetes networking APIs. Led by Google and a variety of contributors, the Gateway API unifies networking under a core set of standard resources. Similar to how Ingress created an ecosystem of implementations, the Gateway API delivers unification, but on an even broader scope and based on lessons from Ingress and service mesh.

Today we’re excited to announce the Preview release of the GKE Gateway controller, Google Cloud’s implementation of the Gateway API. Over a year in the making, the GKE Gateway controller manages internal and external HTTP(S) load balancing for a GKE cluster or a fleet of GKE clusters. The Gateway API provides multi-tenant sharing of load balancer infrastructure with centralized admin policy and control. Thanks to the API’s expressiveness, it supports advanced functionality such as traffic shifting, traffic mirroring, header manipulation, and more. Take the tour below to learn more!

A tour of the GKE Gateway controller

The first new resource you’ll encounter in the GKE Gateway controller is the GatewayClass. GatewayClasses provide a template that describes the capabilities of the class. In every GKE 1.20+ cluster you’ll see two pre-installed GatewayClasses. Go spin up a GKE 1.20+ cluster and check for them right now! These GatewayClasses correspond to the regional internal (gke-l7-rilb) and global external (gke-l7-gxlb) HTTP(S) Load Balancers, which are orchestrated by the Gateway controller to provide container-native load balancing.

Role-oriented design

The Gateway API is designed to be multi-tenant. It introduces two primary resources that create a separation of concerns between the platform owner and the service owner:

- Gateways represent the load balancer that listens for the traffic it routes. You can have multiple Gateways, one per team, or just a single Gateway that is shared among different teams.
- Routes are the protocol-specific routing configurations applied to these Gateways. GKE supports HTTPRoutes today, with TCPRoutes and UDPRoutes on the roadmap. One or more Routes can bind to a Gateway, and together they define the routing configuration of your application.

The following example (which you can deploy in this tutorial) shows how the cluster operator deploys a Gateway resource to be shared by different teams, even across different Namespaces. The owner of the Gateway can define domain ownership, TLS termination, and other policies in a centralized way without involving service owners. Service owners, in turn, can define routing rules and traffic management specific to their app, without having to coordinate with other teams or with the platform administrators. The relationship between Gateway and Route resources creates a formal separation of responsibilities that can be managed by standard Kubernetes RBAC.

Advanced routing and traffic management

The GKE Gateway controller introduces new routing and traffic management capabilities. The following HTTPRoute was deployed by the Store team in their Namespace.
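A sketch of that route follows, written against the current upstream Gateway API schema (the Preview release consumed an earlier alpha revision of the API, so field names may differ); the parent Gateway, service names, and ports are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store
  namespace: store
spec:
  # Attach to the shared Gateway owned by the platform team.
  parentRefs:
  - name: external-gateway
    namespace: infra
  hostnames:
  - foo.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /store
    filters:
    # Copy every request to store-v3 for scalability testing.
    - type: RequestMirror
      requestMirror:
        backendRef:
          name: store-v3
          port: 8080
    backendRefs:
    # Split live traffic 90/10 between v1 and the v2 canary.
    - name: store-v1
      port: 8080
      weight: 90
    - name: store-v2
      port: 8080
      weight: 10
```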
It matches traffic for foo.example.com/store and applies these traffic rules:

- 90% of the client traffic goes to store-v1
- 10% of the client traffic goes to store-v2, to canary the next version of the store
- All of the client traffic is also copied and mirrored to store-v3, for scalability testing of the following version of the store

Native multi-cluster support

The GKE Gateway controller is built with native multi-cluster support for both internal and external load balancing via its multi-cluster GatewayClasses. Multi-cluster Gateways load-balance client traffic across a fleet of GKE clusters. This targets various use cases, including:

- Multi-cluster or multi-region redundancy and failover
- Low-latency serving through GKE clusters in close proximity to clients
- Blue-green traffic shifting between clusters (try this tutorial)
- Expansion to multiple clusters because of organizational or security constraints

Once ingress is enabled in your Hub, the multi-cluster GatewayClasses (with a `-mc` suffix) will appear in your GKE cluster, ready to use across your fleet. The GKE Gateway controller is tightly integrated with multi-cluster Services, so GKE can provide both north-south and east-west multi-cluster load balancing. Multi-cluster Gateways leverage the service discovery of multi-cluster Services so that they have a full view of Pod backends. External multi-cluster Gateways provide internet load balancing, and internal multi-cluster Gateways provide internal, private load balancing. Whether your traffic flows are east-west, north-south, public, or private, GKE provides all the multi-cluster networking capabilities you need right out of the box.

The GKE Gateway controller is Google Cloud’s first implementation of the Gateway API. Thanks to a loosely coupled resource model, TCPRoutes, UDPRoutes, and TLSRoutes will also soon be added to the Gateway API specification, expanding its scope of capabilities. This is the beginning of a new chapter in Kubernetes Service networking, and there is a long road ahead!

Learn more

There are many resources available to learn about the Gateway API and how to use the GKE Gateway controller. Check out one of these Learn K8s tutorials on Gateway API concepts:

- Introduction to the Gateway API
- Gateway API Concepts
Source: Google Cloud Platform

Amazon FSx File Gateway provides faster, more efficient on-premises access to fully managed file storage in the cloud

AWS Storage Gateway adds a new gateway type, Amazon FSx File Gateway, which provides low-latency on-premises access to fully managed file shares in the cloud. Customers who want the benefits of fully managed cloud file storage but need low latency for their users and applications can now easily integrate Amazon FSx for Windows File Server into their existing on-premises environment.
Source: aws.amazon.com

AWS Nitro Enclaves now supports Windows operating systems

AWS Nitro Enclaves now supports creating isolated compute environments, called enclaves, from parent EC2 instances running a Windows operating system. Nitro Enclaves isolates the enclave’s CPU and memory from users, applications, and libraries on the parent EC2 instance.
Source: aws.amazon.com

Amazon MSK adds support for Apache Kafka version 2.8.0

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 2.8.0 for new and existing clusters. Apache Kafka 2.8.0 includes several bug fixes and new features that improve performance. Key features include connection rate limiting to avoid problems caused by misconfigured clients (KIP-612) and topic IDs, which offer performance benefits (KIP-516). There is also an early-access feature that replaces ZooKeeper with a self-managed metadata quorum (KIP-500), although it is not recommended for production use. For a complete list of improvements and bug fixes, see the Apache Kafka release notes for 2.8.0.
Source: aws.amazon.com

Announcing the general availability of native support for JSON and semi-structured data in Amazon Redshift

Native support for JSON and semi-structured data in Amazon Redshift is now generally available. It is based on the new SUPER data type, which lets you ingest and store semi-structured data in your Amazon Redshift data warehouses. Amazon Redshift also supports PartiQL for SQL-compatible access to relational, semi-structured, and nested data. With the SUPER data type and PartiQL in Amazon Redshift, you can run advanced analytics that combine classic structured SQL data (such as strings, numbers, and timestamps) with semi-structured SUPER data (such as JSON), with superior performance, flexibility, and ease of use.
Source: aws.amazon.com