Readers’ choice: Top Google Cloud Platform stories of 2018

We’re wrapping up a busy year here at Google Cloud. As you head into a new year, take a minute to catch up on what happened in 2018—and get some ideas about what you might do in 2019. Here’s what was most popular this year on the Google Cloud Platform (GCP) blog, based on readership, and organized generally by key areas of cloud.

Building the right cloud infrastructure for your business

The many ways to build a cloud infrastructure keep expanding. Container tools like Kubernetes continued to grow in popularity, and we started to learn more about serverless computing possibilities.

On the container front, this year brought news of the gVisor sandbox for secure container isolation, so you can run a bigger variety of workloads. Plus, Jib came out this year: it’s an open-source Java containerizer, so you can build containers using familiar Java tools.

And at Next ‘18 we announced the Cloud Services Platform, a consistent development framework for your IT resources that gathers together cloud services to automate away tasks across on-prem and cloud infrastructure. The beta release of GPUs attached to preemptible VMs also came this year, making it more affordable to run large-scale ML workloads. And Cloud TPU hardware accelerators arrived (and continued to mature) to speed up and scale ML workloads programmed with TensorFlow.

Developing cloud apps on that infrastructure

Along with solid cloud foundations, cloud app development made strides in 2018. News of support for headless Chrome for Google Cloud Functions and Cloud Functions for Firebase got attention. And the newly revamped Cloud Source Repositories made a splash—it’s powered by the same underlying code search infrastructure that Google engineers use every day.

Now that you’ve found cloud, what are you gonna do with it?

Cloud technology infrastructure really started to mature this year, especially for emerging use cases like machine learning (ML) that need powerful back-end tools.

News of the Ethereum cryptocurrency dataset on BigQuery was a hit; it’s publicly available to use for analysis. A partnership with NASA’s Frontier Development Lab brought in Google Cloud to work on simulating and classifying the possible atmospheres of exoplanets.

Also popular on the blog this year: we added a PyTorch 1.0 Preview VM image to GCP so you can easily conduct deep learning experimentation with the newest PyTorch framework. Cloud Text-to-Speech made Google’s internal technology, powered by DeepMind, available for uses like call center responses, IoT device speech, and converting text into audio format.

And don’t forget the fun that’s powered by cloud, too. A post on the new open-source Agones project got a lot of attention; Agones uses Kubernetes to host and scale dedicated game servers. Open Match arrived this year too—this open-source project lets game developers bring their own logic to a common matchmaking framework when building multiplayer games.

Building the future cloud IT team

Cloud technology hasn’t just changed IT infrastructure; it’s changed IT teams and processes as well. Concepts like site reliability engineering (SRE) bring some new ways of thinking about structuring these processes.

This popular SRE vs. DevOps blog post laid out how SRE is similar to and different from DevOps, and described its availability targets, risk and error budgets, toil budgets, and more. Then, there was the Accelerate: State of DevOps 2018 research report, with lots of takeaways based on survey results from DevOps professionals.

Managing the modern cloud

Some essential cloud management basics also stood out among all the future-oriented, big-idea projects that got attention this year.

The guide to best practices for user account authorization was a useful read for anyone creating, handling, and authenticating GCP user accounts. Choosing strong database consistency also struck a chord, with details on why and how it’s important, with a particular focus on Cloud Spanner. Titan Security Keys became available in the Google Store this year. These FIDO security keys include a hardware chip with Google-engineered firmware for strong two-factor authentication.

That’s a wrap for 2018! We’re looking forward to seeing what you build (and read) next.
Source: Google Cloud Platform

Nurture what you create: How Google Cloud supports Kubernetes and the cloud-native ecosystem

At Google Cloud, we talk a lot about our belief in open source and open cloud. But what does that actually mean?

Usually, when you’re a leader in an open-source community like Kubernetes and there’s a big event (like this week’s KubeCon North America), that means launching a brand-new project. Launches are exciting, but maintaining a successful project like Kubernetes requires sustained investment and maintenance. We find that what really distinguishes a successful open-source project is the day-in, day-out nurturing that happens behind the scenes. And it’s more than coding—it’s things like keeping the project safe and inclusive, writing documentation, managing test infrastructure, responding to issues, working in project governance, creating mentoring programs, reviewing pull requests, and participating in release teams. So today, we thought we’d take this opportunity not to announce a project, but rather to reflect on some examples of what it means to us to be a part of the open-source cloud-native community.

“Open-source software is not free like sunshine, it’s free like a puppy.” – Sarah Novotny, Head of Open Source Strategy for GCP

Supporting communities and thinking differently

First and foremost, with Kubernetes, we fully support the core values of the project, as well as provide technical and non-technical contributions in ways that reinforce positive results for the entire community. Since its inception, we’ve remained the top contributor to the project. This is something we’re incredibly proud of, and we hope that our work helps make the entire cloud-native landscape richer.

Our commitment to open source also extends to making events more impactful. For example, this year, rather than produce new KubeCon conference swag, we donated diversity scholarships for 2019 to the CNCF instead. This aligns with our desire for inclusivity, and helps cultivate a stronger community. We also co-organized the Kubernetes Contributor Summit, so our community can have critical in-person interactions ahead of the full event.

Supporting the existing cloud-native ecosystem: etcd

Another example of our commitment to open source is supporting the etcd distributed key-value store, which has now joined the roster of CNCF projects. As the Kubernetes ecosystem matured, we saw the need for more support in this critical component. We dedicated full-time engineers to the project, including an etcd maintainer and two of the top five code committers in 2018. We led improvements to the etcd release process, expanding release-branch support from just the latest minor version to the latest three minor versions. We also dedicated staff to patch management duties and automating the release workflow, and actively helped stabilize etcd, hunting down and fixing issues including a critical boltdb data corruption issue. More recently, we contributed to the rewrite of the etcd client-side load balancer and led efforts to expand the metrics exposed by etcd for monitoring system health and performance.

We’re committed to the quality and production readiness of etcd. Our plans include making upgrades safer by adding zero-downtime downgrade support, and expanding test coverage over more version pairings of etcd with Kubernetes. Finally, we’re continually making coordinated improvements to both etcd and the Kubernetes storage layer that interfaces with it, to optimize scalability, performance, and ease of operability.

Enriching the cloud-native landscape

Our commitment to open source isn’t just limited to supporting communities and existing projects. We also hope to share many of the valuable lessons we have learned while building scalable, secure, and reliable systems, Kubernetes being a prime example.

A recent example is gVisor, based on technology Google uses to isolate and secure containerized workloads. As organizations run more heterogeneous and less-trusted workloads, there’s new interest in containers that provide a secure isolation boundary, and we wanted to share with the community how we’ve been tackling the problem internally. This in turn opened up broader discussions about the security challenges inherent in cloud-native architecture.

In an effort to make gVisor more accessible, we integrated it with Minikube, so you can try out gVisor locally, in a VM on your laptop. We’re also actively working to open up more of the project’s support infrastructure, plans, and processes, starting with a substantial system-call compatibility test suite with more than 1,500 tests.

Releasing gVisor as an open-source project underscores the many different ways communities can form and contribute across the cloud-native landscape. Sometimes those contributions aren’t explicitly code, but instead feedback or ways to do things better. Being open helps build communities of practice across all technology groups and stakeholders.

Improving the cloud-native developer experience

We understand that the day-to-day life of an application developer can be challenging in the cloud-native world, due to multiple points of divergence between how you run your application locally and in a production Kubernetes cluster. Our goal is to reduce these differences so all developers can have a positive experience in the Kubernetes ecosystem.

In March we released an important open-source tool for cloud-native development called Skaffold, which allows you to define the build, test, and deployment phases of your Kubernetes application in a single YAML file. In the skaffold dev command, this local pipeline is combined with an automated file watcher based on the build definition, creating a fast feedback loop—you can see your source file changes in your deployed app in seconds. This works both locally and in Google Kubernetes Engine (GKE), helping to provide a cohesive workflow.

Learn and share: How we cross-pollinate communities

Another effort within Google open source is to create templates and other starter materials for emerging projects to use for things like governance and contributions. Our hope is to eventually provide everything necessary to bootstrap a successful open-source project, as well as offer guidance at key inflection points in the project lifecycle. These are distilled from our experience working on projects like Kubernetes, Istio, Knative, and TensorFlow. To further improve these materials, we regularly bring community managers together across projects to discuss shared struggles, opportunities, and lessons learned, to avoid repeating antipatterns across projects. Scaling open-source contributions is important, especially if the goal is to ensure consistently positive and inclusive interactions across every project we support.

So, as we all celebrate the continued success of Kubernetes, remember to take the time to thank someone you see helping make the community better. It’s up to all of us to foster a cloud-native ecosystem that prizes the efforts of everyone who helps maintain and nurture the work we do together.

To stay up to date on what’s going on in the cloud-native community, both from Google and beyond, we urge you to subscribe to the Kubernetes Podcast. And if you’re interested in getting involved, please visit the links below.

Kubernetes for container scheduling and management [ Google Cloud | GitHub ]
Istio to connect, monitor, and secure microservices [ Google Cloud | GitHub ]
Knative to build, deploy, and manage modern serverless workloads [ Google Cloud | GitHub ]
Container tools to help with the entire lifecycle of containerized applications [ Google Cloud | GitHub ]
Kubeflow Pipelines to compose, deploy, and manage end-to-end machine learning workflows [ Google Cloud | GitHub ]
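To make the Skaffold workflow described above concrete, here is a minimal sketch of a configuration file and the command that drives the feedback loop. The image name, manifest path, and apiVersion are illustrative assumptions; the exact schema varies by Skaffold release.

```shell
# Write a minimal skaffold.yaml (field names are illustrative;
# check the schema for your installed Skaffold release).
cat > skaffold.yaml <<'EOF'
apiVersion: skaffold/v1beta1
kind: Config
build:
  artifacts:
    - image: gcr.io/my-project/my-app   # hypothetical image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
EOF

# Watch sources, then rebuild and redeploy on every change
# (requires skaffold and a configured cluster):
# skaffold dev
```

Running `skaffold dev` against this file is what wires the file watcher to the build and deploy phases, whether the target is a local cluster or GKE.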
Source: Google Cloud Platform

Azure Monitor for containers now generally available

We are happy to announce that Azure Monitor for containers is now generally available. Azure Monitor for containers monitors the health and performance of Kubernetes clusters hosted on Azure Kubernetes Service (AKS). Since the launch of the public preview at Build in May 2018, we have seen a lot of excitement from customers. Customers love the fact that you can enable monitoring as soon as you create an AKS cluster and get all the monitoring telemetry in a centralized location in Azure, without having to log in to containers or rely on other tools. Since the public preview, we have been adding more capabilities and refining the experience based on your feedback. Let’s look at some of the recent changes.

Multi-cluster view – You often have multiple AKS clusters to manage. Wouldn’t it be great to view and manage all your clusters together? The multi-cluster view discovers all AKS clusters across subscriptions, resource groups, and workspaces, and provides a health roll-up view. You can even discover clusters that aren’t being monitored and, with just a few clicks, start monitoring them. 

Drill down further into an AKS cluster with the Performance Grid view – To investigate further, you can drill down to the performance grid view, which shows the health and performance of your nodes, controllers, and containers. From the node view tab, you can easily spot a noisy-neighbor issue on a pod and drill further to see the controller it is part of. You can then see the controller’s limits, requests settings, and actual usage to determine whether you have configured it correctly. You can continue investigating by looking at the Kubernetes event logs associated with that controller.

Live debugging – We all know the importance of verifying that your application is working as expected, especially after you deploy an update. With live logs you get a real-time stream of your container logs directly in the Azure portal. You can pause the live stream and search within the log file for errors or issues. Unlike Azure Monitor logs, the live stream data is ephemeral and is meant for real-time troubleshooting.

Onboarding – In addition to the Azure portal, we have added more ways for you to automate onboarding Azure Monitor for containers.

Azure CLI and ARM template – With the add-on option you can onboard Azure Monitor for containers with a single command. The command will automatically create the default Log Analytics workspace and deploy the agent for you.

For new AKS clusters:

az aks create --resource-group myAKSCluster --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys 

For existing AKS clusters:

az aks enable-addons -a monitoring -n MyExistingAKSCluster -g MyExistingAKSClusterRG 

You can also enable monitoring for your containers by using an Azure Resource Manager (ARM) template. To learn more, please review the detailed instructions for onboarding using the Azure CLI and an ARM template.

Terraform – Similar to the ARM template, if you are using Terraform to deploy AKS clusters, you can enable monitoring right from the template. To learn more, read the documentation from Terraform on setting up an AKS cluster, the Log Analytics solution, and a workspace.
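As a rough sketch of what that looks like in practice, the monitoring addon can be declared on the cluster resource together with a Log Analytics workspace. The resource names below are hypothetical, and the exact block layout depends on your azurerm provider version, so treat this as illustrative rather than authoritative.

```terraform
resource "azurerm_log_analytics_workspace" "monitoring" {
  name                = "aks-monitoring-workspace"   # hypothetical name
  location            = "East US"
  resource_group_name = "myResourceGroup"
  sku                 = "PerGB2018"
}

resource "azurerm_kubernetes_cluster" "aks" {
  # ... other required cluster settings elided ...

  # Enable Azure Monitor for containers via the OMS agent addon.
  addon_profile {
    oms_agent {
      enabled                    = true
      log_analytics_workspace_id = azurerm_log_analytics_workspace.monitoring.id
    }
  }
}
```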

We would like to conclude with some inspiring words from one of our customers, Hafslund, a Nordic power company, with whom we recently published a case study:

“We found it easy to get Azure Monitor up and running for containers. The metrics and charts right out of the Monitor box are perfect to help us quickly tune our clusters and services and resolve technical issues.”

– Ståle Heitmann, CTO, Hafslund Nett AS

To learn more about Azure Monitor for containers, read our documentation, “Azure Monitor for containers overview.” Thank you for your feedback during the public preview and we look forward to your continued support as we add more exciting features and capabilities to Azure Monitor for containers.
Source: Azure

KubeCon North America 2018: Serverless Kubernetes and community-led innovation!

Welcome to KubeCon North America 2018, and welcome to Seattle. It’s amazing to get the chance to welcome you to my hometown, and the site of Kubernetes’ birth. It was barely five years ago that Joe, Craig, and I had the first small ideas and demos that eventually turned into this amazing project and community. I’m honored that, over the years, all of you have chosen to invest your time, energy, and enthusiasm in Kubernetes. Whether this is your first KubeCon or you’ve been here since the first one in San Francisco four years ago, welcome!

For the Azure Kubernetes team, KubeCon is especially exciting. It’s been a busy and fulfilling year: Azure Kubernetes Service (AKS) has been the fastest-growing service in the history of Azure Compute, and that’s been quite a ride! With KubeCon here, it’s a great chance to meet up with our customers and community collaborators and celebrate all the incredible things this community has accomplished.

For the Azure Kubernetes Service, we started with the journey of "how to make Kubernetes easier for our customers." For example, by letting Azure take care of deployment, operations, and management of the Kubernetes APIs and leveraging integrated tools, Maersk was able to free its engineers to focus on the things that make the most business impact. Furthermore, by taking advantage of the fully managed runtime environment provided by AKS, Siemens Healthineers realized shorter release cycles and achieved its desired continuous delivery approach in a highly regulated environment.

We're seeing more and more Java customers port their existing Java application stacks to AKS with little or no change. Xerox, for example, was able to run its Java apps in containers with no code modifications, and leveraged Helm charts to automate customer onboarding. As a result, for its DocuShare Flex Content Management platform, Xerox was able to reduce provisioning time from 24 hours to less than 10 minutes, accelerating sales and customer onboarding.

While we’re discussing Azure Kubernetes Service, it’s great to see more and more Azure services bring their strengths to Kubernetes. Here at KubeCon, we’re announcing the general availability (GA) of Azure Monitor for containers. Azure Cognitive Services has also announced containerization of its cognitive APIs, allowing users to take advantage of core cognitive technology on-premises, at the edge, or wherever your data lives. For the Azure Kubernetes team, it’s been an exceptionally busy month, starting with the announcement, at KubeCon Shanghai, of AKS in Azure’s China region. Just last week in Las Vegas, we announced the public preview of AKS virtual nodes, which, together with Azure Container Instances (ACI), helps customers take advantage of a serverless container infrastructure.

But honestly, the service that we build is only one (albeit very important) piece of what we work on as a team. Of equal importance is the work that we do in the open-source community, collaborating with others to develop novel solutions to our customers’ problems. With help from the community, like the great folks on the Open Policy Agent project, we launched an open-source policy controller for Kubernetes. This policy agent installs on Kubernetes clusters anywhere and can provide enterprises with assurances that developers will successfully build reliable and compliant systems. We are also announcing the Osiris open-source project, which enables efficient “scale-to-zero” for Kubernetes containers. This technology can power Functions as a Service, or any programming paradigm where you need rapid scale-up in response to customer traffic.
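As a rough illustration of how a workload might opt in to that scale-to-zero behavior, Osiris-style projects are typically driven by annotations on the Deployment. The annotation key and workload names below are assumptions based on the project's early conventions; check the project's README for the current values.

```shell
# Sketch: opting a Deployment in to scale-to-zero via an annotation.
# The annotation key is an assumption; verify it against the Osiris docs.
cat > app-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical workload
  annotations:
    osiris.deislabs.io/enabled: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:alpine
EOF

# Apply it to a cluster that has Osiris installed:
# kubectl apply -f app-deployment.yaml
```

With an annotation like this in place, the controller can scale the Deployment to zero replicas when it sees no traffic and scale it back up on the first request.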

With Docker, Bitnami, HashiCorp, and others, we’ve announced the Cloud Native Application Bundle (CNAB) specification. CNAB is a new distributed-application package format that combines Helm or other configuration tools with Docker images to produce complete, self-installing cloud applications. To see what CNAB can do for you, imagine being able to hand out a USB key to KubeCon attendees that could install your complete application. Finally, we’re celebrating the adoption of the Virtual Kubelet project into the CNCF sandbox, as we continue to work with VMware, AWS, hyper.sh, and others in the community to make nodeless Kubernetes a reality.

At KubeCon Shanghai, I talked about my thoughts on serverless Kubernetes and the evolution of cloud-native development. It’s a future driven by our mission of “Kubernetes for Everyone.” This includes reducing the complexity of Kubernetes operations by running your API for you in AKS and developing ‘nodeless’ Kubernetes with virtual nodes. It also means working on tools like Draft and the Kubernetes extension for Visual Studio Code (installed by nearly 175,000 people) that make Kubernetes a more integrated, easier-to-use experience.

At KubeCon North America, I’m taking off my forward-looking cap and instead talking about the development and maintenance of the Java, .NET, TypeScript, and Python clients for Kubernetes. Whether you’re interested in talking about the future of cloud computing or in adding features like port-forwarding to the TypeScript client, I’ll be around the conference all week at the Azure booth and in the hallway track.

When it comes to explaining Kubernetes, one of my favorites is the Children’s Illustrated Guide to Kubernetes. For this KubeCon, I’m incredibly excited to announce that Microsoft is donating the likeness of Phippy, and all of your favorite characters from the book, to the CNCF. To celebrate, we’re sharing a special second episode of the Children’s Illustrated Guide to Kubernetes. You can learn about the core concepts of Kubernetes in a fun way!

Whether you’re joining us in Seattle for KubeCon or watching the talk streams from afar, we’ve got some great resources to get you started with Kubernetes, including the recently published best practices we’ve gathered from our customers and a webinar I will be sharing on structuring Kubernetes projects in production.

Welcome to Seattle!

–brendan
Source: Azure

A hybrid approach to Kubernetes

We’re excited to see everyone at KubeCon this week! We’ve been working with our customers to understand how they’re thinking about Kubernetes and what we can do to make it easier for them. Azure Stack unleashes new hybrid capabilities for developing applications: you design, develop, and maintain your applications just as you do with Azure, and you can deploy to any of the Azure clouds. Your application’s location becomes a configuration parameter rather than a design constraint.

So how does Azure Stack work with containers, exactly? The way containers and hybrid cloud work together can help you solve many problems. You can create a set of apps in containers using the languages you love, like Node.js, Python, Ruby, and many others. You can also take advantage of the wide array of tooling available, including Visual Studio Code. You can deploy your container or set of containers to a mix of environments that meet your users’ requirements. For instance, you can keep your sensitive data local in Azure Stack and access current functionality such as Azure Cognitive Services in global Azure. Or you can develop your apps in global Azure, where your developers are, and then deploy the containerized apps to a private cloud in Azure Stack that becomes completely disconnected on board a submarine. The possibilities are endless.

Azure Stack allows you to run your containers on-premises in much the same way as you do with global Azure. You can choose the best place for your containers depending on data gravity, data sovereignty, or other business needs. Containers let you use Azure services from a host running on-premises, and let you take advantage of the secure infrastructure, integrated Role-Based Access Control, and seamless DevOps tools, allowing you to create a single pipeline targeting multiple Azure clouds. Your containers and supporting services are hosted in a secure infrastructure that integrates with your corporate network.

The Kubernetes Marketplace item, available in preview for Azure Stack, is consistent with Azure: because the template is generated by the Azure Container Service Engine, the resulting cluster will run the same containers as in AKS. It also complies with Cloud Native Computing Foundation conformance standards.

Your developers can also use the OpenShift Container Platform in Azure Stack. OpenShift provides a consistent container experience across Azure, Azure Stack, bare metal, Windows, and RHEL. OpenShift brings together Microsoft and Red Hat developer frameworks and partner ecosystems, as previously announced in September.

When you take your containers across Azure, Azure Stack, and Azure sovereign clouds, you should also consider that your application architecture likely depends on more than containers. Your application likely depends on numerous resources with different, specific versions. To make this easier to manage, we recently announced Cloud Native Application Bundles (CNAB), a new open-source package format specification created in close partnership with Docker and broadly supported by HashiCorp, Bitnami, and more. With Cloud Native Application Bundles, you can manage distributed applications using a single installable file, reliably provision application resources in different environments, and easily manage your application lifecycle without having to use multiple tools.
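To make the "single installable file" idea concrete, a CNAB bundle is described by a small manifest that points at an invocation image responsible for installing the application. The field names below follow the early draft of the specification and the bundle contents are hypothetical, so consult the CNAB specification for the current schema.

```json
{
  "name": "my-hybrid-app",
  "version": "0.1.0",
  "description": "Illustrative CNAB bundle manifest (draft-era field names)",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/my-hybrid-app-installer:0.1.0"
    }
  ]
}
```

A CNAB-aware tool reads a manifest like this and runs the invocation image, which carries the Helm charts, templates, or scripts needed to provision the application in whichever Azure cloud you target.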

This week is KubeCon, and if you are attending, you can see Kubernetes and Azure Stack in action in the Expo Hall. Please drop by our booth, #P18, to see great demos of the technologies I mentioned in this post.

I hope you find the information in this post useful! Stay tuned for new topics around developing hybrid applications and feel free to follow me on Twitter.

To learn more about hybrid application development, read the previous post in this series: "What you need to know when writing hybrid applications."
Source: Azure