GKE usage metering: Whose line item is it anyway?

As Kubernetes gains widespread adoption, a growing number of enterprises and [P/S]aaS providers are using multi-tenant Kubernetes clusters for their workloads. These clusters could be running workloads that belong to different departments, customers, environments, etc. Multi-tenancy has a whole slew of advantages: better resource utilization, lower control plane overhead and management burden, reduced resource fragmentation, and reuse of extensions/CRDs, to name a few. However, the advantages do come at a cost. When running Kubernetes in a multi-tenant configuration, it can be hard to:

Estimate which tenant is consuming what portion of the cluster resources
Determine which tenant introduced a bug that led to a sudden usage spike
Identify the prodigal tenant(s) who may not be aware that they are wasting resources

We are pleased to announce the launch of Google Kubernetes Engine (GKE) usage metering in beta. The feature allows you to see your Google Cloud Platform (GCP) project’s resource usage broken down by Kubernetes namespaces and labels, and attribute it to meaningful entities (for example, department, customer, application, or environment). This enables a number of enterprise use cases, such as approximating cost breakdown for departments/teams that are sharing a cluster, understanding the usage patterns of individual applications (or even components of a single application), helping cluster admins triage spikes in usage, and providing better capacity planning and budgeting. SaaS providers can also use it to estimate the cost of serving each consumer.

How GKE usage metering works

When you enable GKE usage metering, resource usage records are written to a BigQuery table that you specify. Usage records can be grouped by namespace, labels, time period, or other dimensions to produce powerful insights. You can then visualize the data in BigQuery using tools such as Google Data Studio.

Optionally, you can enable network egress metering. With it, a network metering agent (NMA) is deployed into the cluster as a DaemonSet (one NMA pod running on each cluster node). The NMA is designed to be lightweight; however, it is important to note that an NMA runs as a privileged pod and consumes some resources on the node.

High-level architecture of the usage metering agent.

What customers are saying

Early adopters of GKE usage metering tell us that the feature improves the operational efficiency and flexibility of their organizations.

“We have found the usage metering feature very helpful as it lets us break down costs for each individual team using a multi-tenant cluster. Since the data is available directly in BigQuery, our finance team can easily access the data, without us as operators having to write any scripts or do calculations ourselves.” – Matthew Brown, Staff Software Engineer, Spotify

“Descartes Labs’ multi-tenant platform is built on top of Kubernetes and GKE to allow many different types of use cases, such as wide-range geospatial ML modeling, but no two workloads have the exact same resource footprint. Being able to isolate each user’s workload in its own Kubernetes namespace, then having tooling natively built into GKE to measure resources being consumed per namespace, provides us great visibility into how our platform is being leveraged and works similarly to the GCP billing export we already know well.” – Tim Kelton, Co-founder, head of SRE, Security, and Cloud Operations, Descartes Labs

Getting started

You can enable usage metering on a per-cluster basis (detailed instructions and relevant documentation can be found here). This enables one of GKE usage metering’s popular use cases: obtaining a cost breakdown of individual tenants. In the documentation, you’ll find sample BigQuery queries and plug-and-play Google Data Studio templates that join GKE usage metering and GCP billing export data to estimate a cost breakdown by namespace and labels (a minimal query sketch also appears at the end of this post). They allow you to create dashboards like this:

GKE users can visualize and dissect resource usage data to gain insights.

GKE usage metering best practices

The combination of namespaces and labels gives a lot of flexibility: users can segregate resource usage using namespaces, Kubernetes labels, or a combination of both. Taking the time to consciously plan the namespace/labeling strategy and standardizing it across your organization will make it easier to generate powerful insights down the road. The exact recommendations for setting namespaces and labels vary depending on factors such as the size of the company, the complexity of workloads, and the organizational structure. Here are some general guidelines to keep in mind:

While using too many namespaces can introduce complexity, using too few will make it hard to take advantage of multi-tenancy features. For a large company with multiple teams sharing the cluster, aim to have at least one namespace per team. For more details and scenarios, see this short video.
It is a good idea to define a required set of labels for the organization and make sure the essential attributes are captured for every application/object. For example, you can require every Kubernetes application to define the application ID, team name, and environment, and allow team members to customize additional labels as needed. However, keep in mind that taking this to the extreme and using too many labels may slow down some components.

Enabling effective multi-tenancy on Kubernetes

Enterprises and resellers using GKE clusters in a multi-tenant environment need to understand resource consumption on a per-tenant basis. GKE usage metering provides a flexible mechanism to dissect and group GKE cluster usage based on namespaces and labels. You can find detailed documentation on our website. And please take a few minutes to give us your feedback and ideas to help us shape upcoming releases.
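To give a concrete flavor of the analysis this enables, here is a minimal sketch that totals usage per namespace with the BigQuery Python client. The project, dataset, and table names are placeholders, and the column names are illustrative; check the GKE usage metering documentation for the exact schema of your export table.

    # A minimal sketch, assuming a usage export table with namespace,
    # resource_name, usage.amount, and start_time columns (illustrative;
    # verify against the actual export schema).
    from google.cloud import bigquery

    client = bigquery.Client()

    query = """
        SELECT
          namespace,
          resource_name,
          SUM(usage.amount) AS total_usage
        FROM `my-project.my_dataset.gke_cluster_resource_usage`
        WHERE start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
        GROUP BY namespace, resource_name
        ORDER BY namespace
    """

    # Iterating the query job waits for completion and yields result rows.
    for row in client.query(query):
        print(f"{row.namespace} {row.resource_name}: {row.total_usage}")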
Quelle: Google Cloud Platform

Cloud Bigtable brings database stability and performance to Precognitive

[Editor’s note: Today we’re hearing from Precognitive, which develops technology that interprets data to improve the accuracy of fraud detection and prevention, with the goals of reducing false positives and avoiding customer disruption. Their quest for the right database led them to Cloud Bigtable, and we’re bringing you their story here.]

At Precognitive, we were able to start with a blank technology slate to support our fraud detection software products. When we started building the initial version of our platform in 2017, we had some decisions to make: What coding language to use? What cloud infrastructure provider to choose? What database to use? The majority of the decisions were straightforward, but we struggled to decide upon a database. We had plenty of collective experience with relational databases, but not with a wide-column database like Cloud Bigtable—which we knew we’d need to scale our behavior and device workloads. At launch, our products were supported by a self-managed database, but we quickly migrated to Cloud Bigtable, and we love it.

To efficiently support our bursty, real-time fraud detection workloads, we needed a cloud database that could satisfy the following key requirements:

Stability to keep up with increased adoption of our products
Intelligent scaling that avoids bottlenecks
Native integrations with BigQuery and Cloud Dataproc
Managed services that free up our engineers’ time to work on our products

Adding Cloud Bigtable as our performance database

As we scaled our services and added customers, our data collection services for our Device Intelligence and Behavioral Analytics products were seeing thousands of events per second. Cloud Bigtable provided a stable managed database that could handle the volume we were receiving during peak hours. We weren’t always able to handle this scale, as an early version of our product utilized a self-managed database. Every month, two or three engineers spent hours managing the database instances. Whenever the instances crashed, it would cost at least one engineer a day or two of productivity attempting to restore the instances and recovering any data from our backup database. Managing this database internally was taking precious time away from product development.

We circled back to Cloud Bigtable. After two weeks of R&D, we decided to switch the Device Intelligence and Behavioral Analytics services to Cloud Bigtable.

Cloud Bigtable solved our scaling issues. Cloud Bigtable had been attractive to us from the start because it was fully managed, and offered regional replication and other features we were lacking in our own managed instances. Cloud Bigtable provides horizontal scaling and automatically rebalances row keys (equivalent to a shard key) over time to prevent “hot” nodes. In addition, Cloud Bigtable provides a connector to BigQuery and Cloud Dataproc that allows us to analyze the terabytes of data we are processing and use that data for unsupervised machine learning.

The perks of using Cloud Bigtable

After the migration to Cloud Bigtable, we noticed a number of additional benefits: improved I/O performance, a significant cost reduction, and a sizable decrease in hours spent on database maintenance.

We measured some of our typical metrics before and after implementing Cloud Bigtable. Our request latency dropped by about 30 ms on average (to sub-10 ms) for API requests. Prior to the change, we were seeing latencies of 40+ ms on average. This latency drop on our Behavioral Analytics and Device Intelligence products allowed us to trim about an additional 10 to 15 ms off our average response time across all dependent services.

Before we moved to Cloud Bigtable, we had to scale our database instances every time a new customer was onboarded. We were over-scaling in an attempt to avoid constantly resizing our database servers. By sunsetting our self-managed database and switching to Cloud Bigtable, we cut database infrastructure costs by approximately 35% and can now scale as needed, with a couple of clicks, during onboarding. We have spent zero hours managing a Cloud Bigtable database since launch, and we put the time we are saving every month toward product development.

Moving forward with Cloud Bigtable

As an engineering team, we love working with Cloud Bigtable. We are not only seeing improved developer experience and reduced latency, which keeps the engineers happy, but also reduced costs, which keeps the business happy. We’re able to build more product, too, with the time we’ve saved by switching to Cloud Bigtable. Stay tuned to our engineering blog for more on the lessons we’ve learned and our contributions to the wider Cloud Bigtable community.
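To illustrate the row-key point above, here is a minimal write sketch using the Cloud Bigtable Python client. The instance, table, and column-family names are hypothetical; prefixing keys with a device or tenant identifier is one common way to spread writes and avoid hot nodes.

    # A minimal write sketch; "events-instance", "device-events", and the
    # "events" column family are hypothetical names for illustration.
    import datetime
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("events-instance").table("device-events")

    # Row keys are ordered lexicographically; leading with a device id
    # spreads writes across nodes instead of hot-spotting on timestamps.
    row = table.direct_row(b"device#123#2019-01-28T12:00:00")
    row.set_cell("events", "payload", b'{"score": 0.93}',
                 timestamp=datetime.datetime.utcnow())
    row.commit()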
Quelle: Google Cloud Platform

Azure.Source – Volume 67

Now in preview

Introducing IoT Hub device streams in public preview

Azure IoT Hub device streams is a new PaaS service that addresses the need for security and organization policy compliance by providing a foundation for secure end-to-end connectivity to IoT devices. At its core, an IoT Hub device stream is a data transfer tunnel that provides connectivity between two TCP/IP-enabled endpoints: one side of the tunnel is an IoT device and the other side is a customer endpoint that intends to communicate with the device. IoT Hub device streams address end-to-end connectivity needs by leveraging an IoT Hub cloud endpoint that acts as a proxy for application traffic exchanged between the device and service. IoT Hub device streams are particularly helpful when devices are placed behind a firewall or inside a private network.

Azure IoT Hub Device Streams
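To make the tunnel idea concrete, here is a conceptual sketch of the proxy pattern involved, not the IoT Hub SDK itself: a service-side local proxy accepts a plain TCP client (an SSH client, say) and relays bytes over an established tunnel connection. All endpoints shown are placeholders.

    # Conceptual sketch of the local-proxy/tunnel pattern only (not the
    # actual IoT Hub device streams SDK; endpoints are placeholders).
    import socket
    import threading

    def pipe(src, dst):
        """Copy bytes from one socket to the other until EOF."""
        while data := src.recv(4096):
            dst.sendall(data)

    listener = socket.socket()
    listener.bind(("127.0.0.1", 2222))  # local endpoint for the TCP client
    listener.listen(1)
    local, _ = listener.accept()

    tunnel = socket.socket()            # stands in for the device-stream tunnel
    tunnel.connect(("tunnel.example.net", 443))

    threading.Thread(target=pipe, args=(local, tunnel), daemon=True).start()
    pipe(tunnel, local)                 # relay in both directions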

Announcing the preview of OpenAPI Specification v3 support in Azure API Management

Azure API Management has just introduced preview support of OpenAPI Specification v3, the latest version of the broadly used open-source standard for describing APIs. We based the implementation of this feature on the OpenAPI.NET SDK. OpenAPI Specification is a widely adopted industry standard that enables you to abstract your APIs from their implementation in a language-agnostic, easy-to-understand format. The wide adoption of OpenAPI Specification (formerly known as Swagger) has resulted in an extensive tooling ecosystem. If your APIs are defined in an OpenAPI Specification file, you can easily import them into Azure API Management (APIM). APIM helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. Once the backend API is imported into APIM, the APIM API becomes a façade for the backend API.
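For reference, here is a minimal sketch of what an OpenAPI v3 document looks like, generated from Python for convenience; the API itself (title, path, parameter) is invented for illustration.

    # Build and save a minimal OpenAPI v3 description (illustrative API).
    import json

    spec = {
        "openapi": "3.0.0",
        "info": {"title": "Orders API", "version": "1.0.0"},
        "paths": {
            "/orders/{id}": {
                "get": {
                    "summary": "Fetch a single order",
                    "parameters": [{
                        "name": "id",
                        "in": "path",
                        "required": True,
                        "schema": {"type": "string"},
                    }],
                    "responses": {"200": {"description": "The order"}},
                }
            }
        },
    }

    with open("orders-openapi.json", "w") as f:
        json.dump(spec, f, indent=2)  # this file can then be imported into APIM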

Regulatory compliance dashboard in Azure Security Center now available

The regulatory compliance dashboard in Azure Security Center (ASC) provides insight into your compliance posture for a set of supported standards and regulations, based on continuous assessments of your Azure environment. The ASC regulatory compliance dashboard is designed to help you improve your compliance posture by resolving recommendations directly within the dashboard. Click through to each recommendation to discover its details, including the resources for which the recommendation should be implemented. The regulatory compliance dashboard preview is available within the standard pricing tier of Azure Security Center, and you can try it for free for the first 30 days.

Public preview: Read replicas in Azure Database for PostgreSQL

You can now replicate data from a single Azure Database for PostgreSQL server (master) to up to five read-only servers (read replicas) within the same Azure region. This feature uses PostgreSQL's native asynchronous replication. With read replicas, you can scale out your read-intensive workloads. Read replicas can also be used for BI and reporting scenarios. You can choose to stop replication to a replica, in which case it becomes a normal read/write server. Replicas are new servers that can be managed in similar ways as normal standalone Azure Database for PostgreSQL servers. For each read replica, you are billed for the provisioned compute in vCores and provisioned storage in GB/month.
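As a rough sketch of the read-scaling pattern, assuming placeholder server names and credentials, an application can route writes to the master and reporting reads to a replica; because replication is asynchronous, replicas may lag slightly.

    # Hypothetical hostnames/credentials; writes go to the master,
    # read-intensive queries to a replica.
    import psycopg2

    master = psycopg2.connect(
        host="mydb.postgres.database.azure.com",
        user="admin@mydb", password="...", dbname="app")
    replica = psycopg2.connect(
        host="mydb-replica.postgres.database.azure.com",
        user="admin@mydb-replica", password="...", dbname="app")

    with master, master.cursor() as cur:    # transactional write on the master
        cur.execute("INSERT INTO events (kind) VALUES (%s)", ("login",))

    with replica, replica.cursor() as cur:  # reporting query on a replica
        cur.execute("SELECT kind, count(*) FROM events GROUP BY kind")
        print(cur.fetchall())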

Now generally available

HDInsight Tools for Visual Studio Code now generally available

The Azure HDInsight Tools for Visual Studio Code are now generally available on Windows, Linux, and Mac. These tools provide best-in-class authoring experiences for Apache Hive batch jobs, interactive Hive queries, and PySpark jobs. They are built on a cross-platform, lightweight, keyboard-focused code editor that removes platform constraints and dependencies. Azure HDInsight Tools for Visual Studio Code is available for download from the Visual Studio Marketplace.
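For flavor, this is the kind of PySpark job you might author in VS Code and submit to an HDInsight cluster; the storage path is a placeholder.

    # A small PySpark job: top ten words in a set of logs (placeholder path).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("top-words").getOrCreate()

    lines = spark.read.text("wasbs://data@myaccount.blob.core.windows.net/logs/")
    words = lines.selectExpr("explode(split(value, ' ')) AS word")
    words.groupBy("word").count().orderBy("count", ascending=False).show(10)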

Azure Service Bus and Azure Event Hubs expand availability

Availability Zones is a high availability offering that protects applications and data from datacenter failures. Availability Zones support is now generally available for Azure Service Bus premium and Azure Event Hubs standard in every Azure region that has zone redundant datacenters. Note that this feature won’t work with existing namespaces—you will need to provision new namespaces to use this feature.  Availability Zones support for Azure Service Bus Premium and Azure Event Hubs Standard is available in the following regions: East US 2, West US 2, West Europe, North Europe, France Central, and Southeast Asia.

Azure Cognitive Services adds important certifications, greater availability, and new unified key

Over the past six months, we added 31 certifications across services in Cognitive Services and will continue to add more in 2019. With these certifications, hundreds of healthcare, manufacturing, and financial use cases are now supported. In addition, Cognitive Services now offers more assurances for where customer data is stored at rest. These assurances have been enabled by graduating several Cognitive Services to Microsoft Azure Core Services. Also, the global footprint for Cognitive Services has expanded over the past several months, going from 15 to 25 Azure data center regions. Recently, we launched a new bundle of multiple services, enabling the use of a single API key for most of our generally available services: Computer Vision, Content Moderator, Face, Text Analytics, Language Understanding, and Translator Text.
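The unified key works because the services share a common endpoint scheme and the Ocp-Apim-Subscription-Key header. Here is a rough sketch with a placeholder key and region; the endpoint paths and API versions shown may have changed since, so verify them against each service's reference docs.

    # One key, two services (placeholder key/region; verify endpoint versions).
    import requests

    KEY = "YOUR_COGNITIVE_SERVICES_KEY"
    REGION = "westus"
    HEADERS = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}

    # Text Analytics sentiment scoring with the shared key...
    sentiment = requests.post(
        f"https://{REGION}.api.cognitive.microsoft.com/text/analytics/v2.1/sentiment",
        headers=HEADERS,
        json={"documents": [{"id": "1", "language": "en", "text": "I love this."}]},
    )

    # ...and Computer Vision image analysis with the very same key.
    analysis = requests.post(
        f"https://{REGION}.api.cognitive.microsoft.com/vision/v2.0/analyze",
        params={"visualFeatures": "Description"},
        headers=HEADERS,
        json={"url": "https://example.com/image.jpg"},
    )
    print(sentiment.json(), analysis.json())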

Also generally available

Access generally available functionality in Azure Database Migration Service to migrate Amazon RDS for SQL Server, PostgreSQL, and MySQL to Azure while the source database remains online during migration:

Support for Amazon RDS for SQL Server to Azure SQL Database online migrations
Support for Amazon RDS for PostgreSQL to Azure Database for PostgreSQL online migrations
Support for Amazon RDS for MySQL to Azure Database for MySQL online migrations

News and updates

Microsoft and Citus Data: Providing the best PostgreSQL service in the cloud

On Thursday, Microsoft announced the acquisition of Citus Data, creator of an innovative open source extension that scales out PostgreSQL databases without the need to re-architect existing applications. Citus delivers unparalleled performance and scalability by intelligently distributing data and queries across multiple nodes, which makes sharding simple. Because Citus is packaged as an extension (not a fork) of PostgreSQL, customers can take advantage of all the innovations in community PostgreSQL with queries that are significantly faster compared to proprietary implementations of PostgreSQL. More information is available in this post by Rohan Kumar, Corporate Vice President, Azure Data: Microsoft acquires Citus Data, re-affirming its commitment to Open Source and accelerating Azure PostgreSQL performance and scale.
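To show what "sharding made simple" means in practice, here is a minimal sketch, with illustrative table and column names, of distributing a table with the Citus extension from Python:

    # Distribute a table across worker nodes with one SQL call (illustrative
    # names; assumes the Citus extension is installed on the coordinator).
    import psycopg2

    conn = psycopg2.connect("host=coordinator dbname=app user=citus")
    with conn, conn.cursor() as cur:
        cur.execute("CREATE TABLE events (tenant_id bigint, payload jsonb)")
        # Rows are hash-distributed by tenant_id; queries filtered on
        # tenant_id are routed to a single shard.
        cur.execute("SELECT create_distributed_table('events', 'tenant_id')")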

Export data in near real-time from Azure IoT Central

You can now export data in near real-time from your Azure IoT Central app to your own Azure Event Hubs and Azure Service Bus instances. Use the new features in Continuous Data Export to export data to your own Azure Event Hubs, Azure Service Bus, and Azure Blob Storage instances for custom warm path and cold path processing, and analytics on your IoT data. Watch this episode of the Internet of Things Show to learn how to export device data to your Azure Blob storage, Azure Event Hubs, and Azure Service Bus using continuous data export in IoT Central. You’ll also learn how to set up continuous export for measurements, devices, and device template data, and how to use the exported data.

Export data from your IoT Central app to Azure Event Hubs and Azure Service Bus
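Downstream, the exported stream can be consumed like any other Event Hubs stream. Here is a minimal sketch using the Event Hubs Python SDK (v5 API shown; the connection string and hub name are placeholders):

    # Receive IoT Central's exported telemetry from Event Hubs (placeholders).
    from azure.eventhub import EventHubConsumerClient

    client = EventHubConsumerClient.from_connection_string(
        "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...",
        consumer_group="$Default",
        eventhub_name="iot-central-export",
    )

    def on_event(partition_context, event):
        # Each event body is a measurement payload exported by IoT Central.
        print(partition_context.partition_id, event.body_as_str())

    with client:
        client.receive(on_event=on_event, starting_position="-1")  # from start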

HDInsight Metastore Migration Tool open source release now available

Microsoft Azure HDInsight Metastore Migration Tool (HMMT) is an open-source shell script that you can use to apply bulk edits to the Hive metastore. HMMT is a low-latency, no-installation solution for challenges related to data migrations in Azure HDInsight. This blog post explains how HMMT relates to the Hive metastore and Hive storage patterns, covers the tool’s design and initial setup steps, and finally walks through some sample migrations that demonstrate its usage and value.

Azure Backup now supports PowerShell and ACLs for Azure Files

Azure Backup now supports preserving and restoring New Technology File System (NTFS) access control lists (ACLs) for Azure files in preview. You can now script your backups for Azure File shares using PowerShell. Use the PowerShell commands to configure backups, take on-demand backups, or even restore files from your file shares protected by Azure Backup. Using the “Manage backups” capability in the Azure Files portal, you can take on-demand backups, restore file shares or individual files and folders, and even change the policy used for scheduling backups. You can also go to the Recovery Services vault that backs up the file share and edit the policies used to back up Azure File shares. Backup alerts for the backup and restore jobs of Azure File shares are enabled, letting you configure notifications of job failures to chosen email addresses.

Analyze data in Azure Data Explorer using KQL magic for Jupyter Notebook

Jupyter Notebook enables you to create and share documents that contain live code, equations, visualizations, and explanatory text. Its uses include data cleaning and transformation, numerical simulation, statistical modeling, and machine learning. KQL magic commands extend the functionality of the Python kernel in Jupyter Notebook, enabling you to write KQL queries natively and query data from Microsoft Azure Data Explorer. You can easily interchange between Python and KQL, and visualize data using the rich Plotly library integrated with KQL render commands. KQL magic supports Azure Data Explorer, Application Insights, and Log Analytics as data sources to run queries against. KQL magic also works with Azure Notebooks, JupyterLab, and the Visual Studio Code Jupyter extension.
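A typical session looks like the following, run cell by cell in a notebook. The 'help' cluster and its Samples database are Microsoft's public demo resources; the connection-string format may vary between Kqlmagic versions, so treat it as illustrative.

    # Cell 1: load the extension (install first with: pip install Kqlmagic).
    %reload_ext Kqlmagic

    # Cell 2: connect to the public 'help' cluster's Samples database.
    %kql azureDataExplorer://code;cluster='help';database='Samples'

    # Cell 3: query StormEvents and render the result inline.
    %kql StormEvents | summarize count() by State | top 5 by count_ | render barchart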

Hyperledger Fabric updates now available

Hyperledger Fabric is an enterprise-grade distributed ledger that provides modular components, enabling customization of components to fit various scenarios. You can now download from the Azure Marketplace an updated template for Hyperledger Fabric that supports Hyperledger Fabric version 1.3. The automation provided by this solution is designed to make it easier to deploy, configure and govern a multi-member consortium using the Hyperledger Fabric software stack. This episode of Block Talk walks through the Hyperledger Fabric ledger and discusses the core features you can use to customize the deployment of Hyperledger Fabric in your environment. 

Hyperledger Fabric on Azure

Additional news and updates

Azure FXT Edge Filer (Avere Update)
M-series virtual machines (VMs) are now available in the Australia Central region.

Technical content

Connecting Node-RED to Azure IoT Central

In this post, Peter Provost, Principal PM Manager, Azure IoT, shows how simple it is to connect a temperature/humidity sensor to Azure IoT Central using a Raspberry Pi and Node-RED. Node-RED is a flow-based, drag and drop programming tool designed for IoT. It enables the creation of robust automation flows in a web browser, simplifying IoT project development.

Getting started with Azure Blueprints

Azure Blueprints (currently in preview) helps you define which policies (including policy initiatives), RBAC settings, and ARM templates to apply on a subscription basis, making it easy to set configurations at scale, knowing that any resources created in those subscriptions will comply with those settings (or will show as non-compliant in the case of audit policies). Sonia provides an intro to the service, showing how Blueprints group configuration controls such as Azure Policy and RBAC, and then uses an example scenario to demonstrate how and why to use Blueprints to simplify compliance and governance.

RStudio Server on Azure

RStudio Server Pro, the premier IDE for the R programming language, is now available on the Azure Marketplace, letting you launch it on a virtual machine of your choice. David details the benefits of this new offering and also lists alternative solutions for developers interested in running a self-managed instance of RStudio Server.

Sneak Peek: Making Petabyte Scale Data Actionable with ADX Part 2

To celebrate the recent announcement of free private repos on GitHub, Ari released a sneak peek of what he's working on for Part 2 of his "Making Petabyte Scale Data Actionable with Azure Data Explorer" series.

Azure shows

The Azure Podcast | Episode 263 – Partner Spotlight – Aqua Security

Liz Rice, Technical Evangelist at Aqua Security and master of all things security in Kubernetes, talks to us about her philosophy on security and gives us some great tips and tricks on how to secure your container workloads in Azure, on-premises, or in any cloud.


Azure Friday | An intro to Azure Cosmos DB JavaScript SDK 2.0

Chris Anderson joins Donovan Brown to discuss Azure Cosmos DB JavaScript SDK 2.0, which adds support for multi-region writes, a new fluent-style object model—making it easier to reference Azure Cosmos DB resources without an explicit URL—and support for promises and other modern JavaScript features. It is also written in TypeScript and supports the latest TypeScript 3.0.

AI Show | Learn by Doing: A Look at Samples

Gain an understanding of the landscape of sample projects available for Cognitive Services.

Five Things | Five Reasons Why You Should Check Out Cosmos DB

What does a giant Jenga tower have in common with NoSQL databases? NOTHING. But we're giving you both anyway. In this episode, Burke and Jasmine Greenaway bring you five reasons that you should check out Cosmos DB today. They also play a dangerous game of Jenga with an oversized tower made out of 2×4's, and someone nearly gets crushed.

The DevOps Lab | Verifying your Database Deployment with Azure DevOps

While at Microsoft Ignite | The Tour in Berlin, Damian speaks to Microsoft MVP Houssem Dellai about some options for deploying your database alongside your application. Houssem shows a few different ways to deploy database changes, including a clever pre-production verification process for ensuring your production deployment will succeed. Database upgrades are often the scariest part of your deployment process, so having a robust check before getting to production is very important.

Overview of Managed Identities on Azure Government

In this episode of the Azure Government video series, Steve Michelotti talks with Mohit Dewan, of the Azure Government Engineering team, about Managed Identities on Azure Government. Whether you’re storing certificates, connection strings, keys, or any other secrets, Managed Identities is a valuable tool to have in your toolbox. Watch this video to see how quick and easy it is to get up and running with Managed Identities in Azure Government.
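Under the hood, a managed identity means code on the VM can request tokens from the local instance metadata endpoint instead of storing any secret. Here is a minimal sketch; the resource URI shown is the public-cloud Key Vault audience, and Azure Government uses different audience URIs, so treat it as illustrative.

    # Request a token via the Azure Instance Metadata Service (runs on the VM).
    import requests

    resp = requests.get(
        "http://169.254.169.254/metadata/identity/oauth2/token",
        params={
            "api-version": "2018-02-01",
            "resource": "https://vault.azure.net",  # audience varies by cloud
        },
        headers={"Metadata": "true"},  # required header
    )
    token = resp.json()["access_token"]
    # Send 'token' as a Bearer token when calling the target service.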

Azure Tips and Tricks | How to create a container image with Docker

In this edition of Azure Tips and Tricks, learn how to create a container image to run applications with Docker. You’ll see how to create a folder inside a container and create a script to execute it.
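If you prefer to drive Docker from code rather than the CLI shown in the video, the Docker SDK for Python can build and run the same image. The tag and script path below are hypothetical, and a Dockerfile is assumed to exist in the current directory.

    # Build an image and run a script inside it via docker-py (hypothetical
    # tag and script path; assumes ./Dockerfile exists).
    import docker

    client = docker.from_env()
    image, _ = client.images.build(path=".", tag="hello-app:latest")
    output = client.containers.run(image, command="sh /app/run.sh", remove=True)
    print(output.decode())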

Azure Tips and Tricks | How to manage multiple accounts, directories, and subscriptions in Azure

Discover how to easily manage multiple accounts, directories, and subscriptions in the Microsoft Azure portal. In this video, you'll learn how to log in to the portal and manage multiple accounts, how to establish the contexts between accounts and directories, and how to filter and scope the portal at a few different levels, down to billable subscriptions.

The Azure DevOps Podcast | Paul Hacker on DevOps Processes and Migrations – Episode 020

In this episode, Paul Hacker is joining the Azure DevOps Podcast to discuss DevOps processes and migrations. Paul has some really interesting perspectives on today’s topic and provides some valuable insights on patterns that are emerging in the space, steps to migrating to Azure DevOps, and common challenges (and how to overcome them). Listen to his insight on migrations, DevOps processes, and more.


Events

Microsoft Ignite | The Tour

Learn new ways to code, optimize your cloud infrastructure, and modernize your organization with deep technical training. Join us at the place where developers and tech professionals continue learning alongside experts. Explore the latest developer tools and cloud technologies and learn how to put your skills to work in new areas. Connect with our community to gain practical insights and best practices on the future of cloud development, data, IT, and business intelligence. Find a city near you and register today. In February, the tour visits London, Sydney, Hong Kong, and Washington, DC.

Customers, partners, and industries

Security for healthcare through cloud agents and virtual patching

For a healthcare organization, security and protection of data is a primary value, but solutions can be attacked from a variety of vectors, such as malware, ransomware, and other exploits. The attack surface of an organization can be complex; email and web browsers are immediate targets of sophisticated hackers. One Microsoft Azure partner, XentIT (pronounced "ex-ent-it"), is devoted to protecting healthcare organizations despite the complexity of the attack surface. XentIT leverages two other security services with deep capabilities and adds its own expertise to create a dashboard-driven security solution that lets healthcare organizations better monitor and protect all assets.

AI & IoT Insider Labs: Helping transform smallholder farming

Microsoft’s AI & IoT Insider Labs was created to help all types of organizations accelerate their digital transformation. Learn how AI & IoT Insider Labs is helping one partner, SunCulture, leverage new technology to provide solar-powered water pumping and irrigation systems for smallholder farmers in Kenya. SunCulture, a 2017 Airband Grant Fund winner, believed sustainable technology could make irrigation affordable enough that even the poorest farmers could use it without further aggravating water shortages. The company set out to build an IoT platform to support a pay-as-you-grow payment model that would make solar-powered precision irrigation financially accessible for smallholders across Kenya.

A Cloud Guru | Azure This Week – 25 January 2019

This time on Azure This Week, Lars covers Azure Monitor logs for Grafana in public preview, the new Azure portal landing page, and why it is time to move on from Windows Server 2008.

Quelle: Azure

5 Reasons to Attend DockerCon SF 2019

 
If you can only attend one conference this year, make it matter. DockerCon is the one-stop event for practitioners, contributors, maintainers, developers, and the container ecosystem to learn, network, and innovate. And this year, we will continue to bring you all the things you love about DockerCon, like Docker Pals, the Hallway Track and roundtables, and the sessions and content you wanted more of – including open source, transformational, and practical how-to talks. Take advantage of our lowest ticket price when you register by January 31, 2019. No codes required.
<Register Now>

And in case you are still not convinced, here are a few more reasons you shouldn’t miss this year’s DockerCon:

1. Belong. The Docker Community is one of a kind, and the best way to feel a part of it is at DockerCon. Take advantage of the Docker Pals Program, Hallway Track, roundtables, and social events to meet new people and make lasting connections.

2. Think big. Docker containers and our container platform are being used everywhere for everything – from sending rockets to space, to literally saving the earth from asteroids, to keeping e-commerce running smoothly for Black Friday shoppers. Come to DockerCon and imagine your digital future.

3. Build your skills. DockerCon’s sessions prioritize learning with actionable takeaways – from tips and tricks for devs to real-world best practices for ops, from customer stories to the latest innovations from the Docker team.

4. Be the expert. Dive into topics such as machine learning, CI/CD, Kubernetes, developer tools, security, and more through the Hallway Track – a one-of-a-kind meeting tool that allows attendees to easily schedule one-on-one and group conversations about topics of their choosing.

5. Experience unparalleled networking. We know that one of the main reasons to attend a conference is who you will meet, and DockerCon brings together industry experts and practitioners at every stage of the container journey. So grow your network, meet with other attendees, and get to know the Docker team!


Quelle: https://blog.docker.com/feed/