HTTP/3 gets your content there QUIC, with Cloud CDN and Load Balancing

When it comes to the performance of internet-facing applications, HTTP/3 is no small step over HTTP/2: Google’s own roll-out of HTTP/3 reduced Search latency by 2%, reduced video rebuffer times on YouTube by 9%, and improved throughput on mobile devices by 7%. So today, we’re excited to bring support for HTTP/3 to all Google Cloud customers using Cloud CDN and HTTPS Load Balancing. With HTTP/3 support, you’ll see real-world improvements to your streaming video, image serving and API scaling behind our global infrastructure—all without having to change your applications.

What is HTTP/3?

HTTP/3 is a next-generation internet protocol built on top of QUIC, a protocol we developed and contributed to the IETF, the standards organization in charge of maintaining internet protocols. Together, HTTP/3 and QUIC address previous challenges with HTTP/2 around head-of-line blocking, security (TLS 1.3 is foundational to QUIC), and reliability over unreliable connections. The original Google QUIC (we call it “gQUIC”) will be phased out at the end of 2021, as the number of IETF QUIC clients is quickly surpassing those that support gQUIC. Importantly, your end users can benefit from HTTP/3 today: the latest versions of Mozilla Firefox, Google Chrome, and Apple’s iOS Safari all support HTTP/3 or plan to enable it by default in the next couple of months, as do popular libraries such as Cronet and libcurl.

Enabling HTTP/3

To use HTTP/3 for your applications, you can enable it on your external HTTPS Load Balancers via the Cloud Console or the gcloud SDK with a single click. Clients that don’t yet support HTTP/3, such as older browsers or networking libraries, won’t be negatively impacted: HTTP/3 uses the Alt-Svc HTTP header to let clients “opt in” if they support the protocol, and everyone else will continue to negotiate HTTP/2 or HTTP/1.1 as appropriate.

What’s next?

In the coming weeks, we’ll bring HTTP/3 to more users when it’s enabled by default for all Cloud CDN and HTTPS Load Balancing customers: you won’t need to lift a finger for your end users to start enjoying improved performance. If you want to learn more about how Cloud CDN works, check out our overview video, and keep an eye on our release notes to keep up with new features.
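If you’d rather switch it on ahead of the default rollout, here is a minimal sketch of enabling HTTP/3 with the gcloud CLI, assuming an existing external HTTPS load balancer; the proxy name and domain are placeholders:

```
# Enable IETF QUIC / HTTP/3 negotiation on an existing external HTTPS
# load balancer by updating its target HTTPS proxy.
# "my-https-proxy" is a placeholder for your proxy's name.
gcloud compute target-https-proxies update my-https-proxy \
    --quic-override=ENABLE

# Verify that responses now advertise HTTP/3 via the Alt-Svc header,
# which is how supporting clients discover and opt in to the protocol.
curl -sI https://www.example.com/ | grep -i '^alt-svc'
```

Setting the override back to NONE or DISABLE reverses the change, so this is easy to trial on a single proxy before rolling it out more broadly.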
Source: Google Cloud Platform

Vida Health invigorates virtual healthcare with Google Cloud solutions

Editor’s note: In this guest blog, we look at how healthcare startup Vida Health built a virtual platform on Google Cloud that cut costs and overhead, saves healthcare providers valuable time, and delivers machine learning capabilities that operationalize their data for better patient health outcomes.

At Vida Health, our virtual healthcare platform is designed to deliver whole-person healthcare by treating multiple conditions and integrating both mind and body medicine. In choosing Google Cloud to help us with our digital transformation, we were able to reduce costs by 60% by switching from a managed platform to Google Kubernetes Engine (GKE), and we are using Google solutions like BigQuery ML to innovate new products that help our patients and empower our clinicians.

Accelerating the heartbeat of digital transformations

Traditionally, healthcare has been a slow-moving industry with a bias toward risk aversion and maintaining the status quo. The COVID-19 pandemic challenged this mindset and encouraged many healthcare organizations to accelerate their plans for digital transformation. At the forefront of this transformation is virtual care/telehealth and the ability for providers to offer the same high-quality patient experience over the web and mobile as they do in person. During the pandemic, Vida Health faced challenges scaling our original infrastructure on another cloud provider to meet the growing demand. We also felt that this provider’s suite of machine learning (ML) services didn’t offer the value-add we were seeking. After researching competing cloud technologies, we chose Google Cloud for its flexible, secure, and scalable solutions that integrated seamlessly, reduced our operational overhead, and gave us the tools to build innovative products powered by ML.

A key differentiator of Vida in the healthcare marketplace is our platform. Where many competitors have solutions targeting single conditions, we took a horizontal approach, with a platform designed to treat multiple conditions and to integrate both mind and body. Nearly half of Americans have more than one chronic medical condition, and we want to help them with whole-person health solutions that acknowledge the reality of their situation. Our platform is powered by a spectrum of Google solutions, including Looker, an enterprise platform for business intelligence, data applications, and embedded analytics. With a unified dashboard experience, Looker helps us aggregate all of our data and gives us a holistic view of each patient. To take advantage of artificial intelligence (AI) and ML technologies, we were well situated by using BigQuery, Google’s serverless data warehouse, to store all of our data in one place. Even as our datasets in BigQuery grow more comprehensive, it remains easy for our ML engineers and data scientists to use and experiment on that data. We can then take that data into production with BigQuery ML, which allows us to build ML models with only SQL skills.

Prescribing ML for new use cases

In our use and exploration of AI/ML in our platform, we go beyond pure AI tools by including human-in-the-loop programs and treatments. For example, we provide coaches, therapists, and dieticians who work with each individual patient, providing tips, strategies, and accountability. Our patient-provider interactions are digitized and stored, giving us a robust training dataset that we can now operationalize using all of the Google tools available.
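Before we dig into the use cases, here’s what the “ML models with only SQL skills” workflow mentioned above can look like in practice. This is a hypothetical sketch run through the bq command-line tool; the dataset, table, and column names are illustrative placeholders, not Vida Health’s actual schema:

```
# Hypothetical sketch: train a classification model in BigQuery ML
# using nothing but SQL, submitted via the bq CLI. All dataset, table,
# and column names below are illustrative placeholders.
bq query --use_legacy_sql=false '
CREATE OR REPLACE MODEL health_ml.next_action_model
OPTIONS (
  model_type = "logistic_reg",
  input_label_cols = ["completed_next_step"]
) AS
SELECT
  days_on_platform,
  sessions_last_week,
  condition_count,
  completed_next_step
FROM health_ml.member_engagement'
```

Once trained, the model can be queried with ML.PREDICT in ordinary SQL, which is what makes this approach accessible to analysts as well as ML engineers.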
Using these provider interactions, we can track a patient’s progress to ensure they’ve improved their health outcomes, whether it’s weight loss, stress reduction, blood sugar management or beyond. We want to endow our providers with superhuman powers, which means using AI/ML to manage and automate all of the tasks that aren’t member-facing, freeing up the providers to focus their time and energy on their patients. We’re currently experimenting with our Google tools around transcribing the provider’s consultation notes and then applying data analysis to uncover insights that will lead to better health outcomes. Other time-saving solutions on our roadmap for providers include pre-filling standard fields in the chat function and managing end-of-day approvals.

We’re currently using BigQuery ML for our “next action recommender,” a member-facing feature on our mobile app that recommends the next step a patient can take in their treatment, based on past datasets of information provided by the patient. At the start of their journey, the steps might be basic, such as scheduling a consultation, adding a health tracker, or watching a health video. But the longer a patient uses our platform, the more sophisticated the recommendation system gets.

On the provider side, we have our Vidapedia, a comprehensive list of protocols for treatments that providers can follow. In the past year we’ve invested in Vidapedia cards, which are distinct sets of clinical protocols that have been codified. We’re up to 150 cards, and instead of providers needing to keep all of that information in their heads, we’re working on using BigQuery ML to extract the actions a patient has taken so far in their treatment. Using that data, we’ll then recommend to the provider the most relevant cards that apply to the specific conditions. Having that information at their fingertips reduces the amount of time they need to spend on each member offline, which helps us build efficiency and lower the cost of delivering care.

We’ve also used ML in our customer acquisition process, which has traditionally been a costly endeavor for healthcare startups. A company first needs to market and sell to payers and providers, and then understand the total addressable market (TAM) for their patient base before convincing that segment that their platform is the best decision. We’ve successfully applied ML to this process, sifting through hundreds of different data inputs to better predict who is likely to use our platform, saving us time and money.

Invigorating virtual healthcare with Google Cloud solutions

The rest of our current Google Cloud stack is robust, featuring BigQuery Slot Autoscaling, a preview feature that optimizes costs and scales for traffic spikes without a sacrifice in performance. We use Looker for data reporting and dashboarding, and Data Studio for quick, ad hoc data visualization. Our relational database is Cloud SQL for PostgreSQL, and we use Data Catalog for data discovery and search. Other Google services in our stack include GKE, Dataflow, Data Fusion, Cloud Scheduler, and AI Platform. The seamless integration between Google products and services has been impressive and time-saving. Many of our clinical protocols were originally written in Google Docs, and the ability to import that data directly into BigQuery has saved us so much time and effort.
Using Looker to then democratize access to that data internally across our organization, and BigQuery ML to build ML applications upon that data, feels like a secret weapon that puts us ahead of the competition. As the healthcare industry continues to adjust to the demands of a changing world, we’ll be working with Google Cloud to deliver cutting-edge solutions that exceed the needs of our patients and providers. Learn more about Vida Health, then apply for our Startup Program to get financial, business, and technical support for your startup. You can also read more about other organizations using Looker and BigQuery to modernize business intelligence.

Related article: How Lumiata democratizes AI in healthcare with Google Cloud
Source: Google Cloud Platform

Improving cloud operations and migrations with Google Cloud and ServiceNow

When organizations embrace cloud as a core component of their IT operations, they have a number of options: a wholesale migration to the public cloud, incremental or large-scale hybrid deployments, private clouds, or even running services across multiple clouds. The modern enterprise has more options than ever before in terms of where to host any individual workload, but also faces a rising level of complexity. According to Flexera’s 2021 State of the Cloud report, 92% of enterprises have a multicloud strategy. However, the majority are also faced with higher than planned cloud costs, and a need to optimize their existing cloud resources. These organizations have a few core needs in common. Specifically, they must:

- Maintain visibility into and control over critical applications and data, regardless of where their workloads reside
- Carefully plan and quickly execute their cloud migrations
- Maximize uptime
- Avoid outages—proactively

To help organizations accelerate their cloud migrations securely and efficiently, we are expanding our partnership with ServiceNow in four key ways.

1. Enabling real-time visibility

Typically, large organizations have applications and data spanning multiple locations: private clouds running on premises, one or multiple public clouds, or hybrid environments. Managing these disparate workloads and data can be challenging—teams need to know where workloads are and how they’re connected to ensure they can be properly managed. ServiceNow and Google Cloud are integrating Google Cloud Asset Inventory tools with ServiceNow IT Operations Management (ITOM) Visibility services. This will deliver real-time views of data and improved data quality in an organization’s configuration management database (CMDB) through automated updates and reduced operational overhead. Ultimately, this means IT teams will have better visibility into and management of workloads across their entire IT estate. As a result, they’ll be better positioned to leverage their existing governance and compliance models across cloud, hybrid, and on-premises deployments to optimize IT operations, reduce risk, and gain usage and cost reporting.

Travel technology company Sabre has been a longtime user of ServiceNow ITOM. Last year, Sabre announced a large-scale partnership with Google to migrate its platform onto Google Cloud. “Sabre provides customer-critical backbone technology for the travel industry, with superior uptime requirements demanding strict governance to ensure successful ongoing operations. ServiceNow has been our platform for IT Operations Management, and last year we announced a large-scale partnership with Google Cloud to migrate our platform onto its infrastructure,” said Charles Cinert, VP of Global Operations at Sabre. “We need assets and services deployed on Google Cloud to be reflected accurately in our ServiceNow CMDB, be able to leverage our existing IT workflows to provision assets on Google Cloud, and accelerate migration of our on-premises workloads onto Google Cloud. We are thrilled to see the partnership investing in these areas to provide us unified visibility, governance, and controls across on-premises and Google Cloud.”

2. Accelerating cloud migrations

Once a baseline management framework is established, IT teams can turn their focus to evaluating and actually migrating workloads.
For organizations with a complex IT landscape, this can be particularly challenging, as it can take a long time to identify which workloads to migrate, how applications should be migrated (lift-and-shift, refactored, or retired), and in what order to move them. Google Cloud created the Rapid Assessment and Migration Program (RAMP) to help organizations simplify their on-premises to cloud transition by combining multiple sources of workload data and providing recommendations for right-sizing, order of migrations, and other cloud optimizations. ServiceNow CMDB provides a single system of record for your IT infrastructure and digital service data. It’s able to assess everything you have and help you migrate it in the best way for your organization. Used in conjunction with Google Cloud RAMP, CMDB can provide critical workload performance and sizing data used to craft a migration strategy. Our mutual systems integration partners believe these integrated capabilities can accelerate cloud migration planning by up to 50%.

“Sabre realizes value as each workload lands in Google Cloud, and our migration spans thousands of workloads over the next 10 years,” adds Cinert. “This partnership represents a way for us to accelerate that effort while reducing the risk associated with it.”

3. Providing consistent governance across environments

When moving mission-critical workloads to the cloud, ensuring operational excellence is paramount. To support the highest levels of uptime, security, and control over data, we will expand our catalog of integration “spokes” available in ServiceNow’s IntegrationHub to include more Google Cloud and Google Workspace services. These spokes will enable organizations to manage their Google Cloud assets from ServiceNow, bringing the same level of operational rigor from an on-premises environment to a customer’s Google Cloud deployment. Customers will see improved governance and security around their cloud assets, aligned with existing IT workflows. This is particularly important as organizations create high-availability and disaster recovery environments for their critical applications. In addition, Google and ServiceNow are exploring advanced self-service access to provisioning workloads leveraging services such as Google Private Catalog Service and ServiceNow Service Catalog. IT organizations benefit from even greater agility while maintaining governance and control.

4. Predicting issues and automating resolutions with AIOps

IT organizations work tirelessly to minimize the impacts of service outages. ServiceNow AIOps leverages AI to sift through large sets of data to help organizations predict outages and automate resolutions. Through this partnership, ServiceNow AIOps can now consume and process IT event data from Google Cloud Ops (formerly Stackdriver) to identify trends and correlate data that may affect service levels. For example, ServiceNow ITOM Health can ingest Google Cloud event data, along with telemetry from logs, monitoring tools, and other data sources, to quickly identify root causes of issues and automate remediation of affected systems.

We are excited about this next phase in our alliance with ServiceNow. We aim to deliver significant value to customers with a refined focus on improving their visibility into the entire IT estate, accelerating cloud migrations, and bringing a new level of governance and AI-based operations to Google Cloud. These integrations will start becoming available in June.
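In the meantime, you can already explore the Google Cloud side of the visibility story yourself. As a rough sketch, assuming placeholder project and bucket names, the Cloud Asset Inventory data that the ITOM Visibility integration builds on can be exported with gcloud:

```
# Rough sketch: export a point-in-time snapshot of Cloud Asset
# Inventory, the same resource data the ITOM Visibility integration
# draws on, to a Cloud Storage bucket for inspection or ingestion.
# The project and bucket names are placeholders.
gcloud asset export \
    --project=my-project \
    --content-type=resource \
    --output-path=gs://my-asset-exports/snapshot.json
```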
If you are interested in learning more about the partnership and connecting with an expert at Google, please fill out this form.
Source: Google Cloud Platform

Struggling to fix Kubernetes over-provisioning? GKE has you covered!

Cost optimization is one of the leading initiatives, challenges, and sources of effort for teams adopting public cloud[1]—especially for those just starting their journey. When it comes to Kubernetes, cost optimization is especially challenging because you don’t want any efforts you undertake to negatively affect your applications’ performance, stability, or ability to serve your business. In other words, reducing costs cannot come at the expense of your users’ experience or risk to your business.

If you’re looking for a Kubernetes platform that will help you maximize your business value and at the same time reduce costs, we’ve got you covered with Google Kubernetes Engine (GKE), which provides several advanced cost-optimization features and capabilities built in. This is great news for teams that are new to Kubernetes, who may not have the expertise to easily balance their applications’ performance and stability, and as a result tend to over-provision their environments to mitigate potential impact on the business. After all, an over-provisioned environment tends not to run out of headroom or capacity, ensuring that applications meet users’ expectations for performance and reliability.

Cost optimization = reduce cost + achieve performance goals + achieve stability goals + maximize business value

While over-provisioning can provide short-term relief (at a financial cost), it’s one of the first things you should look at as part of a continuous cost-optimization initiative. But if you’ve tried to cut back on over-provisioning before—especially on other Kubernetes platforms—you’ve probably found yourself experimenting with random configurations and trying different cluster setups. As such, it’s not uncommon for teams to give up on cost optimization due to the amount of effort they put in relative to the results. Let’s take a look at how GKE differs from other managed Kubernetes services, and how it can reduce your need to over-provision and simplify your cost-optimization efforts.

The most common Kubernetes over-provisioning problems

Before jumping into GKE features and solutions that can help you optimize your costs, let’s first define the three main challenges that lead to over-provisioning of Kubernetes clusters:

- Bin packing – how well you pack applications onto your Kubernetes nodes. The better you pack apps onto nodes, the more you save.
- App right-sizing – the ability to set appropriate resource requests and workload autoscaling configurations for the applications deployed in the cluster. The more precisely you set resources for your Pods, the more reliably your applications will run and, in the vast majority of cases, the more space you’ll save in the cluster.
- Scaling down during off-peak hours – ideally, to save money during periods of low demand, for example at night, you should scale down your cluster along with actual traffic. However, there are cases when this doesn’t happen as expected, especially for workloads or cluster configurations that block Cluster Autoscaler.

In our experience, the most over-provisioned environments tend to have at least two of the above challenges. In order to effectively cost-optimize your environment, you should embrace a culture that encourages a continuous focus on the issues that lead to over-provisioning.

Tackling over-provisioning with GKE

Implementing a custom monitoring system is a common approach for reducing your reliance on over-provisioned resources. For bin packing, you can compare allocatable vs.
requested resources; for app right-sizing, requested vs. used resources; and for cluster utilization, you can monitor for Cluster Autoscaler being unable to scale down. However, implementing such a monitoring system is quite complex and requires the platform to provide specific metrics, resource recommendations, template dashboards, and alerting policies. To learn how to build such a system yourself, check out our “Monitoring your GKE clusters for cost optimization” tutorial, where we show you how to set up this kind of continuous cost-optimization monitoring environment, so that you can tune your GKE clusters according to recommendations without compromising your applications’ performance and stability.

Another element of a GKE environment that plays a fundamental role in cost optimization is Cluster Autoscaler, which provides nodes for Pods that don’t have a place to run and removes underutilized nodes. In GKE, Cluster Autoscaler is optimized for the cost of the infrastructure, meaning that if there are two or more node types in the cluster, it chooses the least expensive one that fits the current demand. If your cluster is not scaling down as expected, take a look at your Cluster Autoscaler events to understand the root cause. You may have set a higher than needed minimum node-pool size, or your Cluster Autoscaler may not be able to delete some nodes because certain Pods may cause temporary disruption if restarted. Some examples are system Pods (such as metrics-server and kube-dns) and Pods that use local storage. To learn how to handle such scenarios, please take a look at our best practices. And if you determine that you really do need to over-provision your workload to properly handle spikes during the day, you can reduce the cost by setting up scheduled autoscalers.

GKE’s cost-optimization superpowers

GKE also provides many other unique modes and features that may make you forget about having to do things like bin packing and app right-sizing. For example:

GKE Autopilot

GKE Autopilot is our best ever GKE mode, and it also delivers the ultimate cost-optimization superpower: in GKE Autopilot, you only pay for the resources you request, effortlessly getting rid of one of the biggest sources of waste—bin packing. That, and the fact that Autopilot automatically applies industry best practices, eliminates all node management operations, maximizes cluster efficiency, and provides a stronger security posture. And, with less infrastructure to manage, Autopilot can help you cut down even further on deployment man-hours and day-two operations.

If you decide not to use GKE Autopilot but still want to use the most cost-optimized defaults, check out GKE’s built-in “Cost-optimized cluster” setup guide, which will get you started with the key infrastructure features and settings you need to know about.

Node auto-provisioning

Bin packing is a complex problem, and even with a decent monitoring system like the one presented above, it requires constant manual tweaks. GKE removes the friction and operational costs associated with precise node-pool tweaking with node auto-provisioning, which automatically creates—and deletes—the most appropriate node pools for a scheduled workload. Node auto-provisioning is the evolution of Cluster Autoscaler, but with better cost savings and less knowledge and effort required on your part.
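As a hedged sketch, enabling node auto-provisioning on an existing cluster looks like the following; the cluster name and resource limits are placeholders you’d tune to your own workloads:

```
# Hedged sketch: enable node auto-provisioning on an existing cluster.
# The limits cap the total CPU (cores) and memory (GB) the cluster may
# auto-provision; the cluster name and limits are placeholders.
gcloud container clusters update my-cluster \
    --enable-autoprovisioning \
    --min-cpu 1 --max-cpu 64 \
    --min-memory 1 --max-memory 256
```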
Then, if you want to pack your node pools even more, you can select the optimize-utilization profile, which prefers scheduling Pods on the most utilized nodes and makes Cluster Autoscaler even more aggressive about scaling down. Beyond making cluster autoscaling fully automatic, this setup also maintains the least expensive configuration.

Pod autoscalers

App right-sizing requires you to fully understand the capacity of all your applications or, again, pass that responsibility over to us. In addition to the classic Horizontal Pod Autoscaler, GKE also provides a Vertical Pod Autoscaler and a Multidimensional Pod Autoscaler. Horizontal Pod Autoscaler is best for responding to spiky traffic by quickly adding more Pods to your cluster. Vertical Pod Autoscaler lets you right-size your application by figuring out, over time, your Pods’ capacity in terms of CPU and memory. Last but not least, Multidimensional Pod Autoscaler lets you define these two autoscaler behaviors using a single Kubernetes resource. These workload autoscalers give you the ability to automatically right-size your application and, at the same time, quickly respond to traffic volatility in a cost-optimized way.

Optimized machine types

Beyond the above solutions to the most common over-provisioning problems, GKE also helps you reduce costs by using E2 machine types by default. E2 machines are cost-optimized VMs that offer 31% savings compared to N1 machines. Or choose our new Tau machines, available in Q3 2021, which offer a whopping 42% better price-performance over comparable general-purpose offerings. Moreover, GKE also gives you the option to choose Preemptible VMs, which are up to 80% cheaper than standard Compute Engine VMs. (However, we recommend you read our best practices to make sure your workload will run smoothly on top of Preemptible VMs.)

Ensuring operational efficiencies

Optimizing costs isn’t just about looking at your underlying compute capacity—another important consideration is the operational cost of building, maintaining, and securing your platform. While that often gets overlooked when calculating total cost of ownership (TCO), it’s nevertheless important to keep in mind. To help save man-hours, GKE provides the easiest fully managed Kubernetes environment on the market. With the GKE console, the gcloud command line, Terraform, or the Kubernetes Resource Model, you can quickly and easily configure regional clusters with a high-availability control plane, auto-repair, auto-upgrade, native security features, automated operations, SLO-based monitoring, and more.

Last but not least, GKE is unmatched in its ability to scale a single cluster to 15,000 nodes. For the vast majority of users, this removes scalability as a constraint in your cluster design and pushes the boundaries of cost, performance, and efficiency for hyper-scaled workloads when you need it. In fact, we see up to 50% greater infrastructure utilization in large clusters where key GKE capabilities have been considered and applied.

What our customers are saying about their experience with GKE

Market Logic makes a marketing insights platform and says GKE’s four-way autoscaling and multi-cluster support helped it minimize its maintenance time and costs. “Since migrating to GKE, we’ve halved the costs of running our nodes, reduced our maintenance work, and gained the ability to scale up and down effortlessly and automatically according to demand.
All our customer production loads and development environment run on GKE, and we’ve never faced a critical incident since.” – Helge Rennicke, Director of Software Development, Market Logic Software. See more details in “Market Logic: Helping leading brands run an insights-driven business with a scalable platform.”

By switching to a containerized solution on Google Kubernetes Engine, Konga, Nigeria’s online marketplace, cut cloud infrastructure costs by two-thirds. “With Google Kubernetes Engine, we deliver the same or better functionality as previously in terms of being able to scale up to match traffic, but in its lowest state, the overall running cost of the production cluster is much less than the minimum costs we’d pay with the previous architecture.” – Andrew Mori, Director of Technology, Konga. Read more in “Konga: Cutting cloud infrastructure costs by two-thirds.”

What’s next

Building a cost-optimization culture and routines into your organization can help you balance performance, reliability, and cost. This in turn will give your team and business a competitive edge, helping you focus on innovation. GKE includes many features that can greatly simplify your cost-optimization initiatives. To get the most from the platform, make sure developers and operators are aligned on the importance of cost optimization as a continuous discipline. To help, we’ve prepared a set of materials: our GKE cost-optimization best practices, a five-minute video series (if you want to learn on the go), cost-optimization tutorials, and a self-service hands-on workshop to help you practice your skills. Moreover, we strongly encourage you to create internal discussion groups and run internal workshops to ensure all your teams get the most out of GKE. Last but not least, watch this space! We look forward to publishing more blog posts about cost optimization on GKE in the coming months!

[1] https://info.flexera.com/CM-REPORT-State-of-the-Cloud
Source: Google Cloud Platform

DockerCon LIVE 2021 Recapped: Top 5 Sessions

You came, you participated, you learned. You helped us pull off another DockerCon — and, my fellow developers, it was good. How good? About 80,000 folks registered for the May 27 virtual event — on a par with last year.

We threw a lot at you, from demos and product announcements to company updates and more — all of it focused on modern application delivery in a cloud-native world. But some clear favorites emerged. Here’s a rundown of the top 5 sessions, which zeroed in on some of the everyday issues and challenges facing our developer community.

#1. How Much Kubernetes Do I Need to Learn?

Kubernetes isn’t simple and the learning curve is steep, but the upside to mastering this powerful and flexible system is huge. So it’s natural for developers to ask how much Kubernetes is “just enough” to get productive. Clearly, many of you shared that question, making this the Número Uno session of DockerCon LIVE 2021. Docker Captain Elton Stoneman, a consultant and trainer at Sixeyed Consulting, walks you through the Kubernetes platform, clarifying core concepts around services, deployments, replica sets, pods, config maps and secrets, and sharing demos to show how they all work together. He also shows how simple and complex apps are defined as Kubernetes manifests, and clarifies the line between dev and ops.
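If you want to poke at those core concepts while you watch, here’s a minimal sketch using placeholder names that works against any test cluster, such as the Kubernetes bundled with Docker Desktop:

```
# Minimal sketch of the core Kubernetes objects the session covers,
# using placeholder names against any test cluster.

# A deployment manages a replica set, which in turn manages pods.
kubectl create deployment hello-web --image=nginx:alpine --replicas=2

# A service gives those pods one stable address inside the cluster.
kubectl expose deployment hello-web --port=80

# Config maps and secrets keep configuration out of the image.
kubectl create configmap hello-config --from-literal=GREETING=hello
kubectl create secret generic hello-secret --from-literal=API_KEY=dummy

# Inspect how the pieces relate to each other.
kubectl get deployments,replicasets,pods,services,configmaps
```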

#2. A Pragmatic Tour of Docker Filesystems

Mutagen founder Jacob Howard takes on the heroic task of dispelling the mists of confusion that developers often encounter when starting out with containerized development. Sure, container filesystems can seem like an impenetrable mess, but Jacob carefully makes the case for why the relationship between file systems and containers actually makes a lot of sense, even to non-developers. He also provides a pragmatic guide to container filesystem concepts, options and performance that can serve as a rule of thumb for selecting the right solution(s) for your use case.
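For a tiny taste of the trade-offs Jacob walks through, here’s a hedged sketch of the two most common ways to get files into a container, with placeholder names:

```
# Hedged sketch: two common container filesystem options.

# A named volume: managed by Docker and persists after the container
# is removed; generally the faster option for data the app owns.
docker volume create app-data
docker run --rm -v app-data:/var/lib/app alpine ls /var/lib/app

# A bind mount: shares a host directory with the container; handy for
# live code editing in development, with different performance
# characteristics than a named volume.
docker run --rm -v "$(pwd)":/src alpine ls /src
```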

#3. Top Dockerfile Security Best Practices

In this webinar, Alvaro Iradier Muro, an integrations engineer at Sysdig, goes deep on Dockerfile best practices for image builds to help you prevent security issues and optimize containerized applications. He shows you straightforward ways to avoid unnecessary privileges, reduce the attack surface with multistage builds, prevent confidential data leaks, detect bad practices and more, including how to go beyond image building to harden container security at runtime. It all comes down to building well-crafted Dockerfiles, and Alvaro shows how to do so by removing known risks in advance, so you can reduce security management and operational overhead.
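While the session centers on the Dockerfile itself, the runtime-hardening side Alvaro closes with can be sketched as docker run flags; the image name here is a placeholder:

```
# Hedged sketch: runtime hardening that complements a well-crafted
# Dockerfile. "myapp:latest" is a placeholder image name.
# --user: don't run the process as root
# --read-only: make the container's root filesystem immutable
# --cap-drop ALL: drop every Linux capability the app doesn't need
# --security-opt no-new-privileges: block privilege escalation
docker run --rm \
    --user 1000:1000 \
    --read-only \
    --cap-drop ALL \
    --security-opt no-new-privileges \
    myapp:latest
```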

#4. Databases on Containers

Only in the last few years has running high-performance stateful applications inside containers become a reality — a shift made possible by the rise of Kubernetes and performance improvements in Docker. Denis Souza Rosa, a developer advocate at Couchbase, answers many of the common questions that arise in connection with this new normal: Why should I run these applications inside containers in the first place? What are the challenges? Is it production ready? In this demo, Denis deploys a database and an operator, deliberately fails nodes, and shows how to scale up and down with almost no manual intervention using state-of-the-art technology.
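To give a flavor of the scaling shown in the demo, here’s a hedged sketch using a hypothetical StatefulSet name; a real operator-managed database would usually be scaled through the operator’s own custom resource instead:

```
# Hedged sketch: scale a stateful, containerized database workload.
# "mydb" is a hypothetical StatefulSet name; operator-managed databases
# are typically scaled via the operator's custom resource instead.
kubectl scale statefulset mydb --replicas=5   # scale up
kubectl scale statefulset mydb --replicas=3   # scale back down

# Watch the pods come and go as Kubernetes reconciles the change.
kubectl get pods -l app=mydb --watch
```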

#5. A Day in the Life of a Developer: Moving Code from Development to Production Without Losing Control

Learn how to take control of your development process in ways you never thought possible with Nick Chase, director of technical marketing and developer relations at Mirantis. Nick zeroes in on how only a true software development pipeline can head off serious problems, from security holes and configuration errors to business issues around executive approval for the promotion of changes. Along the way, he covers what a complete software supply chain looks like, common “weak links” and how to strengthen them, how to integrate your workflow as a developer, and what to do when business concerns affect the pipeline.

If you missed these popular sessions last month, now’s your chance to catch them. Or maybe you just want to see them again. Either way, check out the recordings. They’re informative, practical and free!

We have a complete container solution for you – no matter who you are and where you are on your containerization journey. Get started with Docker today here.
Source: https://blog.docker.com/feed/