9 data management sessions to check out at Next OnAir

At Google Cloud, we build databases that are simple to operate, secure, and designed to handle both what you’re running today and what you’re looking to build next. No matter where you are in your journey—whether you’re moving on-prem servers to the cloud, modernizing existing systems, or spinning up new workloads—this week at Google Cloud Next ‘20: OnAir is full of sessions, tips, and customer stories to help you meet your needs. Be sure to add these can’t-miss sessions to your playlists, and take a look at the full schedule. All sessions are going live starting tomorrow, August 18, at 9am PT, and will be available on-demand after that.

Keynote featuring ShareChat: What’s new with database management

Join Director of Product Management Penny Avril for this keynote session featuring new releases, use cases, and a conversation with ShareChat’s VP of Engineering Venkatesh Ramaswamy to learn how his team was able to lower operating costs by 30% while improving team efficiency by migrating to Google Cloud.

Data modernization: the McKesson story

Global healthcare company McKesson will show you how they modernized their databases using Google Cloud services, including migrating from Oracle to Cloud SQL. Their director of data analytics and innovation will walk through how they brought together multiple siloed legacy tools and data into a successful modern cloud architecture. Learn how they broke down complex database architectures into roadmap items and navigated complex challenges along the way.

What’s new with Cloud SQL

Cloud SQL makes it easy to run MySQL, PostgreSQL, and SQL Server in the cloud. Over the past year, we’ve added an impressive set of enterprise capabilities to meet your needs for security, reliability, and financial governance. Learn more about new features and enhancements, including point-in-time recovery and cross-region replicas, and how you can get started with them.
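As an illustrative sketch of how those Cloud SQL features might look from the gcloud CLI (the instance names, region, and timestamp below are assumptions, and exact flags can vary by database engine and gcloud version):

```shell
# Create a PostgreSQL instance with point-in-time recovery enabled
# (instance names and regions are illustrative).
gcloud sql instances create my-primary \
    --database-version=POSTGRES_12 \
    --region=us-central1 \
    --enable-point-in-time-recovery

# Recover from an unwanted change by cloning the instance
# at a specific moment in time.
gcloud sql instances clone my-primary my-recovered \
    --point-in-time="2020-08-18T16:00:00.000Z"

# Add a cross-region read replica for disaster recovery.
gcloud sql instances create my-replica \
    --master-instance-name=my-primary \
    --region=us-east1
```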

Switching from DynamoDB to Cloud Bigtable and Spanner

In this session, hear from ShareChat, one of the largest social media platforms in India, which migrated from AWS to Google Cloud. Learn how ShareChat replaced DynamoDB with Google’s Cloud Spanner database for better scale and improved efficiency.

NYT: Building a real-time collaborative editor with Firestore

The New York Times has reporters and editors all over the world working to write and produce breaking news on tight deadlines. Using Firestore, the technology team at the Times built a collaborative rich-text editor that for the first time allowed editors to see each other’s edits as they happen. You’ll hear how Firestore empowered the team at the Times to quickly build a highly scalable, collaborative article-editor app.

Future-proof your business for global scale and consistency with Cloud Spanner

In this session, learn how the massively scalable Spanner database helps you build apps that require strong consistency and high availability. You’ll also get a look at new and recently launched Spanner capabilities that offer even more enterprise benefits.

Simplify complex application development using Cloud Firestore

Get a look at how the Cloud Firestore database service makes it easy for developers to scale new and existing applications while adding real-time client data synchronization and offline mode capabilities. Learn about new Firestore capabilities and hear from Khan Academy how they responded to rapid growth in site usage during the COVID-19 pandemic.

Modernizing HBase workloads with Cloud Bigtable

See how you can move workloads running on the open-source HBase database to Bigtable, our fully managed, low-latency NoSQL database.
Bigtable can ease operational burdens and allow developers to use the service for large analytical and operational workloads with global replication.

Bare Metal Solution: Bring all your enterprise workloads to Google Cloud

Most legacy apps weren’t designed to run in the cloud, but businesses running those apps need to adapt quickly to take advantage of modern technology. In this session, check out how Bare Metal Solution addresses the risk and challenge of migrating legacy workloads by providing all the necessary infrastructure to run Oracle databases and other specialized workloads close to Google Cloud—securely and cost-effectively.

Check out the full lineup of databases and data management sessions. Have a question or project you’re working on? Remember to schedule time with our database experts during the week. We’ll see you in the cloud!
Source: Google Cloud Platform

Looking ahead as GKE, the original managed Kubernetes, turns 5

It’s hard to believe that GKE is already celebrating its 5th birthday. Over these last five years it’s been inspiring to see what businesses have accomplished with Google Cloud and GKE—from powering multi-million QPS retail services, to helping a game publisher deploy 1,700 times to production in the week of its launch, to accelerating research into treatments for both rare and common conditions in cardiology and immunology, to helping map the human brain. These were all made possible by Kubernetes. Today, as Virtual KubeCon kicks off, we want to say first and foremost, thank you to the community for making Kubernetes what it is, and making it the industry standard for managing containerized applications. While GKE transformed how businesses modernize their applications and pushed the bounds of what Kubernetes can achieve, such sustained innovation was only possible thanks to support and feedback from Kubernetes users and close partnership with GKE customers.

As we look ahead, we wanted to share five ways we’re continuing our work to make GKE the best place to run Kubernetes.

1. Leaving no app behind

Every workload deserves the benefits of portability, isolation, and security. The next wave of Kubernetes users wants to containerize their apps and derive those benefits, but often stumbles when confronted with the complexity of getting started with Kubernetes. Embracing Kubernetes shouldn’t have to be the hard way, and GKE has invested heavily in simplifying the entire journey—from creating your first cluster, to deploying your first app into production. GKE customers can now take advantage of Windows on GKE, as well as Google Cloud optimizations and best practices, to smoothly run workloads from traditional stateless to complex stateful and batch.
For enterprise applications, you have access to practical guidance for PCI DSS on GKE and infrastructure-level compliance resources, helping you achieve compliance while reaping the benefits of running your applications on GKE. Even workloads previously stuck on proprietary legacy mainframes can now be migrated to GKE, using automated code refactoring tools. You can also supercharge your advanced AI/ML workloads at an optimized cost-performance ratio using GPU and TPU support, the latter of which is only available in Google Cloud. If you’re interested in deploying GPU workloads across cloud, on-premises, and the edge, check out our partner NVIDIA’s GPU Operator for Anthos and GKE on-prem, which we are launching this week.

2. Saving money with optimal price-to-performance by default

Embracing Kubernetes isn’t just about developer velocity but also about cost optimization. GKE helps organizations improve resource utilization through efficient bin packing and auto-scaling. Although this provides some operational efficiency, you can achieve considerable additional savings using multi-dimensional auto-scaling, which, again, is only available on GKE. GKE clusters of all sizes use cluster autoscaling, which can help reduce costs over static clusters and reduce the complexity of ensuring Kubernetes scales to meet the needs of your business. You can save even more with flexible options for horizontal pod autoscaling based on both CPU utilization and custom metrics, as well as vertical pod autoscaling and node auto-provisioning. We’ve published our best practices for running cost-optimized Kubernetes applications on GKE to help you get started.

3. Container-native networking: no more square pegs in round holes

GKE is at the forefront of container-native networking. A new eBPF-based dataplane in GKE provides built-in Kubernetes network policy enforcement to support multi-tenant workloads.
It helps increase visibility into network traffic, augmented with Kubernetes metadata, for security-conscious enterprises. VPC-native integration affords IP management features such as flexible pod CIDR and non-RFC 1918 IP range support, letting you better manage your IP space and scale your clusters as needed. With network endpoint groups (NEGs), you can load balance HTTP traffic directly to pods for more stable and efficient networking. You can read more about our networking capabilities in this best practices blog post.

4. Bringing BeyondProd to containerized apps

Google protects its own microservices with an initiative called BeyondProd, a new approach to cloud-native security. This protection applies concepts like mutually authenticated service endpoints, transport security, edge termination with global load balancing, denial-of-service protection, end-to-end code provenance, and runtime sandboxing. By implementing the BeyondProd methodology for your containerized applications, GKE allows your developers to spend less time worrying about security while still achieving more secure outcomes.

5. Democratizing access to learning Kubernetes

Kubernetes, conceived and created at Google, is the industry standard for adopting containers and implementing cloud-native applications. Google is the largest engineering contributor[1] to the Kubernetes project, contributing to almost every subsystem, SIG, and working group. Google also funds and provides almost all of the infrastructure for Kubernetes development. We are deeply committed to continuing these contributions.

The growth and potential of Kubernetes is accelerating its usage across customers and creating more businesses focused on its distribution, hosting, and services. To wit: there are more than 64,500 job openings related to Kubernetes[2]. To support this growing demand, we are continuing to provide opportunities to learn Kubernetes through GKE.
You already have access to quickstarts, how-to guides, tutorials, and certifications from Coursera and Pluralsight. To make it even easier, from now until December 31, 2020 we’re providing Kubernetes training at no charge. Visit goo.gle/gketurns5 to learn more.

We can’t wait to see what customers will achieve with GKE in the next five years. Until then, we will leave you with this celebratory ‘5 things developers love about GKE’ video.

[1] DevStats from CNCF.io
[2] LinkedIn job search results for ‘Kubernetes’ keyword worldwide as of August 12, 2020.
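The combination of CPU-based and external-metric-based horizontal pod autoscaling described in section 2 can be expressed in a single manifest. This is a minimal sketch; the deployment name and Pub/Sub metric are illustrative assumptions, not taken from the post:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend              # illustrative deployment
  minReplicas: 3
  maxReplicas: 50
  metrics:
  # Scale on average CPU utilization across pods...
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
  # ...and on an external metric, e.g. Pub/Sub subscription backlog.
  - type: External
    external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
      target:
        type: AverageValue
        averageValue: "30"
```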
Source: Google Cloud Platform

Deploy GPU workloads across all your clouds with Anthos and NVIDIA

We are very excited to announce a joint solution with NVIDIA, now publicly available to all users in beta, that allows customers to run NVIDIA GPU workloads on Anthos across hybrid cloud and on-premises environments.

Running GPU workloads between clouds

Machine learning is one of the fastest-growing application segments in the market today, powering many industries such as biotech, retail, and manufacturing. With such unprecedented growth, customers are facing multiple challenges. The first is the difficult choice of where to run your ML and HPC workloads. While the cloud offers elasticity and flexibility for ML workloads, some applications have latency, data size, or even regulatory requirements that mean they need to reside within certain data centers and at edge locations. The other challenge is high demand for on-prem GPU resources; no matter how fast organizations onboard GPU hardware, demand is always greater than supply, so you need to always maximize the investment in your GPUs. Organizations are also looking for a hybrid architecture that maximizes both cloud and on-prem resources. In this architecture, bursty and transient model development and training can run in the cloud, while inference and steady-state runtime can be on-prem, or vice versa.

Anthos and ML workloads

Anthos was built to enable customers to easily run applications both in the cloud and on-prem. Built on Kubernetes, Anthos’ advanced cluster management and multi-tenancy capabilities allow you to share your ML infrastructure across teams, increasing utilization and reducing the overhead of managing bespoke environments. Anthos also allows you to run applications anywhere, whether they reside on-prem, on other cloud providers, or even at the edge.
The flexibility of deployment options with Anthos, combined with open-source ML frameworks such as TensorFlow and Kubeflow, lets you build truly cloud-portable ML solutions and applications. In addition to in-house developed applications, you can use Anthos to deploy Google Cloud’s best-in-class ML services such as Vision AI, Document AI, and many others in your data center and at edge locations, turbocharging ML initiatives in your organization.

Our collaboration with NVIDIA

For this solution, we’ve built on our strong relationship with NVIDIA, a leader in AI/ML acceleration. The solution uses the NVIDIA GPU Operator to deploy the GPU drivers and software components required to enable GPUs in Kubernetes. The solution works with many popular NVIDIA data center GPUs such as the V100 and T4. This broad support allows you to take advantage of your existing and future investments in NVIDIA GPUs with Anthos. For more information about supported NVIDIA platforms, please check the NVIDIA GPU Operator documentation. You can also learn more about other Google Cloud and NVIDIA collaborations.

Getting started

This solution is available in beta and works with Anthos on-prem 1.4 or later. For instructions on getting started with NVIDIA GPUs on Google Cloud’s Anthos and supported NVIDIA GPUs, please refer to the documentation here.
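As a minimal sketch of what running a GPU workload looks like once the GPU Operator has installed the drivers and device plugin, a pod simply requests the nvidia.com/gpu resource. The pod name and sample image below are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add         # illustrative name
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    image: k8s.gcr.io/cuda-vector-add:v0.1   # a commonly used CUDA sample image
    resources:
      limits:
        nvidia.com/gpu: 1       # request one NVIDIA GPU exposed by the device plugin
```

The scheduler places the pod only on a node that advertises the nvidia.com/gpu resource, so the same manifest works wherever the GPU Operator is running.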
Source: Google Cloud Platform

Google Cloud VMware Engine explained: Integrated networking and connectivity

Editor’s note: This is the first installment in a new blog series that dives deep into our Google Cloud VMware Engine managed service. Stay tuned for other entries on migration, integration, running stateful database workloads, and enabling remote workers, to name a few.

We recently announced the general availability of Google Cloud VMware Engine, a managed VMware platform service that enables enterprises to lift and shift their VMware-based applications to Google Cloud without changes to application architectures, tools, or processes. With VMware Engine, you can deploy a private cloud—an isolated VMware stack—that consists of three or more nodes, enabling you to run the VMware Cloud Foundation platform natively. This approach lets you retire or extend your data center to the cloud, use the cloud as a disaster recovery target, or migrate and modernize workloads by integrating with cloud-native services such as BigQuery, Cloud AI, etc.

But before you can do that, you need easy-to-provision, high-performance, highly available networking to connect between:

- On-premises data centers and the cloud
- VMware workloads and cloud-native services
- VMware private clouds in single- or multi-region deployments

Google Cloud VMware Engine networking leverages existing connectivity services for on-premises connections and provides seamless connectivity to other Google Cloud services. Furthermore, the service is built on high-performance, reliable, and high-capacity infrastructure, giving you a fast and highly available VMware experience at a low cost. Let’s take a closer look at some of the networking features you’ll find on VMware Engine.
High availability and 100G throughput

Google Cloud VMware Engine private clouds are deployed on enterprise-grade infrastructure with redundant and dedicated 100Gbps networking that provides 99.99% availability, low latency, and high throughput.

Integrated networking and on-prem connectivity

Subnets associated with private clouds are allocated in Google Cloud VPCs and delegated to VMware Engine. As a result, Compute Engine instances in the VPC communicate with VMware workloads using RFC 1918 private addresses, with no need for external IP-based addressing. Private clouds can be accessed from on-prem using existing Cloud VPN or Cloud Interconnect-based connections to Google Cloud VPCs, without additional VPN or Interconnect attachments to VMware Engine private clouds. You can also stretch your on-prem networks to VMware Engine to facilitate workload migration. Furthermore, for internet access, you can choose to use VMware Engine’s internet access service or route internet-bound traffic from on-prem to meet your security or regulatory needs.

Access to Google Cloud services from VMware Engine private clouds

VMware Engine workloads can access other Google Cloud services such as Cloud SQL, Cloud Storage, etc., using options such as Private Google Access and Private Service Access. Just like a Compute Engine instance in a VPC, a VMware workload can use private access options to communicate with Google Cloud services while staying within a secure and trusted Google Cloud network boundary. As such, you don’t need to exit out to the public internet to access Google Cloud services from VMware Engine, regardless of whether internet access is enabled or disabled. This provides for low-latency and secure communication between VMware Engine and other Google Cloud services.

Multi-region connectivity between VMware private clouds

VMware workloads in private clouds in the same region can talk to one another directly—without needing to “trombone” or “hairpin” across the Google Cloud VPCs.
In the case where VMware workloads need to communicate with one another across regions, they can do so using VMware Engine’s global routing service. This approach to multi-region connectivity doesn’t require a VPN or any other latency-inducing connectivity options.

Access to full NSX-T functionality

VMware Engine supports full NSX-T functionality for VMware workloads. With this, you can use VMware’s NSX-T policy-based UI or API to create network segments, gateway firewall policies, or distributed/east-west firewall policies. In addition, you can also leverage NSX-T’s load balancer, NAT, and service insertion functionality.

Networking is critical to any enterprise’s cloud transformation journey—even more so for VMware-related use cases. The networking functionality in VMware Engine makes it easy for you to take advantage of the scale, flexibility, and agility that Google Cloud provides without compromising on functionality.

What’s next

In the coming weeks, we’ll share more about VMware Engine and migration, building business resiliency, enabling work from anywhere, and your enterprise database options. To learn more or to get started, visit the VMware Engine website, where you’ll find detailed information on key features, use cases, product documentation, and pricing.
Source: Google Cloud Platform

App modernization enabled this retail solution provider to converge physical and digital commerce

As Google Cloud CEO Thomas Kurian said earlier this year, digital transformation in the retail industry is more than just a requirement, it’s a race, and the retailers that transform the fastest will be the most successful in the long term. COVID-19’s impact on the retail industry has only exacerbated this challenge, forcing retailers to rapidly marry digital and physical experiences by offering services such as curbside pickup and contactless checkout.

Long before 2020, Manhattan Associates, a leader in supply chain, inventory, and omnichannel e-commerce technology, recognized retailers needed a platform that could help them adapt to changing market conditions and bridge the gap from brick and mortar to digital commerce. As a result, they launched the Manhattan Active Solutions portfolio for their customers in retail, manufacturing, and distribution. Solutions like Manhattan Active Warehouse Management, the cloud-native version of their flagship warehouse management system, and Manhattan Active Omni, a suite of cloud-based inventory management, point-of-sale, and customer engagement tools, are versionless, so customers get continuous access to the latest features, with automated scaling for peak demand.

But for Manhattan Associates to deliver their vision to make their solutions portfolio versionless, agile, scalable, and extensible, they first needed to modernize their application environment. They knew lifting and shifting a monolithic architecture to the cloud would not deliver the level of agility that they—and their end-customers—would need, so they decided to go with microservices. Consequently, they re-architected Manhattan Active Warehouse Management and Manhattan Active Omni from the ground up to take full advantage of the benefits of the cloud.

Guiding principles behind Manhattan Associates’ cloud-native approach

First, Manhattan Associates established some critical architectural principles to serve as their north star for cloud-native application design.
These guiding principles, as outlined below, were created to help ensure Manhattan Associates solutions could deliver the engineering velocity demanded by their end-customers while also aiming to reduce the cost of innovation.

- Speed of innovation: Manhattan Associates strove to adopt a microservices-based, cloud-native approach so they could deliver new functionality faster than their competitors, and faster than they themselves had previously done.
- Extensibility: In addition to accelerating innovation in house, they also sought to ensure an open and extensible ecosystem so that their customers and partners can easily build and integrate additional capabilities without dependence on Manhattan Associates.
- Flexibility: With these managed SaaS versions of their flagship products, Manhattan Associates needed the flexibility to scale performance and resources based on their end-customers’ needs.
- Low cost of innovation: Finally, Manhattan Associates required the ability to leverage open-source technologies for portability and to avoid vendor lock-in. In addition, they required detailed logging and monitoring tools in order to maintain control over costs.

Partnering with Google Cloud to meet business objectives

In order to deliver on the imperatives outlined above, Manhattan Associates teamed with Google Cloud as their cloud platform.
Here’s a quick look at how they are leveraging key Google Cloud services to help meet the objectives laid out in their guiding principles.

- Achieving agility, scalability, and resiliency with Google Kubernetes Engine: Manhattan Associates built on top of their already extensive Kubernetes experience and took advantage of Google Kubernetes Engine managed features such as regional high availability, autoscaling, node auto-provisioning, and auto-repair functionality to help ensure scalability and resiliency for their containerized workloads.
- Gaining ease of use and openness with Cloud SQL for MySQL: Manhattan Associates team members were also long-time users of MySQL. With Cloud SQL for MySQL, they were able to continue using open-source software to run managed MySQL databases. In addition, Manhattan Associates could leverage Cloud SQL to support their environment setup and their customer needs. For instance, they can create a Cloud SQL database instance per project, and each project can correspond with an environment, like dev/test and production, or with a specific customer. They also have the flexibility to support optional add-ons for each customer, like additional database instances for high availability.
- Visibility and control with detailed monitoring and logging: Finally, Manhattan Associates demanded very detailed monitoring and logging to help ensure customer environments were running effectively across the stack. They leveraged a combination of open-source, custom, and Google Cloud tools like Cloud Logging, Cloud Monitoring, and Cloud Run, as well as in-house tools, to keep tabs on their environments and trigger alerts so that Manhattan Associates teams can take action when needed.

As a result, Manhattan Associates was able to deliver a truly cloud-native warehouse management and unified commerce platform, built using microservices.
They have been delivering on their goal of engineering velocity, releasing features quarterly versus annually, as was the case on their prior monolithic architecture. More importantly, they can pass this speed of innovation on to their end-customers.

To learn more about how Manhattan Associates rebuilt their platform in the cloud, check out the Manhattan Active Warehouse Management and Manhattan Active Omni on Google Cloud virtual breakout sessions at Manhattan Associates’ digital event, Momentum Connect. In these sessions, they’ll dive into more detail behind their cloud-native approach, their use cases, and how they combined Google Cloud, open-source, and home-grown tools to bring their Manhattan Active Warehouse Management and Manhattan Active Omni solutions to market.
Source: Google Cloud Platform

Build resilient applications with Kubernetes on Azure

Welcome to KubeCon EU 2020, the virtual edition. While we won’t be able to see each other in person at KubeCon EU this year, we're excited that this new virtual format of KubeCon will make the conference more accessible than ever, with more people from the amazing Kubernetes community able to join and participate from around the world without leaving their homes.

With everything that has been happening, the last year has been an up-and-down experience, but through it all I’m incredibly proud of the focus and dedication of the Azure Kubernetes team. They have continued to iterate on and improve Kubernetes on Azure, which provides an enterprise-grade experience for our customers.

Kubernetes on Azure (and indeed anywhere) delivers an open and portable ecosystem for cloud-native development. In addition to this core promise, we also deliver a unique enterprise-grade experience that ensures the reliability and security your workloads demand, while also enabling the agility and efficiency that businesses today desire. You can securely deploy any workload to Azure Kubernetes Service (AKS) to drive cost savings at scale across your business. Today, we're going to tell you about even more capabilities that can help you along on your cloud-native journey to Kubernetes on Azure.

Improving latency and operational efficiency

One of the key drivers of cloud adoption is reducing latency. It used to be that it took days to get physical computers and set them up in a cluster. Today, you can deploy a Kubernetes cluster on Azure in less than five minutes. These improvements benefit the agility of our customers. For customers who want to scale and provision faster, we are announcing a preview of ephemeral OS disk support which makes responding to new compute demands on your cluster even faster.
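As a rough sketch, adding a node pool that uses ephemeral OS disks might look like the following with the Azure CLI. The resource names are illustrative, and since the feature is in preview, the flags may require the aks-preview extension:

```shell
# Ephemeral OS disks may require the aks-preview CLI extension
# while the feature is in preview.
az extension add --name aks-preview

# Add a node pool whose OS disks live on local VM storage
# for faster provisioning and lower read/write latency
# (resource names below are illustrative).
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name fastpool \
    --node-osdisk-type Ephemeral \
    --node-count 3
```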

Latency isn’t just about the length of time it takes to create a cluster. It’s also about how fast you can detect and respond to operational problems. To help enterprises improve their operational efficiency, we’re announcing preview integration with Azure Resource Health, which can alert you if your cluster is unhealthy for any reason. We’re also announcing the general availability of node image updates, which allow you to upgrade the underlying operating system to respond to bugs or vulnerabilities in your cluster while staying on the same Kubernetes version for stability.
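A hedged sketch of the node image update workflow with the Azure CLI (resource and node pool names are illustrative):

```shell
# Check whether a newer node image is available for a node pool
# (resource names are illustrative).
az aks nodepool get-upgrades \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --nodepool-name nodepool1

# Upgrade only the node OS image, keeping the Kubernetes
# version unchanged for stability.
az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-image-only
```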

Finally, though Kubernetes has always enabled enterprises to drive cost savings through containerization, the new economic realities of the world during a pandemic mean that achieving cost efficiency for your business is more important than ever. We’ve got a great exercise that can help you learn how to optimize your costs using containers and the Azure Kubernetes Service.

Secure by design with Kubernetes on Azure

One of the key pillars of any enterprise computing platform is security. With market-leading features like policy integration and Azure Active Directory identity for pods, cloud-native security has always been an important part of the Azure Kubernetes Service. I’m excited about some new features we’ve added recently to further enhance the security of your workloads running on Kubernetes.

Though Kubernetes has built-in support for secrets, most enterprise environments require a more secure and more compliant implementation. In the Azure Kubernetes Service, being enterprise-grade means providing integration between Azure Key Vault and the Azure Kubernetes Service. Using Key Vault with Kubernetes enables you to securely store your credentials, certificates, and other secrets in a state-of-the-art, compliant secret store, and easily use them with your applications in an Azure Kubernetes cluster.
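As an illustrative sketch (the vault, tenant, and object names are assumptions), the integration is typically configured through a SecretProviderClass resource that tells the Secrets Store CSI driver which Key Vault objects to mount:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-kv-secrets        # illustrative name
spec:
  provider: azure
  parameters:
    usePodIdentity: "true"      # authenticate via AAD pod identity
    keyvaultName: my-key-vault  # illustrative vault name
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: db-password
          objectType: secret
```

A pod then references this class through a CSI volume using the secrets-store.csi.k8s.io driver, and the secret appears as a file in the mounted path.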

It’s even more exciting that this integration is built on the back of an open Container Storage Interface (CSI) driver that the Azure team built and open sourced for the entire Kubernetes community. Giving back to open source is an important part of what it means to be a community steward, and it was exciting to see our approach get validated as it was picked up and used by the HashiCorp Vault team for their secrets integration. Our open source team has been hard at work on improving many other parts of the security ecosystem. We’ve enhanced the CSI driver for Windows, and worked on cgroups v2 and containerd. If you want to learn more about how to secure your cloud-native workloads and make sure that your enterprise is following Microsoft’s best practices, check out our guide to Kubernetes best practices. It will teach you how to integrate firewalls, policy, and more to ensure you have both security and agility in your cloud-native development.

Next steps and KubeCon EU

I hope that you have an awesome KubeCon EU. As you go through the conference and learn more about Kubernetes, you can also learn more about Kubernetes on Azure with all of the great information online and in our virtual booth. If you’re new to KubeCon and Kubernetes and wondering how you can adopt Kubernetes for workloads from hobbyist to enterprise, we’ve got a great Kubernetes adoption guide for you.
Source: Azure