The best of Google Cloud Next ’20: OnAir: Data Management Week for technical practitioners

It’s Week 6 of Google Cloud Next ’20: OnAir, and this week we’re covering all things databases, from running your favorite open source database with Cloud SQL to building new enterprise solutions using Cloud Spanner. We have a lot of great content to share with you this week, so let’s dig in.

Whatever database questions you have—Should I use SQL or NoSQL? Is it horizontally scalable and disaster recovery-ready?—you can get help answering them during this week’s sessions at Next OnAir and through some cool demos showing how to choose your database and how a high availability setup works (I am very proud of our team’s creativity!).

After checking out some must-see sessions below, if you have questions, I’ll be hosting a live developer- and operator-focused recap and Q&A session this Friday, August 21 at 9 AM PST. Or, join our APAC team for a recap Friday at 11 AM SGT. Hope to see you then.

Personally, I am looking forward to sharing all the amazing things the Cloud SQL team did in the past year in the session What’s New With Cloud SQL: so much good material, and the highly anticipated point-in-time recovery (PITR) for PostgreSQL.

Another session we couldn’t miss, and one that’s super useful for those using Kubernetes who may need a little help with connectivity between the products, is Connecting to Cloud SQL from Kubernetes: your application is scaling with all those nodes, so how about adding your persistent data storage into Cloud SQL and taking advantage of our high-availability setup?

Speaking of Kubernetes, we couldn’t let the cloud-native databases be forgotten:

Simplify complex application development using Cloud Firestore: Get a look at how the Firestore database service makes it easy for developers to scale new and existing applications while adding real-time client data synchronization and offline mode capabilities.

Modernizing HBase workloads with Cloud Bigtable: See how to move from HBase, your preferred open source NoSQL database, to Bigtable.

And if you’re still not sure which is the right tool for you, check out these sessions:

How to Choose the Right Database for Your Workloads: where you’ll see the advantages of each technology according to your needs.

Optimally Deploy an Application Cache with Memorystore: caching everywhere, folks!

Also, this week’s Cloud Study Jam will give you an opportunity to participate in hands-on labs on how to use BigQuery tables across different locations and create data transformation pipelines, so you can get real-world data management experience. You’ll also get a chance to learn more about how to prepare for Google Cloud’s Professional Data Engineer certification.

One thing to remember about databases is that there’s so much you can solve with just one database, and Google Cloud has a broad set of tools to help you solve your data problems. You may have your single source of truth on a Cloud SQL instance, but want to improve your login and product catalog experience by adding a caching layer with Cloud Memorystore. Being able to use these products together leads to better management and productivity—essential in a modern, fast-moving world.

Next OnAir is running now until Sep. 8. You can check out the full session catalog and register at g.co/cloudnext.
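The Cloud SQL-plus-Memorystore pattern described above is the classic cache-aside approach. Here is a minimal, illustrative sketch of the idea; a plain dict stands in for the Memorystore (Redis) client, and another dict stands in for the Cloud SQL source of truth, so none of the class or key names come from any Google API:

```python
import time

class CacheAsideCatalog:
    """Illustrative cache-aside lookup: try the cache first, fall back to
    the database on a miss, then populate the cache with a TTL."""

    def __init__(self, db, ttl_seconds=60):
        self.db = db            # stands in for a Cloud SQL query layer
        self.ttl = ttl_seconds
        self._cache = {}        # stands in for a Memorystore (Redis) client

    def get_product(self, product_id):
        entry = self._cache.get(product_id)
        if entry is not None and entry[1] > time.time():
            return entry[0]     # cache hit: no database round trip
        value = self.db[product_id]  # cache miss: read the source of truth
        self._cache[product_id] = (value, time.time() + self.ttl)
        return value

catalog = CacheAsideCatalog({"sku-1": "espresso machine"})
print(catalog.get_product("sku-1"))  # first call misses, then populates the cache
```

The same shape works with a real Redis client in place of the dict; the expiry would then be handled by Redis’s own TTL support rather than a timestamp check.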
Source: Google Cloud Platform

How McKesson used Google Cloud to rethink their database architecture

Editor’s note: We’re hearing today from pharmaceutical and pharmacy technology giant McKesson about how their teams modernized siloed infrastructure to use data better. Here’s how they did it.

McKesson Corporation is a global leader in healthcare supply chain management solutions, retail pharmacy, community oncology and specialty care, and healthcare information solutions. Every day, McKesson delivers one-third of all pharmaceuticals used in North America to more than 40,000 customers in retail chains, independent pharmacies, hospitals, long-term care providers, and more.

Technology change requires culture change
As McKesson continues to grow, so does our technology team. Over the last few years, we’ve become home to multiple databases, platforms, and data warehouses. With this variety of data tools, it was clear that we needed a stronger, long-term solution that would let us get the most out of our data—this wasn’t just a technological shift, but a cultural one as well.

Why we chose Google Cloud
While researching various cloud providers, we discovered that Google Cloud offered all the same strengths as the cloud platform we’d previously been using, while also providing unique tools and benefits that fit perfectly with our needs. Google Cloud changed the way we worked. As a legacy shop, we only had one option, which meant teams were using their own batch-process databases, with low-latency app calls during business hours and batch ETL processing during non-business hours. By moving to Google Cloud, we were able to take advantage of Cloud SQL, a fully managed relational database service for MySQL, SQL Server, and PostgreSQL—the service we preferred. The Google Cloud team also helped us create a quadrant system for migration that allowed us to see quick wins and add predictability for more complex migrations.

Additionally, the familiarity and the wealth of training tools allowed for an easy transition for our team—we’ve been able to retrain our existing team and give them the capacity to focus on areas more impactful to the business.

Continuing to evolve and grow with Google Cloud
We are continuing to move more data and teams onto Google Cloud, but one aspect that has really made the transition easier is Google Cloud’s advancement in multi-cloud. They’ve made it easier for us to get the best of multiple clouds, which allows us to fine-tune our solutions from a wider array of cloud options before locking in. With Google Cloud’s simplicity, security, and dedication to innovation in fields like healthcare, it was and continues to be a strong fit and an easy choice to provide the best for our teams, customers, and patients who depend on McKesson to keep them healthy.

Learn more about McKesson’s data modernization journey here.
Source: Google Cloud Platform

The world’s unpredictable; your databases shouldn’t add to it

The ability for your business to tackle the unpredictable has only become more critical for success—whether it’s contending with quick shifts in supply chains, running entirely through online channels, or handling unexpected surges in customer demand. At the core of being able to support this are the databases backing your business and powering applications. And that’s why Google Cloud has designed our databases with the same innovation necessary to handle the unpredictability of Google’s own business. This means building databases that are simple to use and manage; provide security and portability for your data; and are proven for even the most transformative applications.

This week at Google Cloud Next ’20: OnAir, we’ll dive into our entire portfolio of databases and showcase new advancements that continue to push the envelope. There are sessions to explore for wherever you are in your journey—from moving to the cloud to modernizing legacy systems and building new applications—along with great customer stories to hear how other companies are redefining what it means to be a modern business with Google Cloud.

One of those customers is ShareChat, one of India’s largest social media platforms. They recently completed their migration from Amazon DynamoDB to Cloud Spanner and Cloud Bigtable to better meet their needs for scale and cost efficiencies. While they’ve already realized 30% lower costs from the migration, the true value came when they experienced an unexpected 500% spike in traffic over just a few days and were able to scale horizontally, with zero lines of code change, to meet the increased demand. Check out this week’s keynote for the details.

What’s new across Google Cloud databases
Across our database portfolio, we continue to deliver new innovations to solve the hardest data problems and run the most mission-critical applications.
Here’s a look at what’s new.

Spanner: Better for developers, better for data reliability
Spanner leads the relational world in delivering the unique pairing of a relational database with non-relational scale and industry-leading five-9s availability. The Spanner Emulator, now generally available, lets application developers do correctness testing when developing an application. The emulator runs in an offline environment, thereby improving productivity and reducing costs. Spanner also provides a new C++ client library, additional introspection capabilities, an open-source JDBC driver, and Hibernate ORM integration. Plus, you’ll find an increased SQL feature set with foreign keys, query optimizer versioning, and WITH clause support. In addition, Spanner now offers new multi-region configurations for Asia and Europe with 99.999% availability, and user-managed backup and restore that lets you create backups of Spanner databases on demand.

Bigtable: Trusted for enterprise workloads, big and small
Bigtable, our NoSQL database service, now offers expanded capabilities such as managed backups to help customers achieve high business continuity and add data protection to their most mission-critical workloads with minimal management overhead. We’ve added table-level identity and access management (IAM) and audit logs for admin activity. We’ve also expanded support and SLA coverage for single-node production instances to make it even easier to use Bigtable for all of your key-value and wide-column use cases, both large and small.

Cloud Firestore: Better visibility and even easier app development
Firestore, which lets mobile and web developers build apps easily, now offers a richer query language, a C++ client library, and the Firestore Unity SDK to make it easy for game developers to adopt Firestore. We are also introducing tools to give you better visibility into usage patterns and performance with Firestore Key Visualizer, which will be coming soon.
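A practical note on the Spanner Emulator mentioned above: the official client libraries decide between the emulator and the production endpoint through an environment variable, so test code needs no changes. A minimal sketch of that switch; `SPANNER_EMULATOR_HOST` and the emulator’s default gRPC port 9010 are the documented mechanism, while the helper function itself is just an illustration:

```python
import os

def spanner_target():
    """Pick the gRPC target the way the official Spanner clients do:
    prefer the emulator whenever SPANNER_EMULATOR_HOST is set."""
    return os.environ.get("SPANNER_EMULATOR_HOST", "spanner.googleapis.com:443")

# With the emulator running locally (its default gRPC port is 9010):
os.environ["SPANNER_EMULATOR_HOST"] = "localhost:9010"
print(spanner_target())  # localhost:9010
```

Unset the variable and the same code talks to the real service, which is what makes emulator-based correctness testing a zero-diff workflow.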
Firestore remains popular, especially among developers looking to build reliable, real-time applications at scale. Since Firestore became generally available just over 18 months ago, more than 2 million databases have been created, powering mobile, web, and IoT applications. And a recent study from SlashData found that Firestore had the most satisfied developers compared to any other cloud database on the market.

Cloud SQL: Enterprise controls for availability, security, and financial governance
Cloud SQL is the fully managed service for MySQL, PostgreSQL, and SQL Server. We’ve continued to improve the enterprise capabilities of Cloud SQL, including adding more maintenance controls and cross-region replication to make it easy to maintain business continuity in case of failure, and point-in-time recovery for PostgreSQL to easily restore from past states. Additionally, committed use discounts give you flexibility and financial controls, without the manual overhead. These are transferable across SQL Server, MySQL, PostgreSQL, and machine sizes, so you can maximize your utilization even as your resourcing needs change, and can result in a 25% discount from on-demand pricing for a one-year commitment and a 52% discount for a three-year commitment.

Not only has Cloud SQL continued to grow in popularity across enterprises, but its native integrations across Google Cloud have resulted in some truly powerful architectures. We’ve seen Google Kubernetes Engine (GKE) users deploying more than 500,000 proxy instances to securely connect to Cloud SQL.
And BigQuery users have been able to query 22 PB of data in Cloud SQL in a month, using federated queries to bridge the gap between analytics and operational data.

Bare Metal Solution: The cloud fast track for specialized workloads
For users who have licensing restrictions or are looking to migrate legacy on-prem databases as-is to the cloud, the Bare Metal Solution is a great option to move workloads quickly and lower overall costs. For those of you running specialized workloads, like Oracle, that require certified hardware and configuration, our Bare Metal Solution is now available in even more regions. Bare Metal Solution lets you move these workloads to within milliseconds of latency of Google Cloud, providing a fast path to modernize your application infrastructure landscape while maintaining your existing investments and architecture.

How customers are using data management tools
Along with hearing from ShareChat in our keynote, check out their session on switching from DynamoDB to Bigtable and Spanner to learn how the company migrated from AWS to Google Cloud for better scale and improved efficiency. And see how The New York Times built a real-time collaborative editor with Firestore so writers and editors can see each other’s edits as they happen and push to publication fast. You can also hear how gaming publishers have scaled quickly with Spanner, including game publisher Colopl. And join Firestore product experts along with the CTO and VP of engineering at Khan Academy to learn about new Firestore capabilities and how Khan Academy was able to meet the increasing demand for online learning.

There’s plenty of unpredictability in the world today—your databases don’t need to add to it. We’re looking forward to seeing you digitally this week to explore, learn, and connect. Want to get your hands on what’s new? Start with a free Google Cloud trial. And check out why Google Cloud was ranked a leader for databases by Gartner.
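As a back-of-the-envelope illustration, the committed use discounts quoted earlier (25% off on-demand for a one-year commitment, 52% off for three years) work out like this; the function and the example monthly price are purely illustrative:

```python
def committed_use_monthly(on_demand_monthly, commit_years):
    """Apply the Cloud SQL committed use discounts quoted in this post:
    25% off on-demand pricing for a 1-year commitment, 52% off for 3 years."""
    discounts = {1: 0.25, 3: 0.52}
    return on_demand_monthly * (1 - discounts[commit_years])

print(committed_use_monthly(1000.0, 1))  # 750.0
print(committed_use_monthly(1000.0, 3))  # 480.0
```

Since the discounts apply across engines and machine sizes, the same arithmetic holds whether the commitment ends up covering MySQL, PostgreSQL, or SQL Server instances.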
Source: Google Cloud Platform

Looking ahead as GKE, the original managed Kubernetes, turns 5

It’s hard to believe that GKE is already celebrating its 5th birthday. Over these last five years, it’s been inspiring to see what businesses have accomplished with Google Cloud and GKE—from powering multi-million-QPS retail services, to helping a game publisher deploy 1,700 times to production in the week of its launch, to accelerating research into the discovery of treatments for both rare and common conditions in cardiology and immunology, to helping map the human brain. These were all made possible by Kubernetes.

Today, as virtual KubeCon kicks off, we want to say first and foremost: thank you to the community for making Kubernetes what it is, and making it the industry standard for managing containerized applications. While GKE transformed how businesses modernize their applications and pushed the bounds of what Kubernetes can achieve, such sustained innovation was only possible thanks to support and feedback from Kubernetes users and close partnership with GKE customers.

As we look ahead, we wanted to share five ways we’re continuing our work to make GKE the best place to run Kubernetes.

1. Leaving no app behind
Every workload deserves the benefits of portability, isolation, and security. The next wave of Kubernetes users wants to containerize their apps and derive those benefits, but often stumbles when confronted with the complexity of getting started with Kubernetes. Embracing Kubernetes shouldn’t have to be done the hard way, and GKE has invested heavily in simplifying the entire journey—from creating your first cluster to deploying your first app into production. GKE customers can now take advantage of Windows on GKE, as well as Google Cloud optimizations and best practices, to smoothly run workloads from traditional stateless apps to complex stateful and batch workloads.
For enterprise applications, you have access to practical guidance for PCI DSS on GKE and infrastructure-level compliance resources, helping you achieve compliance while reaping the benefits of running your applications on GKE. Even workloads previously stuck on proprietary legacy mainframes can now be migrated to GKE using automated code-refactoring tools. You can also supercharge your advanced AI/ML workloads at an optimized cost-performance ratio using GPU and TPU support, the latter of which is only available in Google Cloud. If you’re interested in deploying GPU workloads across cloud, on-premises, and the edge, check out our partner NVIDIA’s GPU Operator for Anthos and GKE on-prem, which we are launching this week.

2. Saving money with optimal price-to-performance by default
Embracing Kubernetes isn’t just about developer velocity; it’s also about cost optimization. GKE helps organizations improve resource utilization through efficient bin packing and autoscaling. While this alone provides operational efficiency, you can achieve considerable additional savings using multi-dimensional autoscaling, which is available only on GKE. GKE clusters of all sizes use cluster autoscaling, which can reduce costs over static clusters and reduce the complexity of ensuring Kubernetes scales to meet the needs of your business. You can save even more with flexible options for horizontal pod autoscaling based on both CPU utilization and custom metrics, as well as vertical pod autoscaling and node auto-provisioning. We’ve published our best practices for running cost-optimized Kubernetes applications on GKE to help you get started.

3. Container-native networking: no more square pegs in round holes
GKE is at the forefront of container-native networking. A new eBPF-based dataplane in GKE provides built-in Kubernetes network policy enforcement to support multi-tenant workloads.
It also increases visibility into network traffic, augmented with Kubernetes metadata, for security-conscious enterprises. VPC-native integration affords IP management features such as flexible pod CIDRs and non-RFC 1918 IP range support, letting you better manage your IP space and scale your clusters as needed. With network endpoint groups (NEGs), you can load-balance HTTP traffic directly to pods for more stable and efficient networking. You can read more about our networking capabilities in this best practices blog post.

4. Bringing BeyondProd to containerized apps
Google protects its own microservices with an initiative called BeyondProd, a new approach to cloud-native security. This protection applies concepts like mutually authenticated service endpoints, transport security, edge termination with global load balancing, denial-of-service protection, end-to-end code provenance, and runtime sandboxing. By implementing the BeyondProd methodology for your containerized applications, GKE allows your developers to spend less time worrying about security while still achieving more secure outcomes.

5. Democratizing access to learning Kubernetes
Kubernetes, conceived and created at Google, is the industry standard for adopting containers and implementing cloud-native applications. Google is the largest engineering contributor1 to the Kubernetes project, contributing to almost every subsystem, SIG, and working group. Google also funds and provides almost all of the infrastructure for Kubernetes development. We are deeply committed to continuing these contributions.

The growth and potential of Kubernetes is accelerating its usage across customers and creating more businesses focused on its distribution, hosting, and services. To wit: there are more than 64,500 job openings related to Kubernetes2. To support this growing demand, we are continuing to provide opportunities to learn Kubernetes through GKE.
You already have access to quickstarts, how-to guides, tutorials, and certifications from Coursera and Pluralsight. To make it even easier, from now until December 31, 2020, we’re providing Kubernetes training at no charge; visit goo.gle/gketurns5 to learn more.

We can’t wait to see what customers will achieve with GKE in the next five years. Until then, we will leave you with this celebratory ‘5 things developers love about GKE’ video.

1. DevStats from CNCF.io
2. LinkedIn job search results for ‘Kubernetes’ keyword worldwide as of August 12, 2020.
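The horizontal pod autoscaling mentioned in point 2 above follows a simple, documented rule: scale the replica count in proportion to the ratio of the observed metric to its target. A sketch of that core formula from the Kubernetes HPA documentation (real production HPAs add tolerances and stabilization windows on top of it):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """Core HPA rule from the Kubernetes docs:
    desired = ceil(current_replicas * current_metric / target_metric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% utilization target: scale out.
print(desired_replicas(4, 90, 60))  # 6
# The same 4 pods idling at 30% average CPU: scale in.
print(desired_replicas(4, 30, 60))  # 2
```

The same formula applies whether the metric is CPU utilization or a custom metric, which is why the two HPA options described above compose so naturally.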
Source: Google Cloud Platform

9 data management sessions to check out at Next OnAir

At Google Cloud, we build databases that are simple to operate, secure, and designed to handle both what you’re running today and what you’re looking to build next. No matter where you are in your journey—whether you’re moving on-prem servers to the cloud, modernizing existing systems, or spinning up new workloads—this week at Google Cloud Next ’20: OnAir is full of sessions, tips, and customer stories to help you meet your needs. Be sure to add these can’t-miss sessions to your playlists, and take a look at the full schedule. All sessions go live starting tomorrow, August 18, at 9 AM PT, and will be available on demand after that.

Keynote featuring ShareChat: What’s new with database management
Join Director of Product Management Penny Avril for this keynote session featuring new releases, use cases, and a conversation with ShareChat’s VP of Engineering Venkatesh Ramaswamy to learn how his team was able to lower operating costs by 30% while improving team efficiency by migrating to Google Cloud.

Data modernization: the McKesson story
Global healthcare company McKesson will show you how they modernized their databases using Google Cloud services, including migrating from Oracle to Cloud SQL. Their director of data analytics and innovation will walk through how they brought together multiple siloed legacy tools and data into a successful modern cloud architecture. Learn how they broke down complex database architectures into roadmap items and navigated complex challenges along the way.

What’s new with Cloud SQL
Cloud SQL makes it easy to run MySQL, PostgreSQL, and SQL Server in the cloud. Over the past year, we’ve added an impressive number of enterprise capabilities to meet your needs for security, reliability, and financial governance. Learn more about new features and enhancements, including point-in-time recovery and cross-region replicas, and how you can get started with them.
Switching from DynamoDB to Cloud Bigtable and Spanner
In this session, hear from ShareChat, one of the largest social media platforms in India, which migrated from AWS to Google Cloud. Learn how ShareChat replaced DynamoDB with Google’s Cloud Spanner database for better scale and improved efficiency.

NYT: Building a real-time collaborative editor with Firestore
The New York Times has reporters and editors all over the world working to write and produce breaking news on tight deadlines. Using Firestore, the technology team at the Times built a collaborative rich-text editor that for the first time allowed editors to see each other’s edits as they happen. You’ll hear how Firestore empowered the team at the Times to quickly build a highly scalable, collaborative article-editor app.

Future-proof your business for global scale and consistency with Cloud Spanner
In this session, learn how the massively scalable Spanner database helps you build apps that require strong consistency and high availability. You’ll also get a look at new and recently launched Spanner capabilities that offer even more enterprise benefits.

Simplify complex application development using Cloud Firestore
Get a look at how the Cloud Firestore database service makes it easy for developers to scale new and existing applications while adding real-time client data synchronization and offline mode capabilities. Learn about new Firestore capabilities and hear from Khan Academy how they responded to rapid growth in site usage during the COVID-19 pandemic.

Modernizing HBase workloads with Cloud Bigtable
See how you can move workloads running on the open source database HBase to Bigtable, our fully managed, low-latency NoSQL database.
Bigtable can ease operational burdens and allows developers to use the service for large analytical and operational workloads with global replication.

Bare Metal Solution: Bring all your enterprise workloads to Google Cloud
Most legacy apps weren’t designed to run in the cloud, but businesses running those apps need to adapt quickly to take advantage of modern technology. In this session, check out how Bare Metal Solution addresses the risk and challenge of migrating legacy workloads by providing all the necessary infrastructure to run Oracle databases and other specialized workloads close to Google Cloud—securely and cost-effectively.

Check out the full lineup of databases and data management sessions. Have a question or project you’re working on? Remember to schedule time with our database experts during the week. We’ll see you in the cloud!
Source: Google Cloud Platform

Deploy GPU workloads across all your clouds with Anthos and NVIDIA

We are excited to announce a joint solution with NVIDIA, now publicly available to all users in beta, that allows customers to run NVIDIA GPU workloads on Anthos across hybrid cloud and on-premises environments.

Running GPU workloads between clouds
Machine learning is one of the fastest-growing application segments in the market today, powering industries such as biotech, retail, manufacturing, and many more. With such unprecedented growth, customers face multiple challenges. The first is the difficult choice of where to run ML and HPC workloads. While the cloud offers elasticity and flexibility for ML workloads, some applications have latency, data size, or even regulatory requirements that mean they need to reside within certain data centers and at edge locations. The other challenge is high demand for on-prem GPU resources; no matter how fast organizations onboard GPU hardware, demand is always greater than supply, so you need to maximize the investment in your GPUs. Organizations are also looking for a hybrid architecture that maximizes both cloud and on-prem resources. In this architecture, bursty, transient model development and training can run in the cloud, while inference and steady-state runtime can run on-prem, or vice versa.

Anthos and ML workloads
Anthos was built to enable customers to easily run applications both in the cloud and on-prem. Built on Kubernetes, Anthos’ advanced cluster management and multi-tenancy capabilities allow you to share your ML infrastructure across teams, increasing utilization and reducing the overhead of managing bespoke environments. Anthos also allows you to run applications anywhere, whether on-prem, on other cloud providers, or even at the edge.
The flexibility of deployment options with Anthos, combined with open-source ML frameworks such as TensorFlow and Kubeflow, lets you build truly cloud-portable ML solutions and applications. In addition to in-house developed applications, you can use Anthos to deploy Google Cloud’s best-in-class ML services, such as Vision AI and Document AI, in your data center and at edge locations, turbocharging the ML initiatives in your organization.

Our collaboration with NVIDIA
For this solution, we’ve built on our strong relationship with NVIDIA, a leader in AI/ML acceleration. The solution uses the NVIDIA GPU Operator to deploy the GPU drivers and software components required to enable GPUs in Kubernetes. It works with many popular NVIDIA data center GPUs, such as the V100 and T4. This broad support allows you to take advantage of your existing and future investments in NVIDIA GPUs with Anthos. For more information about supported NVIDIA platforms, please check the NVIDIA GPU Operator documentation. You can also learn more about other Google Cloud and NVIDIA collaborations.

Getting started
This solution is available in beta and works with Anthos on-prem 1.4 or later. For instructions on getting started with NVIDIA GPUs on Google Cloud’s Anthos, please refer to the documentation.
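Once the GPU Operator has installed the drivers and device plugin on a node, pods consume GPUs through the standard Kubernetes `nvidia.com/gpu` extended resource. A minimal sketch of such a pod manifest, built as a Python dict so it can be serialized to YAML or JSON; the pod name and container image are placeholders, not part of the Anthos solution itself:

```python
import json

def gpu_pod_manifest(name, image, gpu_count=1):
    """Build a minimal pod spec that requests NVIDIA GPUs via the
    standard nvidia.com/gpu extended resource (scheduled only onto
    nodes that advertise that resource)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {"limits": {"nvidia.com/gpu": gpu_count}},
            }],
        },
    }

manifest = gpu_pod_manifest("train-job", "example.com/trainer:latest")
print(json.dumps(manifest, indent=2))
```

Because extended resources may only appear in `limits`, no separate `requests` entry is needed; Kubernetes treats the limit as the request for GPUs.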
Source: Google Cloud Platform

App modernization enabled this retail solution provider to converge physical and digital commerce

As Google Cloud CEO Thomas Kurian said earlier this year, digital transformation in the retail industry is more than just a requirement; it’s a race, and the retailers that transform the fastest will be the most successful in the long term. COVID-19’s impact on the retail industry has only exacerbated this challenge, forcing retailers to rapidly marry digital and physical experiences by offering services such as curbside pickup and contactless checkout.

Long before 2020, Manhattan Associates, a leader in supply chain, inventory, and omnichannel e-commerce technology, recognized that retailers needed a platform that could help them adapt to changing market conditions and bridge the gap from brick-and-mortar to digital commerce. As a result, they launched the Manhattan Active Solutions portfolio for their customers in retail, manufacturing, and distribution. Solutions like Manhattan Active Warehouse Management, the cloud-native version of their flagship warehouse management system, and Manhattan Active Omni, a suite of cloud-based inventory management, point-of-sale, and customer engagement tools, are versionless, so customers get continuous access to the latest features, with automated scaling for peak demand.

But for Manhattan Associates to deliver on their vision of making their solutions portfolio versionless, agile, scalable, and extensible, they first needed to modernize their application environment. They knew lifting and shifting a monolithic architecture to the cloud would not deliver the level of agility that they—and their end customers—would need, so they decided to go with microservices. Consequently, they re-architected Manhattan Active Warehouse Management and Manhattan Active Omni from the ground up to take full advantage of the benefits of the cloud.

Guiding principles behind Manhattan Associates’ cloud-native approach
First, Manhattan Associates established some critical architectural principles to serve as their north star for cloud-native application design.
These guiding principles, outlined below, were created to help ensure Manhattan Associates’ solutions could deliver the engineering velocity demanded by their end customers while also reducing the cost of innovation.

Speed of innovation: Manhattan Associates strove to adopt a microservices-based, cloud-native approach so they could deliver new functionality faster than their competitors, and faster than they themselves had previously done.

Extensibility: In addition to accelerating innovation in house, they also sought to ensure an open and extensible ecosystem so that their customers and partners can easily build and integrate additional capabilities without depending on Manhattan Associates.

Flexibility: With these managed SaaS versions of their flagship products, Manhattan Associates needed the flexibility to scale performance and resources based on their end customers’ needs.

Low cost of innovation: Finally, Manhattan Associates required the ability to leverage open-source technologies for portability and to avoid vendor lock-in. In addition, they required detailed logging and monitoring tools in order to maintain control over costs.

Partnering with Google Cloud to meet business objectives
To deliver on the imperatives outlined above, Manhattan Associates chose Google Cloud as their cloud platform.
Here’s a quick look at how they are leveraging key Google Cloud services to meet the objectives laid out in their guiding principles.

Achieving agility, scalability, and resiliency with Google Kubernetes Engine: Manhattan Associates built on their already extensive Kubernetes experience and took advantage of Google Kubernetes Engine managed features such as regional high availability, autoscaling, node auto-provisioning, and auto-repair functionality to help ensure scalability and resiliency for their containerized workloads.

Gaining ease of use and openness with Cloud SQL for MySQL: Manhattan Associates team members were also long-time users of MySQL. With Cloud SQL for MySQL, they were able to continue using open-source software while running managed MySQL databases. In addition, Manhattan Associates could leverage Cloud SQL to support their environment setup and their customers’ needs. For instance, they can create a Cloud SQL database instance per project, and each project can correspond to an environment, like dev/test and production, or to a specific customer. They also have the flexibility to support optional add-ons for each customer, like additional database instances for high availability.

Visibility and control with detailed monitoring and logging: Finally, Manhattan Associates demanded very detailed monitoring and logging to help ensure customer environments were running effectively across the stack. They leveraged a combination of open-source, custom, and Google Cloud tools like Cloud Logging, Cloud Monitoring, and Cloud Run, as well as in-house tools, to keep tabs on their environments and trigger alerts so that Manhattan Associates teams can take action when needed.

As a result, Manhattan Associates was able to deliver a truly cloud-native warehouse management and unified commerce platform, built using microservices.
They have been delivering on their goal of engineering velocity, releasing features quarterly rather than annually as was the case with their prior monolithic architecture. More importantly, they can pass this speed of innovation on to their end-customers.

To learn more about how Manhattan Associates rebuilt their platform in the cloud, check out the Manhattan Active Warehouse Management and Manhattan Active Omni on Google Cloud virtual breakout sessions at Manhattan Associates’ digital event, Momentum Connect. In these sessions, they’ll dive into more detail on their cloud-native approach, their use cases, and how they combined Google Cloud, open-source, and home-grown tools to bring their Manhattan Active Warehouse Management and Manhattan Active Omni solutions to market.
Source: Google Cloud Platform

Google Cloud VMware Engine explained: Integrated networking and connectivity

Editor’s note: This is the first installment in a new blog series that dives deep into our Google Cloud VMware Engine managed service. Stay tuned for other entries on migration, integration, running stateful database workloads, and enabling remote workers, to name a few.

We recently announced the general availability of Google Cloud VMware Engine, a managed VMware platform service that enables enterprises to lift and shift their VMware-based applications to Google Cloud without changes to application architectures, tools, or processes. With VMware Engine, you can deploy a private cloud—an isolated VMware stack—that consists of three or more nodes, enabling you to run the VMware Cloud Foundation platform natively. This approach lets you retire or extend your data center to the cloud, use the cloud as a disaster recovery target, or migrate and modernize workloads by integrating with cloud-native services such as BigQuery and Cloud AI.

But before you can do that, you need easy-to-provision, high-performance, highly available networking to connect on-premises data centers and the cloud; VMware workloads and cloud-native services; and VMware private clouds in single- or multi-region deployments. Google Cloud VMware Engine networking leverages existing connectivity services for on-premises connections and provides seamless connectivity to other Google Cloud services. Furthermore, the service is built on high-performance, reliable, high-capacity infrastructure, giving you a fast and highly available VMware experience at a low cost.

Let’s take a closer look at some of the networking features you’ll find on VMware Engine. 
High availability and 100G throughput

Google Cloud VMware Engine private clouds are deployed on enterprise-grade infrastructure with redundant, dedicated 100-Gbps networking that provides 99.99% availability, low latency, and high throughput.

Integrated networking and on-prem connectivity

Subnets associated with private clouds are allocated in Google Cloud VPCs and delegated to VMware Engine. As a result, Compute Engine instances in the VPC communicate with VMware workloads using RFC 1918 private addresses, with no need for external IP-based addressing. Private clouds can be accessed from on-prem using existing Cloud VPN or Cloud Interconnect-based connections to Google Cloud VPCs, without additional VPN or Interconnect attachments to VMware Engine private clouds. You can also stretch your on-prem networks to VMware Engine to facilitate workload migration. Furthermore, for internet access, you can choose to use VMware Engine’s internet access service or route internet-bound traffic from on-prem to meet your security or regulatory needs.

Access to Google Cloud services from VMware Engine private clouds

VMware Engine workloads can access other Google Cloud services such as Cloud SQL and Cloud Storage using options such as Private Google Access and Private Service Access. Just like a Compute Engine instance in a VPC, a VMware workload can use private access options to communicate with Google Cloud services while staying within a secure and trusted Google Cloud network boundary. As such, you don’t need to exit out to the public internet to access Google Cloud services from VMware Engine, regardless of whether internet access is enabled or disabled. This provides low-latency, secure communication between VMware Engine and other Google Cloud services.

Multi-region connectivity between VMware private clouds

VMware workloads in private clouds in the same region can talk to one another directly—without needing to “trombone” or “hairpin” across the Google Cloud VPCs. 
In cases where VMware workloads need to communicate with one another across regions, they can do so using VMware Engine’s global routing service. This approach to multi-region connectivity doesn’t require a VPN or any other latency-inducing connectivity options.

Access to full NSX-T functionality

VMware Engine supports full NSX-T functionality for VMware workloads. With this, you can use VMware’s NSX-T policy-based UI or API to create network segments, gateway firewall policies, or distributed/east-west firewall policies. In addition, you can also leverage NSX-T’s load balancer, NAT, and service insertion functionality.

Networking is critical to any enterprise’s cloud transformation journey—even more so for VMware-related use cases. The networking functionality in VMware Engine makes it easy for you to take advantage of the scale, flexibility, and agility that Google Cloud provides without compromising on functionality.

What’s next

In the coming weeks, we’ll share more about VMware Engine and migration, building business resiliency, enabling work from anywhere, and your enterprise database options. To learn more or to get started, visit the VMware Engine website, where you’ll find detailed information on key features, use cases, product documentation, and pricing.

Under the hood: The security analytics that drive IAM recommendations on Google Cloud

IAM Recommender helps security professionals enforce the principle of least privilege by identifying and removing unwanted access to Google Cloud Platform (GCP) resources. In our previous blog, we described some best practices for achieving least privilege with less effort using IAM Recommender, which uses machine learning to help determine what users actually need by analyzing their permission use over a 90-day period. In this post, we’ll peek under the hood to see how IAM Recommender works, with the help of a step-by-step example.

A DIY approach

For a little more background, IAM Recommender generates daily policy recommendations and serves them to users automatically. Google collects the logs, correlates the data, and recommends a modified IAM policy to minimize risk. We then surface these results in various places to ensure visibility: in context on the IAM Permissions page in Cloud Console, through the Recommendations Hub in Cloud Console, and through BigQuery.

Let’s think through what building an analytics system that does all of this from the ground up would require: You first need to build an entitlements warehouse that periodically collects normalized role bindings for all your resources, so you’ll need to pay attention to hierarchies and inherited role bindings. Then, to ensure your recommendations don’t break any existing workloads, you’ll need to collect and build telemetry to determine which permissions have been used recently. You can do this by storing Cloud Audit Logs data access logs for the resources you want to analyze. This, however, is a very high volume of log data that comes at a cost, and the analysis is non-trivial; it requires extensive log processing, parsing, normalization, and aggregation. You will sometimes find gaps in your access log data, which could arise from sporadic individual behaviors such as users taking vacations or changing projects. 
You’ll need to use machine learning to plug these gaps, which is also not trivial because of the high-dimensional, sparse features of the training data. To ensure business continuity, you’ll need to build in monitoring and controls, and add provisions for break-glass access. Once this work is done, you can use the analytics pipeline to analyze utilization against policy data to determine which permissions are safe to remove. You might want to enhance this with machine learning to predict future permission needs so users don’t have to come back for additional access. Lastly, once you’ve determined the right sets of permissions, roles, conditions, and resources, you’ll need to come up with a model that ranks the best IAM policy to meet your users’ needs.

We wanted to give you this actionable intelligence while saving you all of that effort. The end result is Active Assist, which does this analysis for you at Google scale. And even if you were able to do all of this yourself, you could only analyze your own data. We’re able to gain additional insight from cross-customer analysis, further identifying gaps and potential misconfigurations in your policies before they can become a problem. Google Cloud proactively protects the privacy of our users during this analysis with techniques that are described in detail in our blog. Let’s look a little deeper into our implementation.

Safe to apply

When we launched this product, a key consideration was to ensure recommendations were safe to apply—that they wouldn’t break workloads. Making safe recommendations depends on having high-quality input data. IAM Recommender analyzes authorization telemetry data to compute policy utilization and make subsequent recommendations. At Google Cloud, our production systems take care of processing and ensure data quality and freshness directly from the source of the logs. 
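At its core, the utilization analysis described above boils down to diffing granted permissions against permissions actually observed in audit logs. Here is a minimal sketch of that idea; the role-to-permission mapping and usage data are hypothetical, and a real pipeline would pull them from the IAM API and Cloud Audit Logs rather than hard-coding them:

```python
# Minimal sketch of the permission-utilization diff described above.
# GRANTED and USED are hypothetical; a real pipeline would derive them
# from IAM role bindings and 90 days of data access logs.

GRANTED = {
    "roles/storage.admin": {
        "storage.buckets.create",
        "storage.buckets.delete",
        "storage.objects.get",
        "storage.objects.list",
    },
}

# Permissions actually observed for this principal in the audit logs.
USED = {"storage.objects.get", "storage.objects.list"}

def unused_permissions(granted_roles, used):
    """Return granted permissions never seen in the observation window."""
    granted = set().union(*granted_roles.values())
    return granted - used

print(sorted(unused_permissions(GRANTED, USED)))
# The bucket create/delete permissions are candidates for removal.
```

This is, of course, only the naive version; the points above about log gaps, inherited bindings, and predicting future needs are exactly what makes the production problem hard.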
Importantly, IAM Recommender does this for all customers at scale, which is more efficient than each customer doing it on their own. We collect and store petabytes of log data to enable this functionality, at no additional charge. But authorization logs only tell part of the story. In Google Cloud, resources can be organized hierarchically, where a child resource inherits the IAM policy attached to a parent. To make accurate recommendations, we also apply attributed inheritance data in our analytics.

To ensure the quality of our recommendations, we built comprehensive monitoring and alerting systems with detection and validation scripts. We then automated these checks with ML to measure new recommendations against baselines. These checks ensure that the analytics pipeline, from upstream input data to downstream dependencies, produces recommendations that are safe to apply. If we detect deviation from the baselines, preventative measures kick in to halt the pipeline, ensuring we only serve reliable recommendations.

ML security analytics at petabyte scale

To provide recommendations, we developed a multi-stage pipeline using Google Cloud’s Dataflow processing engine. To get a sense of scale, Cloud IAM is a planet-scale authorization engine that processes hundreds of millions of authorization requests every second. IAM Recommender ingests these authorization logs and generates and re-validates hundreds of millions of recommendations daily to serve the best results to our customers. Google Cloud’s scalable infrastructure allows us to provide this service cost-effectively. Our system performs detailed policy utilization analysis that replays authorization logs against the latest policy config snapshot and resource metadata on a daily basis. This data is fed into our ML training models, and the output is piped into policy utilization insights that support recommendations. 
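The baseline checks mentioned above amount to halting the pipeline when a new batch of output deviates too far from historical norms. A toy sketch of that guard, with an invented metric (batch size) and an invented 20% tolerance, purely for illustration:

```python
# Toy baseline-deviation check, as described above. The metric
# (recommendation count) and the 20% tolerance are invented for
# illustration; real pipelines track many more signals than this.

def within_baseline(new_count, baseline_count, tolerance=0.20):
    """Return True if the new batch is within `tolerance` of the baseline."""
    if baseline_count == 0:
        return new_count == 0
    deviation = abs(new_count - baseline_count) / baseline_count
    return deviation <= tolerance

# A sudden 50% drop in generated recommendations would halt the pipeline,
# while normal day-to-day variation would not.
print(within_baseline(500_000, 1_000_000))    # halt
print(within_baseline(1_050_000, 1_000_000))  # serve
```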
We then use privacy-preserving ML techniques to plug gaps in the observation data, which could be due to a recommendation variant, a system outage, or another issue. (Check out this blog to explore these ML techniques in more depth.)

Balancing the tradeoff between risk and complexity

IAM Recommender uses a cost function to determine the set of roles that cover the needed permission set, ranks the roles by their security risk, and picks the least risky one. Determining the minimum set of roles is equivalent to the NP-complete set cover problem. To cut down on overhead, the approach optimizes for recurring patterns across multiple projects in a given organization, reducing permissions while maximizing role membership. In some cases we determine the best role is one that hasn’t been created yet—though our systems do find opportunities for reuse across your organization—and in these cases we recommend creating a custom role.

Learn more

To learn more about IAM Recommender, check out the documentation and our blog on exploring the machine learning models behind Cloud IAM Recommender. To learn more about Active Assist, visit our website. To see how our customers solved for least privilege, check out one of our Google Cloud Next ‘20: OnAir sessions:

Minimizing Permissions Using IAM Recommender
Using Policy Intelligence to Achieve Least Privilege Access
Cloud is Complex. Managing It Shouldn’t Be
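The role-minimization step described above is an instance of set cover, which is typically attacked with a greedy approximation. A minimal sketch of that idea follows; the roles, permission sets, and risk scores are hypothetical, and IAM Recommender’s actual cost function is more sophisticated than this:

```python
# Greedy set-cover sketch for the role-minimization problem described
# above. Roles, permissions, and risk scores here are hypothetical;
# this is not IAM Recommender's actual cost function.

ROLES = {
    # name: (permissions granted, relative risk score)
    "roles/storage.objectViewer": ({"objects.get", "objects.list"}, 1),
    "roles/storage.objectAdmin": ({"objects.get", "objects.list",
                                   "objects.create", "objects.delete"}, 3),
    "roles/storage.admin": ({"objects.get", "objects.list", "objects.create",
                             "objects.delete", "buckets.create"}, 5),
}

def cover_permissions(needed, roles):
    """Greedily pick roles covering `needed`, preferring low-risk roles."""
    needed = set(needed)
    chosen = []
    while needed:
        # Best role: covers the most remaining permissions per unit of risk.
        name, (perms, risk) = max(
            roles.items(),
            key=lambda kv: len(kv[1][0] & needed) / kv[1][1],
        )
        if not perms & needed:
            raise ValueError("no role covers: %s" % sorted(needed))
        chosen.append(name)
        needed -= perms
    return chosen

print(cover_permissions({"objects.get", "objects.list"}, ROLES))
# -> ['roles/storage.objectViewer']
```

Because exact set cover is NP-complete, the greedy heuristic trades optimality for speed; it can occasionally pick two roles where one broader role would suffice, which is one reason ranking candidates by risk matters.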

Looker news and highlights from Google Cloud Next '20: OnAir, Week 5

August has been a busy month for us here at Looker. We started off Data Analytics week at Google Cloud Next ’20: OnAir with a major release of top-requested enhancements—building on our vision of empowering companies to build powerful data experiences and drive positive, data-driven business outcomes. Then, throughout this week, we’ve presented our vision for data leaders, shared some of our newest capabilities, led virtual hands-on-labs, and highlighted inspiring customer stories and demos across the entire Next OnAir experience.

In the smart analytics keynote, we show you how Looker is a critical component for delivering Google Cloud’s vision of an open, intelligent, and flexible platform. See how Sunrun leveraged Looker and BigQuery to accelerate their digital transformation initiatives, or how Looker fits into Verizon Media’s 100+ PB analytics platform. Hear how the Looker team supported data-driven responses to COVID-19, take a technical deep dive, see a demo of Looker with BigQuery BI Engine, or learn about Looker’s (not so) secret sauce: Intro to LookML. Finally, in our Looker Roadmap session, hear about many of the newest advancements and get a peek at what’s next.

We’re particularly proud of the progress we’ve made on the data experiences roadmap we first announced last December. For the last year, we’ve focused on features that help our wide community of system administrators, application builders, model developers, data analysts, and business decision makers by providing the scale and performance, cost optimization, simplified management, and ease of development of data experiences they require. Companies, departments, and individuals all have unique ways of using data to guide their work. This understanding guides Looker’s approach to analytics and the new enhancements we announced this week. Let’s take a further look at a few key aspects of the announcements. 
Increased performance and efficiency with aggregate awareness

To drive greater performance and efficiency at scale, Looker now includes aggregate awareness. With aggregate awareness, Looker can materialize query results and dynamically route user queries to different tables based on level of granularity. This helps minimize the total number of queried records, reducing query cost and improving response time. Aggregate awareness is fully managed in LookML, Looker’s semantic layer, reducing implementation time and cost while maintaining simplicity for users. Unlike other solutions, aggregate awareness doesn’t limit a query to a specific table. Instead, it can automatically UNION in related data when query scope exceeds that of a given aggregate table.

Managing Looker at scale just got easier

Since last year, Looker administrators have been able to use Looker system activity analytics to track their usage of the Looker platform. Dashboards and data exploration experiences are pre-built in Looker to help measure user activity, query performance, scheduling, content (reports and dashboards) use, and any errors that might occur. With elite system activity, now available, administrators can retain system analytics data longer and perform more complex, faster analysis of this data. Coupled with new user and permissions management tools, including the ability to better leverage existing LDAP systems, system activity analytics helps Looker admins drive platform adoption and ensure the smooth operation of vital reports, dashboards, explores, and other, more advanced, data experiences.

Simplifying and speeding access to insights

Looker has also redesigned the dashboard and reporting experience to include more intuitive interactions. It’s now easier than ever to provide users with tools that let them start at high-level dashboards and drill down into the data to find answers to their questions. 
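The routing logic behind aggregate awareness, described at the top of this section, can be pictured as choosing the smallest pre-aggregated table whose grain still covers the query. A conceptual sketch follows; the table names and row counts are hypothetical, and this is a mental model rather than Looker’s implementation:

```python
# Conceptual sketch of aggregate-aware routing: answer a query from the
# cheapest table whose dimensions cover the query's needs. Table names
# and row counts are hypothetical, not Looker internals.

TABLES = {
    # name: (dimensions available, approximate row count)
    "orders_raw": ({"order_id", "user_id", "created_date", "state"}, 10_000_000),
    "daily_by_state": ({"created_date", "state"}, 50_000),
    "daily_totals": ({"created_date"}, 1_000),
}

def route(query_dims, tables):
    """Pick the smallest table whose dimensions are a superset of the query's."""
    candidates = [
        (rows, name)
        for name, (dims, rows) in tables.items()
        if set(query_dims) <= dims
    ]
    if not candidates:
        raise ValueError("no table covers dimensions: %s" % sorted(query_dims))
    return min(candidates)[1]

print(route({"created_date"}, TABLES))           # coarse query hits the 1K-row table
print(route({"created_date", "state"}, TABLES))  # falls back to the 50K-row table
print(route({"user_id"}, TABLES))                # only the raw table can answer
```

The payoff is exactly what the section above describes: most dashboard queries never touch the raw table, cutting both query cost and response time.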
Delivering self-service access to data that’s accessible and intuitive, Looker now offers easy cross-filtering in dashboards. A newly enhanced integration with Slack and improved alerting functionality put data right where users need it, on demand, streamlining the path from question to insight and then to business-impacting action. These new ways of accessing data minimize friction in existing workflows, allowing non-technical employees to take a proactive approach to their area of the business.

Quickly and easily deliver new data experiences

With a new extension framework for data product development, Looker also reduces the friction developers experience when modeling, visualizing, or operationalizing their data. The extension framework allows front-end developers to build and deploy within Looker without relying on DevOps or standalone servers. Extensions also have full access to Looker APIs and can take advantage of existing authentication and permissions, simplifying the development process and speeding time to value for data products. Working backwards from their desired end goal and dream application, our most innovative developers go beyond static reports and realize their vision with Looker developer tools and partners.

Turnkey models for data-driven marketers

We’ve also introduced new Looker Blocks for marketers—built in partnership with experts at Google Cloud and with integrated BigQuery ML models—that make it easier than ever for data-driven marketers to get up and running quickly with out-of-the-box advanced analytics for Google Analytics 360, Google Marketing Platform, and Google Ads data. With these new blocks, marketing teams can dig deeper into web behavior, optimize campaign investment, define granular KPIs, expand insights by joining external datasets, and shorten the time from insight to action. 
Looker Blocks continue to be a powerful tool for accelerating analytics value—providing deep expertise and insights, faster, in a way that’s easy to implement and understand.

From BI to data experiences

With the announcements this week, Looker continues to enhance the tools you’re already using by infusing new, relevant data into your workday. To learn more about all the latest Looker enhancements for your business, click here. You can also register here and speak live with our team about these and other features and updates.