Happening hybrid cloud sessions at Next ‘19 (and how you can register)

Even if your workloads and apps can’t fully migrate to the cloud for your unique business reasons, you can still take full advantage of Google’s innovative technology. Containers and Kubernetes underpin any modern hybrid cloud strategy, and Google’s Cloud Services Platform (CSP) brings the best of these technologies to your datacenter.

At Google Cloud Next ‘19 this year, we’re offering more than 45 sessions on topics ranging from adapting your existing application to a hybrid cloud environment to building, running, and managing microservices both on-prem and in the cloud. If you’re joining us at the event, don’t forget to mark these specific sessions to hear from the folks who helped originate CSP:

1. Bringing the Cloud to You: Join us in this spotlight session after our day-one keynote to learn about one of our big announcements at Next this year. This spotlight session will show you how services built on Kubernetes and Istio will bring the efficiency, speed, and scale of cloud to you. We’ll show you how these tools and technologies can help you build reliable, secure, and high-performing cloud services for today and the future.

2. Fireside Chat with Eric Brewer: Hear the latest from the person who introduced Kubernetes to the world almost five years ago. Eric is Vice President of Infrastructure at Google Cloud and has been working on Google’s cluster and compute infrastructure since 2011. This session is presented by the Kubernetes Podcast from Google, a weekly news and interview show with insight from the Kubernetes community.

While you’re at it, here’s a list of must-see hybrid cloud and container sessions, as well as sessions on how to modernize your application development. Be sure to register by clicking the links below to reserve your spot—seats are filling up fast!

Managing Applications the Kubernetes Way
Ever wonder what it’s like to write applications that manage other applications?
In this session, we will show you how to build custom Kubernetes controllers for managing your applications. We’ll also share best practices based on Google’s experience of managing large-scale workloads.

Using GKE On-Prem to Manage Kubernetes in Your Datacenter
We’ll explore how customers can best leverage GKE On-Prem and CSP to manage a true hybrid environment. We’ll walk through common use cases for GKE On-Prem and then dive deep into the tech stack. We’ll also walk through how to install GKE On-Prem into your vSphere environment and manage the cluster from Google Cloud.

Next Generation CI/CD with GKE and Tekton
Deciding on a CI/CD system for Kubernetes can be a frustrating experience—there are a gazillion to choose from, and traditional systems were built before Kubernetes existed. We’ve teamed up with industry leaders to build a standard set of components, APIs, and best practices for cloud-native CI/CD systems. Through examples and demos, we will show off new, Kubernetes-native resources that can be used to get your code from source to production with a modern development workflow that works in multi-cloud and hybrid cloud environments.

Onramp to Istio: An Adoption Story
This session will take you through the journey of a customer in EMEA who has implemented Istio in production. We will start with the problems that led them to a service mesh in the first place. We will discuss the decisions they made as they planned and started their implementation. Finally, we’ll talk about how Istio has changed their day-to-day life and what the benefits have been.

Bringing Your Kubernetes Clusters to GCP
Congratulations, you’ve successfully rolled out Kubernetes across your organization and you’re now running clusters in places you never thought were possible. How do you begin to corral those clusters to see what’s going on?
In this talk, we’ll show you how to get visibility into your workloads and take advantage of Google Cloud.

Lastly, we know hybrid cloud is top of mind, so we’re bringing you more ways to engage with us face-to-face to answer any burning questions. Check out these on-site attractions in Moscone South.

DevZone: Stop by the DevZone and meet the developers behind the cloud products you use every day. It’s open to all attendees and located between the keynote floor and the showcase.

Google Cloud Next Showcase: Join us at the Showcase to see what we’ve been up to over the past year.

The Cloud Lab: Discuss strategy or ask questions with Google Cloud experts one-on-one. When you arrive at Next, be sure to reserve your meeting time by going to The Cloud Lab, as space is limited. You can do so in the DevZone.

To learn more about these sessions, or to register, visit the Next ‘19 website. See you in San Francisco!
Source: Google Cloud Platform

8 statistically sound smart analytics sessions to attend at Next ‘19

From tracking shared bike usage to analyzing a large blockchain, the ability to effectively analyze large datasets has proved essential to solving numerous interesting, large-scale problems. This year at Next ‘19, we’re offering more than 50 breakout sessions and a spotlight session on topics ranging from data warehousing to business intelligence (BI). Leading enterprises and disruptive start-ups will share how they’re using Google Cloud data analytics solutions to achieve more with big data, so that you can too.

If you’re short on time, or you’re quickly pulling your conference schedule together, here are eight breakout sessions we recommend:

1. Data Processing in Google Cloud: Hadoop, Spark, and Dataflow – Register here
Google Cloud is committed to making your infrastructure as easy to use as possible, and “easy” means different things to different people and organizations. In data processing, sometimes simplicity means migrating your existing Hadoop and Spark environment to Cloud Dataproc, which gives you a familiar look and feel but manages the underlying infrastructure for you. For others, easy will mean putting Cloud Dataflow’s serverless, unified batch and stream data processing into production. In this session, we’ll explore the ins and outs of making this decision, with real-world experience from Qubit, a company that uses both Dataproc and Dataflow in production.

2. Rethinking Business: Data Analytics With Google Cloud – Register here
Get a deep dive into our newest launches and announcements in our data analytics portfolio in this spotlight session. We’ll present live demos and share the latest in enterprise data warehousing, streaming analytics, managed Hadoop and Spark, modern BI, planet-scale data lakes, and more. Booking.com and ANZ Bank will be on hand to share how our customers are leveraging GCP’s serverless data platforms to redefine data’s relationship with business.

3.
Data Solutions for Change: Nonprofit Stories – Register here
Gartner defines “data for good” as a movement in which people and organizations transcend organizational boundaries to use data to improve society. From public datasets to citizen-generated data, cross-sectoral organizations are exploring collaborative methods to fill in data gaps, advance an inclusive data charter, and track progress on the UN’s Sustainable Development Goals (SDGs). Attend this session to learn how Google Cloud is empowering organizations and people around the world to address some of the greatest global challenges.

4. How Pandora is Migrating Its On-Premises BI & Analytics to GCP – Register here
Data is one of Pandora Media’s core differentiators. Since the launch of their music streaming service in 2005, Pandora listeners have created 13 billion stations and have liked their favorite songs with a thumbs-up more than 90 billion times. Computation against large datasets is essential to how Pandora personalizes music for their users. This session explains how Pandora Media is building a more scalable, cost-effective, and performant analytics platform by using GCP services like BigQuery, Cloud Dataproc, Cloud Composer, Cloud Storage, and others to replace their 2,700+ node, 7 PB+ on-premises Hadoop environment and gain access to new analytical engines, such as Dataflow, and hardware such as GPUs and TPUs.

5. The Migration Chronicles: CBSi Moves from Teradata and Hadoop to BigQuery – Register here
During this session, members of CBS Interactive’s data team will cover the journey of migrating a data warehouse from on-premises Teradata and Hadoop clusters to BigQuery using traditional ETL (extract, transform, load) and Google Cloud Platform tools. The CBSi team will cover some of the challenges they faced and how they overcame them. The BigQuery product team will also present some of the strengths of BigQuery as a data warehouse.

6.
A Technical Dive on Data Warehousing with BigQuery: Cruise Automation’s Stories – Register here
Jordan Tigani, Director of BigQuery product management, describes how organizations extract meaning from their data with capabilities that surpass traditional data warehousing. Enterprises have to contend with an accelerating onslaught of data, while employees want to make real-time decisions based on the very latest information. At the same time, they want to drill down into the most granular details. Cruise Automation will explain how they unlock answers to problems that they couldn’t solve with a traditional data warehouse. Jordan will also demonstrate some of the latest features of BigQuery that will make you rethink what a modern data warehouse can do for you.

7. Building and Securing a Data Lake on Google Cloud Platform – Register here
Having a data lake in the cloud gives you more than inexpensive storage. Join product manager Chris Crosbie and Cloudera’s Larry McCay for a session that presents how you can successfully ingest, store, secure, process, and analyze your data on Google Cloud Platform. Learn how Google Cloud already supports many of the open source technologies you may be using, and how you can achieve end-to-end authentication with your cloud data lake. Lastly, the presenters will discuss how Cloudera has partnered with Google Cloud to help customers extend their data lakes to the cloud with open source security tools like Apache Knox.

8. Streaming Analytics: Generating Real Value From Real Time – Register here
Data is often most valuable in the moments just after it’s created, so many types of analytics are best performed right after ingest. Real-time analytic solutions unlock time-sensitive value by driving data into real-time decision making and business processes.
Join this session to learn how Cloud Pub/Sub, Cloud Dataflow, and other integrated GCP products can simplify your streaming pipeline and allow your teams to focus on insights—instead of operations.

Learn more about the event, or peruse hundreds of additional sessions, on the Next ‘19 website. See you in San Francisco!
Source: Google Cloud Platform

Top 10 tremendous app dev sessions to attend at Next ‘19

Whether you develop frontends or backends; on App Engine, Firebase, or Kubernetes; for mobile, the web, or an embedded device, there’s no lack of options for application development on Google Cloud Platform (GCP)—and we’ll be talking about all of them at Google Cloud Next ‘19. To help get you situated, we crowdsourced a list from our Google Cloud peers of the top app dev sessions you can’t afford to miss. Here are our picks.

1. Google Cloud Platform 101
Maybe you need an introduction, maybe you just need a refresher. Whatever your reason, check out this session to figure out exactly what all of GCP’s tools are, and get guidance on how best to solve your application development problems.

2. Super-Charge Your GKE Developer Workflow in Visual Studio Code and IntelliJ
Kubernetes is portable, extensible, and powerful, but getting started, configuration management, deployments, and debugging can be painful. In this session, we’ll explore Visual Studio Code and IntelliJ IDE extensions that simplify these Kubernetes workflows.

3. Large-Scale Multiplayer Gaming on Kubernetes
Building the next fast-paced, online, multiplayer game? You’re going to want to use Kubernetes for that, which, when combined with open-source projects like Open Match and Agones, can make the hard work of building a matchmaking platform and coordinating game server orchestration that much easier. Game on!

4. Build Mobile Apps with Flutter and Google Maps
Just like the real-estate business, mobile app development is all about location, location, location. Come to this session and learn how to build a location-aware mobile app using a powerful combination of Flutter, Firestore, and Google Maps Platform.

5. Dead Easy Kubernetes Workflows With VS Code
If you’ve ever worked with a Kubernetes application, you know that there are multiple configuration files to edit, a lot of moving parts to deploy, and that debugging is a pain.
Luckily, there is a new Visual Studio Code extension to simplify the Kubernetes development workflow—come check it out.

6. The “Why” and the “How” of Testing Games in the Cloud
True statement: testing games is a difficult, imperfect, manual process. In Google’s Firebase Test Lab, we’ve been working on ways to perform more sophisticated automated games testing, so you can find problems in your game—before your users do.

7. What’s New in Firebase for Development Teams
Firebase is a popular application development platform on GCP, and recent changes to the Firebase backend-as-a-service tooling make it an even better fit for building large-scale applications. Come to this session to learn more about testing, continuous integration, global roll-out, and more.

8. Building Secure Mobile Apps With Firebase
Speaking of Firebase, it’s important to understand the potential attack vectors that arise from Firebase’s direct-from-mobile access, and how to thwart them. We’ll introduce you to the tools Firebase and Google Cloud provide to help you build secure apps, and also tell you about all the things we automatically do for you.

9. Serverless Payment Processing with Firebase
Collecting payments in your mobile application is within easy reach, thanks to Firebase and GCP. In this talk, we’ll walk you through how to manage payments in your app using the Stripe API and Cloud Functions for Firebase, and discuss common solutions for payment management, including how to handle refunds and ensure security.

10. How Retailers Prepare for Black Friday on Google Cloud Platform
Major industry events, like Black Friday, can test every facet of a system. In this session, we’ll discuss how cloud-based retailers successfully navigate the holiday peak season—including monitoring tactics, infrastructure designs, and application architectures—to help you prepare for your next peak event!

To learn more about these sessions and others, and to register, visit the Next ‘19 website. Until then, happy coding!
Source: Google Cloud Platform

Introducing Compute- and Memory-Optimized VMs for Google Compute Engine

Whether you’re running compute-bound applications for HPC or large, in-memory database applications like SAP HANA, you need the right mix of compute resources for the job, while also keeping an eye on price-performance. The vast majority of enterprise workloads run successfully on Google Cloud Platform using our general-purpose VMs. However, as you port more workloads to the cloud, you may need VMs that are optimized for specific types of workloads.

Today we are pleased to announce the expansion of our Compute Engine virtual machine (VM) offerings to include new Compute-Optimized VMs and Memory-Optimized VMs. Both are based on 2nd Generation Intel Xeon Scalable Processors, which we delivered to customers last October—the first cloud provider to do so. In addition, these processors will also be coming to our general-purpose VMs. This means you’ll have access to a complete portfolio of machine types to successfully run your workloads across a wide range of memory and compute requirements.

Compute-Optimized VMs
Compute-Optimized VMs (C2) are a new compute family on GCP, exposing the high per-thread performance and memory speeds that benefit the most compute-intensive workloads. Compute-Optimized VMs are great for HPC, electronic design automation (EDA), gaming, single-threaded applications, and more. The new Compute-Optimized VMs offer a greater than 40% performance improvement compared to current GCP VMs. They also leverage 2nd Generation Intel Xeon Scalable Processors and can run at a sustained clock speed of 3.8 GHz. Additionally, C2 VMs provide full transparency into the architecture of the underlying server platforms, enabling advanced performance tuning. You can choose Compute-Optimized VMs with up to 60 vCPUs, 240 GB of memory, and up to 3 TB of local storage. Compute-Optimized VMs are currently available in alpha.

Memory-Optimized VMs
Memory-Optimized VMs (M2) offer the highest memory configuration for a Compute Engine VM.
They are well suited for memory-intensive workloads such as large in-memory databases, e.g., SAP HANA, as well as in-memory data analytics workloads. Last July, we announced memory-optimized VMs with up to 4 TB of memory. Today’s additions to the M2 family offer up to 12 TB of memory and 416 vCPUs, enabling you to run scale-up workloads on GCP. These VMs are also based on 2nd Generation Intel Xeon Scalable Processors. The newest Memory-Optimized VMs will be available in the following sizes:

M2 machine types will be available to early access customers this quarter.

Pricing
The new Compute-Optimized VMs will start at $0.209/hr for a c2-standard-4, and go up to $3.13/hr for a c2-standard-60 instance. C2 VMs are also available as Preemptible VMs starting at $0.0505/hr. Pricing for the newest M2 VMs will be announced at a later date.

If you’re ready to get started, you can sign up for early access. Once your account is approved for access, you can log in to the GCP Console, use the Google Cloud SDK, or use Google Cloud APIs to launch the new VMs. Stay tuned for updates on beta and general availability.
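For rough budgeting, the hourly rates quoted above translate into monthly figures along these lines. This is only a back-of-the-envelope sketch: it assumes a 730-hour month and ignores sustained-use discounts, regional price differences, and disk or network costs.

```python
# Back-of-the-envelope monthly cost estimate for the C2 hourly rates
# quoted in this post. Real bills also depend on region, sustained-use
# discounts, and attached resources (disks, networking, etc.).

HOURS_PER_MONTH = 730  # ~365 days * 24 hours / 12 months

C2_HOURLY = {
    "c2-standard-4 (on-demand)": 0.209,
    "c2-standard-60 (on-demand)": 3.13,
    "c2-standard-4 (preemptible)": 0.0505,
}

def monthly_cost(hourly_rate: float, hours: float = HOURS_PER_MONTH) -> float:
    """Hourly rate times hours in a month, rounded to cents."""
    return round(hourly_rate * hours, 2)

for name, rate in C2_HOURLY.items():
    print(f"{name}: ${monthly_cost(rate):,.2f}/month")
```

As a sanity check before committing to a machine type, comparing the preemptible and on-demand figures this way makes the trade-off concrete for batch workloads that can tolerate interruption.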
Source: Google Cloud Platform

Cloud Memorystore: Now with Redis version 4.0 support and manual failover API

After announcing the general availability of Cloud Memorystore for Redis last year, we have seen tremendous growth across various industries, especially in gaming and retail. Cloud Memorystore for Redis lets Google Cloud Platform (GCP) customers use a fully managed, in-memory data store service. Cloud Memorystore automates all administrative tasks to manage your Redis instances, including provisioning, scaling, and monitoring, so you can focus on building apps with low latency and high availability.

We are excited to announce Cloud Memorystore support for Redis version 4.0 (in beta) and a new manual failover API here at RedisConf 2019. You can see here how to access the new version:

What’s new with Redis 4.0
Key features added in Redis version 4.0 include:

Caching improvement. Redis introduced a least frequently used (LFU) algorithm, which can provide a more accurate estimation of caching usage than least recently used (LRU) caching.

Active memory defragmentation. Redis can now defragment memory while online. This helps with actively reclaiming unused memory, which prevents unnecessary crashes.

We’ve also added a manual failover API to Cloud Memorystore so you can test its failover behavior. Before deploying applications in production using Cloud Memorystore, it’s important to test the behavior of the client and the application when a failover happens. With the new API, it’s easy to trigger a failover and observe the application behavior so you can plan accordingly for backup and restore purposes.

We exposed Redis metrics to Stackdriver in the previous release, so that you can easily debug Redis issues in your application. To make it easier to debug client-side issues, we’ve partnered with OpenCensus to automatically collect traces and metrics from your app. The traces and metrics are available in a variety of back-end monitoring tools, including Stackdriver, so you can get an even more detailed picture of Redis performance.
You can learn more about Cloud Memorystore and OpenCensus in this video:

Learn more about Cloud Memorystore for Redis here, and see various deployment scenarios for running Cloud Memorystore on GCP here.
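To build intuition for why the new LFU policy can outperform LRU on skewed access patterns, here is a small, self-contained Python simulation. It is only an illustration, not Redis's actual implementation: Redis 4.0's allkeys-lfu policy uses probabilistic, approximated counters with decay, whereas this toy tracks exact lifetime counts.

```python
from collections import Counter, OrderedDict

# Toy comparison of LRU vs. LFU eviction on a skewed access pattern.
# Redis 4.0's real LFU is a probabilistic approximation with counter
# decay; this sketch only illustrates the policy difference.

def simulate(policy: str, accesses, capacity: int) -> int:
    """Return the number of cache hits under the given eviction policy."""
    cache = OrderedDict()  # key -> None, ordered from least to most recent
    freq = Counter()       # exact lifetime access counts (used by LFU)
    hits = 0
    for key in accesses:
        freq[key] += 1
        if key in cache:
            hits += 1
            cache.move_to_end(key)         # mark as most recently used
            continue
        if len(cache) >= capacity:
            if policy == "lru":
                cache.popitem(last=False)  # evict least recently used
            else:                          # "lfu"
                victim = min(cache, key=lambda k: freq[k])
                del cache[victim]          # evict least frequently used
        cache[key] = None
    return hits

# One "hot" key interleaved with a stream of one-off keys: LFU learns
# to keep the hot key resident, while LRU keeps churning it out.
pattern = []
for i in range(100):
    pattern += ["hot", f"cold-{i}", f"cold-{i}-b"]

print("LRU hits:", simulate("lru", pattern, capacity=2))
print("LFU hits:", simulate("lfu", pattern, capacity=2))
```

On this workload, the LRU cache evicts the hot key before every re-access, while LFU stabilizes after a few misses and serves the hot key from cache for the rest of the run, which is the behavior the caching improvement above is aimed at.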
Source: Google Cloud Platform

Scale storage out with new Elastifile Cloud File Service for GCP

File storage works great for running traditional applications and achieving the high throughput and performance of compute-heavy workloads, like rendering movies or electronic design automation. It’s been a foundational element of enterprise infrastructure for a very long time. As those applications and workloads move to the cloud, you need the same access to highly available shared storage, but with a managed service experience and scale.

That’s why we are pleased to announce the availability of Elastifile Cloud File Service for Google Cloud Platform (GCP), a fully managed file service integrated with GCP. We’ve been working closely with Elastifile on this effort to bring you scale-out file services that complement our Cloud Filestore offering and help you meet high-performance storage needs. The deep engineering partnership between Elastifile and Google Cloud has resulted in a service that provides:

A native GCP user experience with monitoring, billing, and support integration
Petabyte capacity on demand
Cost-effective, pay-as-you-go pricing tiers
The ability to tier data between file and object storage and optimize your price vs. performance
Multi-zone availability, automated snapshots, and asynchronous replication
Simpler storage management for a broad range of enterprise use cases

A fully managed file service helps eliminate the stress and complexity associated with forecasting, managing, and provisioning enterprise storage. Simplified storage management is particularly helpful with high-performance computing (HPC) jobs like computational fluid dynamics (CFD), seismic analysis, data analytics, machine learning model training, rendering, or risk modeling.
These are the types of jobs that Appsbroker focuses on at the Extreme Cloud Center of Excellence (ECCoE)—a three-way investment between Appsbroker, Intel, and Google Cloud that’s designed to help users of HPC systems benchmark their current workloads.

For Geoff Newell, Technical Director at Appsbroker Limited (UK), the potential time savings he saw from Elastifile Cloud File Service at the ECCoE is promising. “When using the service, it quickly became clear that Elastifile had delivered a robust, enterprise-grade storage solution, with the performance characteristics our HPC customers require, while still providing an efficient, cloud-native user experience,” Newell said.

In addition to industry-specific workflows, Elastifile’s new service is also well suited for horizontal use cases such as persistent storage for Kubernetes, data resilience for preemptible cloud VMs, and scalable network-attached storage (NAS) for cloud-native services. This support helps bridge the gap between traditional and cloud-native workflows, making cloud integration easier than ever before. One common use case is running SAP on Google Cloud. The diagram below illustrates a customer architecture that uses Elastifile to support SAP workflows involving NetWeaver and HANA.

As shown above, Elastifile’s cloud-native file storage serves as the storage backbone for this workflow. When delivered via the new fully managed service, Elastifile also simplifies IT administration by eliminating the complexity associated with manual storage management.

Whether facilitating full-scale lift-and-shift migrations to the cloud (e.g., for SAP), providing high-performance storage to support application bursting to the cloud, or delivering scalable, persistent storage for Kubernetes, Elastifile’s new managed service provides storage simplicity at scale.

Elastifile Cloud File Service is available now on Google Cloud Marketplace.
Source: Google Cloud Platform

Increasing trust in your cloud: security sessions at Next ‘19

Google Cloud Next ‘19 promises to be chock-full of learning opportunities across all areas of cloud technology. If security will be your focus for Next ‘19, we’ve got you covered. This year’s security spotlight session is your best bet to get a big-picture look at how security works across Google Cloud, and here are a few other can’t-miss talks from the more than 30 security sessions to check out while you’re at Next ‘19.

1. Google Cloud: Data Protection and Regulatory Compliance
These two topics are the goal of many security practices. This session will cover the latest trends in data protection and regulatory compliance, and tools to address these needs. You’ll learn how Google Cloud handles these particular security challenges. This related session will offer specifics on risk management and compliance for healthcare companies in particular.

2. Comprehensive Protection of PII in GCP
No matter your industry, you need to treat personally identifiable information (PII) with care. In this session, Scotiabank’s platform VP will share the company’s cloud-native approach to protecting PII in Google Cloud Platform (GCP). The session will cover their considerations around access and bank application reidentification.

3. Shared Responsibility: What This Means for You as a CISO
The shared responsibility model is foundational to understanding cloud security, and you’ll learn more in this session about how security responsibility gets divided between customers and providers.

4. Who Protects What? Shared Security in GKE
If you’re running Google Kubernetes Engine (GKE) or thinking about it, this session can help you further understand the shared responsibility model in a containerized world, and specifically what that means for GKE security.
The session will cover how Google secures GKE clusters and offer tips on how you can further harden your GKE workloads.

5. Enhance Your Security Posture with Cloud Security Command Center
Take a look under the hood of Cloud Security Command Center to see how it delivers centralized visibility into your GCP assets. It helps you prevent, detect, and respond to threats, and works with partner solutions.

6. Detecting Threats in Logs at Cloud Scale
Learn how you can take advantage of Google’s threat intelligence capabilities. In this session, you’ll hear about recent threats and see how GCP services can help detect them.

7. Identifying and Protecting Sensitive Data in the Cloud: The Latest Innovations in Cloud DLP
We know that data privacy in today’s world is absolutely essential, and a key part of any company’s security practices. In this session, you’ll hear about the latest capabilities in Google Cloud’s Data Loss Prevention product and get some tips on protecting your sensitive data.

8. How Airbnb Secured Access to Their Cloud With Context-Aware Access
This session will show how Google’s BeyondCorp security model works in practice, and how Airbnb uses it to protect its apps. BeyondCorp uses identity and context instead of the corporate network as a perimeter, resulting in stronger security, broader access, and better user experiences.

9. Minimizing Insider Risk From Cloud Provider Administrators
Insider risk is a common concern when moving to a cloud provider. You have to be sure you can trust what your provider is doing with your data, and that there are proper administrative controls in place. In this session, you’ll see how Access Transparency and Access Approval work in GCP, and how to set these controls up to get more visibility and control in the cloud.

10. Applying Machine Learning and Analytics to Security in GCP
See how machine learning and data analytics help underpin Google Cloud’s security efforts.
These modern tools help navigate the security complexities of cloud environments by pulling information together to make good decisions. The result is cloud security that saves time and reduces risk.

11. Preventing Data Exfiltration on GCP
A robust cloud security strategy includes minimizing opportunities for data exfiltration. Check out this session to get the details on connecting securely to GCP services, and how to configure a deployment that isolates and protects your resources. You’ll also see how networking and security products work together for strong security.

12. Twitter’s GCP Architecture for Its Petabyte-Scale Data Storage in Cloud Storage and User Identity Management
Security needs to scale as a cloud deployment grows. This session with Twitter shows a real-life example of organizing, managing, and securing petabytes of data with a hybrid cloud model. You’ll get a look at user-management tooling that manages account access, and hear some of Twitter’s security lessons learned.

13. A Use-Case Based, Demo-Led Approach to G Suite Security
Take a look at some security use cases that can be solved in G Suite. You’ll see new products and view demos based on real issues, such as what to do when detecting phishing, finding that a device is compromised, or seeing who’s sharing outside of your organization.

14. Protections From Bleeding-Edge Phishing and Malware Attacks
Phishing, malware, scams, and impersonation attacks are getting more sophisticated. Here’s a look at how G Suite can protect you and your users from these attacks, and ways to configure G Suite for real-time link scanning and anomalous file type blocking.

For more on what to expect at Google Cloud Next ‘19, take a look at the session list here, and register here if you haven’t already. We’ll see you there!
Source: Google Cloud Platform

The service mesh era: Istio’s role in the future of hybrid cloud

Welcome back to our blog post series on service mesh and Istio. In our previous posts, we talked about what the Istio service mesh is, and why it matters. Then, we dove into demos on how to bring Istio into production, from safe application rollouts and security to SRE monitoring best practices. Today, leading up to Google Cloud NEXT ‘19, we’re talking all about using Istio across environments, and how Istio can help you unlock the power of hybrid cloud.

Why hybrid?
Hybrid cloud can take on many forms. Typically, hybrid cloud refers to operating across public cloud and private (on-premises) cloud, and multi-cloud means operating across multiple public cloud platforms.

Adopting a hybrid- or multi-cloud architecture can provide a ton of benefits for your organization. For instance, using multiple cloud providers helps you avoid vendor lock-in, and allows you to choose the best cloud services for your goals. Using both cloud and on-premises environments allows you to simultaneously enjoy the benefits of the cloud (flexibility, scalability, reduced costs) and on-prem (security, lower latency, hardware re-use). And if you’re looking to move to the cloud for the first time, adopting a hybrid setup lets you do so at your own pace, in the way that works best for your business.

Based on our experience at Google, and what we hear from our customers, we believe that adopting a hybrid service mesh is key to simplifying application management, security, and reliability across cloud and on-prem environments—no matter whether your applications run in containers or in virtual machines. Let’s talk about how to use Istio to bring that hybrid service mesh into reality.

Hybrid Istio: a mesh across environments
One key feature of Istio is that it provides a services abstraction for your workloads (Pods, Jobs, VM-based applications).
When you move to a hybrid topology, this services abstraction becomes even more crucial, because now you have not just one but many environments to worry about.

When you adopt Istio, you get all the management benefits for your microservices on one Kubernetes cluster—visibility, granular traffic policies, unified telemetry, and security. But when you adopt Istio across multiple environments, you are effectively giving your applications new superpowers. Because Istio is not just a services abstraction on Kubernetes. Istio is also a way to standardize networking across your environments. It’s a way to centralize API management and decouple JWT validation from your code. It’s a fast track to a secure, zero-trust network across cloud providers.

So how does all this magic happen? Hybrid Istio refers to a set of sidecar Istio proxies (Envoys) that sit next to all your services across your environments—every VM, every container—and know how to talk to each other across boundaries. These Envoy sidecars might be managed by one central Istio control plane, or by multiple control planes running in each environment. Let’s dive into some examples.

Multicluster Istio, one control plane
One way to enable hybrid Istio is by configuring a remote Kubernetes cluster that “calls home” to a centrally running Istio control plane. This setup is useful if you have multiple GKE clusters in the same GCP project, but Kubernetes pods in both clusters need to talk to each other. Use cases for this include: production and test clusters through which you canary new features, standby clusters ready to handle failover, or redundant clusters across zones or regions.

This demo spins up two GKE clusters in the same project, but across two different zones (us-central and us-east). We install the Istio control plane on one cluster, and Istio’s remote components (including the sidecar proxy injector) on the other cluster.
From there, we can deploy a sample application spanning both Kubernetes clusters.

The exciting thing about this single-control-plane approach is that we didn’t have to change anything about how our microservices talk to each other. For instance, the Frontend can still call CartService with a local Kubernetes DNS name (cartservice:port). This DNS resolution works because GKE pods in the same GCP project belong to the same virtual network, allowing direct pod-to-pod communication across clusters.

Multicluster Istio, two control planes

Now that we have seen a basic multicluster Istio example, let’s take it a step further with another demo.

Say you’re running applications on-prem and in the cloud, or across cloud platforms. For Istio to span these different environments, pods inside both clusters must be able to cross network boundaries.

This demo uses two Istio control planes—one per cluster—to form a single, two-headed logical service mesh. Rather than having the sidecar proxies talk directly to each other, traffic moves across clusters through Istio’s Ingress Gateways. An Istio Gateway is just another Envoy proxy, but one specifically dedicated to traffic entering and leaving a single-cluster Istio mesh.

For this setup to work across a network partition, each Istio control plane has a special DNS configuration. In this dual-control-plane topology, Istio installs a secondary DNS server (CoreDNS) which resolves domain names for services outside of the local cluster. For those outside services, traffic moves between the Istio Ingress Gateways, then onwards to the relevant service.

In the demo for this topology, we show how this installation works, then how to configure the microservices running across both clusters to talk to each other. We do this through the Istio ServiceEntry resource. For instance, we deploy a ServiceEntry for the Frontend (cluster 2) into cluster 1.
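As a rough sketch of what such a ServiceEntry can look like (the service name, addresses, and gateway IP below are illustrative placeholders, not values from the demo), you can build the manifest programmatically and apply it as JSON, which kubectl accepts alongside YAML:

```python
import json

# Illustrative ServiceEntry registering cluster 2's Frontend in cluster 1.
# The ".global" suffix is the convention that Istio's CoreDNS plugin
# resolves for remote services in this topology, and 15443 is the gateway
# port Istio dedicates to cross-cluster traffic.
service_entry = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "ServiceEntry",
    "metadata": {"name": "frontend-cluster2"},
    "spec": {
        "hosts": ["frontend.default.global"],
        "location": "MESH_INTERNAL",
        "ports": [{"name": "http", "number": 80, "protocol": "HTTP"}],
        "resolution": "DNS",
        # A routable-but-unused address that the local sidecar intercepts.
        "addresses": ["127.255.0.2"],
        # Traffic for this host is sent to cluster 2's Istio Ingress Gateway.
        "endpoints": [
            {"address": "CLUSTER2_GATEWAY_IP", "ports": {"http": 15443}}
        ],
    },
}

# kubectl accepts JSON as well as YAML, so this can be piped straight in:
#   python service_entry.py | kubectl apply -f -
print(json.dumps(service_entry, indent=2))
```

The key idea is that the entry maps a mesh-internal hostname onto the remote cluster's gateway, so callers in cluster 1 never need to know the topology.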
In this way, cluster 1 knows about the services running in cluster 2.

Unlike the first demo, this dual-control-plane Istio setup does not require a flat network between clusters. This means you can have overlapping GKE pod CIDRs between your clusters. All this setup requires is that the Istio Gateways are exposed to the Internet. In this way, the services inside each cluster can stay safe in their own respective environments.

Adding a virtual machine to the Istio mesh

Many organizations use virtual machines (VMs) to run their applications, instead of (or in addition to) containers. If you’re using VMs, you can still enjoy the benefits of an Istio mesh. This demo shows you how to integrate a Google Compute Engine instance with Istio running on GKE. We deploy the same application as before, but this time one service (ProductCatalog) runs on an external VM, outside of the Kubernetes cluster.

This GCE VM runs a minimal set of Istio components so that it can communicate with the central Istio control plane. We then deploy an Istio ServiceEntry object to the GKE cluster, which logically adds the external ProductCatalog service to the mesh.

This Istio configuration model is useful because now all the other microservices can reference ProductCatalog as if it were running inside the Kubernetes cluster. From here, you could even add Istio policies and rules for ProductCatalog as if it were running in Kubernetes; for instance, you could enable mutual TLS for all inbound traffic to the VM.

Note that while this demo uses a Google Cloud VM, you could run the same example on bare metal or with an on-prem VM. In this way, you can bring Istio’s modern, cloud-native principles to virtual machines running anywhere.

Building the hybrid future

We hope that one or more of these hybrid Istio demos resonates with the way your organization runs applications today.
But we also understand that adopting a service mesh like Istio means taking on complexity and installation overhead, in addition to any complexity associated with moving to microservices and Kubernetes. Adopting a hybrid service mesh is more complex still, because you’re dealing with different environments, each with its own technical specifications.

Here at Google Cloud, we are dedicated to helping you simplify your day-to-day cloud operations with a consistent, modern, cross-platform setup. It’s why we created Istio on GKE, which provides a one-click install of Istio on Google Kubernetes Engine (GKE). And it’s the driving force behind our work on Cloud Services Platform (CSP), a product to help your organization move to (and across) the cloud—at your own pace, and in the way that works best for you. CSP relies on an open cloud stack—Kubernetes and Istio—to emphasize portability. We are excited to make CSP a reality this year.

Thank you for joining us in the service mesh series so far. Stay tuned for the keynotes and the Hybrid Cloud track at Google Cloud NEXT in April. After NEXT, we’ll continue the series with a few advanced posts on Istio operations.
Source: Google Cloud Platform

5 need-to-know networking sessions at Next ‘19

Google Cloud Next ‘19 has everything you need to navigate all the networking products, services, and innovations GCP has to offer. With almost 20 networking sessions at Google Cloud Next this year, we have something for you, whether you’re just starting to move data to Google Cloud or you’re looking to modernize your traffic management using the latest advancements in networking. Here are five sessions that you definitely shouldn’t miss.

1. A Year in GCP Networking

Ferris Bueller said it best: “[Networking] moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” This session provides a 360-degree view of the advancements we have made in networking over the past year, across the 20+ networking products in our portfolio, along the pillars of connecting, securing, optimizing, scaling, and modernizing your network. But we don’t just talk about these advancements in theory: U.S. retailer Target will share how they are using some of the latest networking products and services, right now, to advance their business objectives. Learn more.

2. The High-Performance Network

Google’s network backbone has thousands of miles of fiber optic cable, uses advanced software-defined networking, and provides edge caching services to deliver fast, consistent, and scalable performance. Get an inside look at this premium global network—built around the world and under the sea—and see how Google’s software innovations are designed to make the internet faster. Learn more.

3. Think Big, Think Global

If you’re a global organization, or you want to be one, Google’s global Virtual Private Cloud (VPC) offers the flexibility to scale and control how workloads connect regionally and globally. Learn the advantages of multi-region deployments, and check out tips and tricks on keeping your VPC secure, extending it to on-prem, deploying highly available services, and much more. Learn more.

4.
Traffic Director and Envoy-Based L7 ILB for Production-Grade Service Mesh and Istio

Service mesh is one of the most important networking paradigms to emerge for delivering multi-cloud applications and (micro)services. Istio is a leading open-source service mesh built using open proxies like Envoy. Be one of the first people to get a close look at Traffic Director, our new GCP-managed service that provides configuration and traffic control for service meshes. Also get a preview of L7 internal load balancing, which is essentially fully managed Traffic Director and Envoy proxies under the hood but looks like a traditional load balancer, making it easier to bring the benefits of service mesh to brownfield environments. Learn more.

5. Open Systems: Key to Unlocking Multi-Cloud and New Business With Lyft, Juniper, Google

Hear directly from leaders at Juniper, Google, and Lyft as they unpack what “open” means to them and how open source, open interfaces, and open systems are paving the path to seamless multi-cloud services and new business models. You will also hear in depth about several open-source projects, including Kubernetes, gRPC, Envoy, Traffic Director, and Tungsten Fabric (OpenContrail), and get a chance to ask questions about bringing these technologies to your own environments. Learn more.

While these five sessions are certainly highlights, it doesn’t end there. From network security, visibility, and monitoring to partner and third-party services discussions, Google Cloud Next ‘19 has the information you need to help you get the most from your network. Be sure to check out the session list here, and register here.
Source: Google Cloud Platform

Taking charge of your data: Understanding re-identification risk and quasi-identifiers with Cloud DLP

Preventing the exposure of personally identifiable information (PII) is a big concern for organizations—and not so easy to do. Google’s Cloud Data Loss Prevention (DLP) can help, with a variety of techniques to identify and hide PII, exposed via an intuitive and flexible platform.

In previous “Taking charge of your data” posts, we talked about how to use Cloud DLP to gain visibility into your data and how to protect sensitive data with de-identification, obfuscation, and minimization techniques. In this post, we’re going to talk about another kind of risk: re-identification, and how to measure and reduce it.

A recent Google Research paper defines re-identification risk as “the potential that some supposedly anonymous or pseudonymous data sets could be de-anonymized to recover the identities of users.” In other words, data that can be connected to an individual can expose information about them, and this can make the data more sensitive. For example, the number 54,392 alone isn’t particularly sensitive. However, if you learned this was someone’s salary alongside other details about them (e.g., their gender, zip code, alma mater), the risk of associating that data with them goes up.

Thinking about re-identification risks

There are various factors that can increase or decrease re-identification risk, and these factors can shift over time as data changes. In this blog post, we present a way to reason about these risks using a systematic and measurable approach.

Let’s say you want to share data with an analytics team and you want to ensure a lower risk of re-identification; there are two main types of identifiers to consider:

Direct identifiers – These are identifiers that directly link to and identify an individual.
For example, a phone number, email address, or social security number usually qualifies as a direct identifier, since it is typically associated with a single individual.

Quasi-identifiers – These are identifiers that do not uniquely identify an individual in most cases, but can in some instances or when combined with other quasi-identifiers. For example, someone’s job title may not identify most users in a population, since many people share the same job title. But some values like “CEO” or “Vice President” may only be present for a small group or a single individual.

When assessing re-identification risk, you want to consider how to address both direct and quasi-identifiers. For direct identifiers, you can consider options like redaction or replacement with a pseudonym or token. To identify risk in quasi-identifiers, one approach is to measure the statistical distribution of their values to find any unique ones. For example, take the data point “age 27”. How many people in your dataset are age 27? If very few people in your data set are age 27, there’s a higher potential risk of re-identification, whereas if a lot of people are aged 27, the risk is reduced.

Understanding k-anonymity

K-anonymity is a property that indicates how many individuals share the same value or set of values. Continuing with the example above, imagine you have 1M rows of data including a column of ages, and in those 1M rows only one person has age=27. In that case, the “age” column has a k value of 1. If there are at least 10 people for every age, then you have a k value of 10. You can measure this property across a single column, like age, or across multiple columns, like age + zip code. If there is only one person age 27 in zip code 94043, then that group (27, 94043) has a k value of 1.

Understanding the lowest k value for a set of columns is important, but you also want to know the distribution of those k values.
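As a rough illustration of what this measurement means (on a hypothetical toy dataset; Cloud DLP computes this for you at scale), a k value is just the count of rows sharing a combination of quasi-identifier values:

```python
from collections import Counter

# Hypothetical toy dataset of quasi-identifiers (not the post's sample data).
rows = [
    {"age": 27, "zip": "94043"},
    {"age": 27, "zip": "94043"},
    {"age": 34, "zip": "10001"},
    {"age": 34, "zip": "10001"},
    {"age": 34, "zip": "10001"},
    {"age": 51, "zip": "73301"},
]

def k_counts(rows, columns):
    """For each combination of quasi-identifier values, count how many rows
    share it -- that count is the combination's k value."""
    return Counter(tuple(r[c] for c in columns) for r in rows)

counts = k_counts(rows, ["age", "zip"])
print(min(counts.values()))  # lowest k in the dataset: 1 (the lone 51/73301 row)

# The distribution question: what fraction of rows sits below a k threshold?
threshold = 3
rows_below = sum(c for c in counts.values() if c < threshold)
print(rows_below / len(rows))  # 0.5 -- three of the six rows have k < 3
```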
That is, does 10% of your data have a low k value, or does 90%? In other words, can you simply drop the rows that have low k values, or do you need to fix them another way? A technique called generalization can help here, by allowing you to retain more rows at the cost of revealing less information per row. For example, “bucketing” ages into five-year spans would replace age=27 with age=”26-30”, allowing you to retain utility in the data while making it less distinguishing.

Understanding how much of your data is below a certain k threshold, and whether you drop the data or “generalize” it, are all forms of weighing re-identification risk against data loss and the utility value of the data. In this trade-off, you are asking questions like:

What k threshold is acceptable for this use case?
Am I okay with dropping the percentage of data that is below that threshold?
Does generalization allow me to retain more data value compared to dropping rows?

Let’s walk through one more example

Imagine you have a database that contains users’ age and zip code, and you want to ensure that no combination of age + zip is identifying below a certain threshold (like k=10). You can use Cloud DLP to measure this distribution and use Cloud Data Studio to visualize it (how-to guide here). Below is what this looks like on our sample dataset:

This shows the percentage of rows (blue) and unique values (red) that correlate to each k value. In the example above, we see that 100% of the data maps to fewer than 10 people. To fix this without dropping 100% of the rows, we applied generalization to convert ages to age ranges. Here is the graph after the transform:

Now only 3.9% of the rows and 21.15% of the unique values fall below the k=10 threshold.
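To make the bucketing step concrete, here is a minimal sketch of age generalization on hypothetical values (in the example above, the actual transform is performed by Cloud DLP):

```python
from collections import Counter

def generalize_age(age, span=5):
    """Bucket an exact age into a five-year range, e.g. 27 -> '26-30'."""
    lo = ((age - 1) // span) * span + 1
    return f"{lo}-{lo + span - 1}"

# Hypothetical ages: before generalization, each distinct age is its own
# group, so the rarest values have k=1.
ages = [26, 27, 28, 29, 30, 42]
before = Counter(ages)
after = Counter(generalize_age(a) for a in ages)

print(min(before.values()))  # 1: every exact age here is unique
print(after)                 # Counter({'26-30': 5, '41-45': 1})
```

After bucketing, the five ages between 26 and 30 collapse into one group with k=5, raising the k value at the cost of some precision per row.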
As a result, we reduced re-identifiability while preserving much of the data’s utility, dropping only 3.9% of the rows.

All hands on deck to prevent data loss

Of course, k-anonymity is just one way to assess quasi-identifiers and your risk of re-identification. Cloud DLP, for example, lets you assess other properties like l-diversity, k-map, and delta-presence. To learn more, check out this resource.

In addition, we plan to present a research paper on Estimating Reidentifiability and Joinability of Large Data at Scale at the IEEE conference in May, covering techniques for doing this kind of analysis at incredibly large scale. We also explore how these techniques can be used to understand additional use cases around joinability and data flow. These techniques are very useful for data owners who want to take a risk-based approach towards anonymization while gaining insights into their data. Hope to see you there!
Source: Google Cloud Platform