Packet Mirroring: Visualize and protect your cloud network

As networks grow in complexity, network and security administrators need to be able to analyze and monitor network traffic to respond to security breaches and attacks. However, in public cloud environments, getting access to network traffic can be challenging. Many customers use advanced security and traffic inspection tools on-prem, and need the same tools to be available in the cloud for certain applications.

Our new Packet Mirroring service is now in beta, and allows you to troubleshoot your existing Virtual Private Clouds (VPCs). With this service, you can use third-party tools to collect and inspect network traffic at scale, providing intrusion detection, application performance monitoring, and better security controls to help you ensure the security and compliance of workloads running in Compute Engine and Google Kubernetes Engine (GKE). For more, watch this video.

For instance, Packet Mirroring lets you identify network anomalies within and across VPCs, internal VM-to-VM traffic, traffic between end locations on the internet and VMs, and traffic between VMs and Google services in production. Packet Mirroring is available in all Google Cloud Platform (GCP) regions, for all machine types, for both Compute Engine instances and GKE clusters.

In short, Packet Mirroring allows you to:

- Help ensure advanced network security by proactively detecting threats.
- Respond to intrusions with signature-based detection of predetermined attack patterns, and identify previously unknown attacks with anomaly-based detection.
- Improve application availability and performance with the ability to diagnose and analyze what’s going on over the wire instead of relying only on application logs.
- Support regulatory and compliance requirements by logging and monitoring transactions for auditing purposes.

“Google Cloud’s new Packet Mirroring service accelerates our cloud adoption by giving us the visibility we need to secure our applications and protect our most precious asset, our customers.” – Diane Brown, Senior Director IT Risk Management, Ulta Beauty

Packet Mirroring is important for enterprise users from both a security and a networking perspective. You can use Packet Mirroring in a variety of deployment setups for different network topologies, such as VPC Network Peering and Shared VPC. In Shared VPC environments, for instance, an organization may have packet mirroring policies and collector backends that were set up by the networking or security team in the host project; the packet mirroring policy, meanwhile, is enabled in the service projects where the developer team runs its applications. This centralized deployment model makes Packet Mirroring easier for security and networking teams to operate, while remaining transparent to the development teams.

Packet Mirroring is natively integrated with Google’s Andromeda SDN fabric. This approach keeps Packet Mirroring performance and management overhead low, as the receiving software appliances running on the collector backends don’t need to perform any decapsulation to parse the received mirrored packet data.

Partnering for network security

We’ve been working with several partners to help us test and develop Packet Mirroring, and have received valuable feedback along the way.
Here are our Packet Mirroring partners, and how they work with the tool:

- Awake Security – Awake Delivers Security with Network Traffic Analysis in Google Cloud
- Check Point – CloudGuard IaaS Now Integrates with Google Cloud Packet Mirroring
- Cisco – Cisco Stealthwatch Cloud and Google Cloud continue partnership to secure customers
- Corelight – Finding Truth in the Cloud: Google Cloud Packet Mirroring & Corelight Network Traffic Analysis
- cPacket Networks – Googling Packets Inside Google Cloud
- ExtraHop Networks – ExtraHop Reveal(x) + Google Cloud Packet Mirroring
- Flowmon – Enhancing Network Visibility & Security in Google Cloud
- Ixia by Keysight Technologies – Improving cloud visibility with CloudLens and Packet Mirroring
- Netscout – NETSCOUT Extends Its Visibility Without Borders into Google Cloud with Packet Mirroring
- Palo Alto Networks – Announcing the new VM-Series Integration with Google Cloud Packet Mirroring Service

Help ensure security and compliance in the cloud

Our goal is to give you the right advanced security solutions for connecting your business to Google Cloud. With Packet Mirroring, you can reduce risk, diagnose issues to ensure the availability of your mission-critical applications and services, and meet compliance requirements. Click here to learn more about GCP’s cloud networking portfolio, and reach out to us with feedback at gcp-networking@google.com.
Source: Google Cloud Platform

8 production-ready features you’ll find in Cloud Run fully managed

Since we launched Cloud Run at Google Cloud Next in April, developers have discovered that “serverless” and “containers” run well together. With Cloud Run, not only do you benefit from fully managed infrastructure, up and down auto-scaling, and pay-as-you-go pricing, but you’re also able to package your workload however you like, inside a stateless container listening for incoming requests, with any language, runtime, or library of your choice. And you get all this without compromising portability, thanks to its Knative open-source underpinnings.

Many Google Cloud customers already use Cloud Run in production, for example, to deploy public websites or APIs, or as a way to perform fast and lightweight data transformations or background operations.

“Cloud Run promises to dramatically reduce the operational complexity of deploying containerized software. The ability to put an automatically scaling service in production with one command is very attractive.” – Jamie Talbot, Principal Engineer at Mailchimp

Cloud Run recently became generally available, both as a fully managed platform and on Anthos, and offers a bunch of new features. What are those new capabilities? Today, let’s take a look at what’s new in the fully managed Cloud Run platform.

1. Service level agreement

With general availability, Cloud Run now comes with a Service Level Agreement (SLA). In addition, it now offers data location commitments that allow you to store customer data in a specific region or multi-region.

2. Available in 9 GCP regions

In addition to South Carolina, Iowa, Tokyo, and Belgium, in the coming weeks you’ll also be able to deploy containers to Cloud Run in North Virginia, Oregon, Netherlands, Finland, and Taiwan, for a total of nine cloud regions.

3. Max instances

Auto-scaling can be magic, but there are times when you want to limit the maximum number of instances of your Cloud Run services, for example, to limit costs.
Or imagine a backend service like a database is limited to a certain number of connections—you might want to limit the number of instances that can connect to that service. With the max instances feature, you can now set such a limit.

Use the Cloud Console or Cloud SDK to set this limit:

gcloud run services update SERVICE-NAME --max-instances 42

4. More secure: HTTPS only

All fully managed Cloud Run services receive a stable and secure URL. Cloud Run now only accepts secure HTTPS connections and redirects any HTTP connection to the HTTPS endpoint. But having an HTTPS endpoint does not mean that your service is publicly accessible—you are in control and can opt into allowing public access to your service. Alternatively, you can require authentication by leveraging the “Cloud Run Invoker” IAM role.

5. Unary gRPC protocol support

Cloud Run now lets you deploy and run unary gRPC services (i.e., non-streaming gRPC), allowing your microservices to leverage this RPC framework. To learn more, read Peter Malinas’ tutorial on Serverless gRPC with Cloud Run using Go, as well as Ahmet Alp Balkan’s article on gRPC authentication on Cloud Run.

6. New metrics to track your instances

Out of the box, Cloud Run integrates with Stackdriver Monitoring. From within the Google Cloud Console, the Cloud Run page now includes a new “Metrics” tab that shows charts of key performance indicators for your Cloud Run service: requests per second, request latency, used instance time, CPU, and memory. A new built-in Stackdriver metric called container/billable_instance_time gives you insights into the number of container instances for a service, with the billable time aggregated from all container instances.

7. Labels

Like the bibs that identify the runners in a race, GCP labels can help you easily identify a set of services, break down costs, or distinguish different environments. You can set labels from the Cloud Run service list page in Cloud Console, or update labels with this command and flag:

gcloud run services update SERVICE-NAME --update-labels KEY=VALUE

8. Terraform support

Finally, if you practice Infrastructure as Code, you’ll be glad to know that Terraform now supports Cloud Run, allowing you to provision Cloud Run services from a Terraform configuration.

Ready, set, go!

The baton is now in your hands. To start deploying your container images to Cloud Run, head over to our quickstart guides on building and deploying your images. With the always free tier and the $300 credit for new GCP accounts, you’re ready to take Cloud Run for a spin. To learn more, there’s the documentation of course, as well as the numerous samples with different language runtimes (don’t miss the “Run on Google Cloud” button to automatically deploy your code). In addition, be sure to check out the community-contributed resources on the Awesome Cloud Run GitHub project. We’re looking forward to seeing what you build and deploy!
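The container contract described above (a stateless container listening for incoming HTTP requests on the port Cloud Run provides via the PORT environment variable) can be sketched with nothing but the Python standard library. This is an illustrative sketch, not an official sample; the handler and greeting are made up:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Responds to every GET with a plain-text greeting."""

    def do_GET(self):
        body = b"Hello from Cloud Run!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet; a real service should log requests

def serve():
    # Cloud Run tells the container which port to listen on via $PORT.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()

# A container image whose entrypoint calls serve() satisfies the contract:
# it is stateless, starts quickly, and answers HTTP requests on $PORT.
```

Any language or framework that binds to $PORT works the same way; this is what makes the "package your workload however you like" promise possible.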
Source: Google Cloud Platform

New climate model data now in Google Public Datasets

Exploring public datasets is an important aspect of modern data analytics, and all this gathered data can help us understand our world. At Google Cloud, we maintain a collection of public datasets, and we’re pleased to collaborate with the Lamont-Doherty Earth Observatory (LDEO) of Columbia University and the Pangeo Project to host the latest climate simulation data in the cloud.

The World Climate Research Programme (WCRP) recently began releasing the Coupled Model Intercomparison Project Phase 6 (CMIP6) data archive, aggregating the climate models created across approximately 30 working groups and 1,000 researchers investigating the urgent environmental problem of climate change. The CMIP6 climate model datasets include rich details on many aspects of the climate system, including historical and future simulations. The data are now accessible in Cloud Storage and will be in BigQuery soon. Along with making CMIP6 available on Google Cloud, the Pangeo Project develops software and infrastructure to make it easier to analyze and visualize climate data using cloud computing.

On Google Cloud, this dataset will be continuously updated and available to researchers around the globe to use for their own projects—without the constraints of downloading terabytes or even petabytes of data. The entire archive may eventually contain 20 PB of data, of which about 100 TB are currently available in the cloud. You can request data from Pangeo’s CMIP6 Google Cloud Collection in this form.

“It’s a very live data set. It’s going to be updated over the next year as the data come online and as people’s needs arise,” says Ryan Abernathey, associate professor of Earth and environmental sciences at Columbia University and LDEO. He emphasizes the practical impact of this project.
“What people actually care about most is not the global mean temperature, because no one lives in the ‘global mean world.’ People care about the local impacts of drought or extreme rainfall, which can cause severe hardship for society. With these high-resolution simulations of rare events, we get much better information for planning in response to expected changes in the climate.”

What you’ll find in the CMIP6 data

The models in CMIP6’s data range from high-resolution simulations based on historical data from 1850 onward to hypothetical scenarios that manipulate key variables. For example, Abernathey asks, “What if carbon dioxide (CO2) were to instantaneously quadruple its concentration overnight? That’s a very useful experiment, not because it helps us make a detailed projection about the future, but because it helps us probe our physical understanding of how the climate system responds to CO2.”

Each of the CMIP6 models includes dozens of variables, ensemble members, and scenarios, leading to large, unwieldy datasets. But Pangeo, an ensemble of open-source Python tools for big data analysis, makes it easier to perform large-scale computations on CMIP6 and other similarly large datasets.

To help researchers work with the multidimensional datasets of climate research, Abernathey and his colleagues at LDEO and the National Center for Atmospheric Research (NCAR) drew on funding from the National Science Foundation (NSF) and computing support from Google Cloud to develop Pangeo, an open-source platform aimed at accelerating geoscience data analysis. Pangeo can be run on nearly any high-performance computing system, including Google Kubernetes Engine (GKE), which supports easy deployment with autoscaling (both up and down) and integration with other Google Cloud tools such as Cloud Storage and BigQuery.
The Pangeo community shares expertise, such as use cases for different domain-specific applications, and contributes to the development of open-source tools, like a cloud-optimized data storage format called Zarr.

“The CMIP project has grown since its early days, and now is seeing tremendous growth beyond the U.S. and E.U. into the developing world,” says V. Balaji, a computational climate scientist on leave from Princeton University. Currently at the Institut Pierre-Simon Laplace in Paris, Balaji has been involved with all aspects of CMIP, from defining the experiments and running the simulations to analyzing the output and designing the Earth System Grid Federation (ESGF), a network of services that underpin the global data infrastructure enabling this critical research enterprise.

“For new entrants, and for academic researchers worldwide, Pangeo in the cloud represents an exciting new opportunity to broaden the user base of very large-scale climate data, without the need to acquire supercomputer-scale storage and analysis facilities,” says Balaji. “It bridges what I call the gap between ‘inspiration-driven’ and ‘industrial strength’ science, enabling a scientist to explore the data and design their own analysis, and immediately apply their findings at very large scale. The progress of Pangeo in the cloud will inform our own architectural choices in designing the future of the global climate data infrastructure.”

The Pangeo team at LDEO and NCAR recently hosted a hackathon to jumpstart the analysis of the CMIP6 data on Google Cloud for pressing scientific questions. One participant—Henri Drake, a Ph.D.
candidate in MIT’s Program in Atmospheres, Oceans, and Climate—created a tutorial for analyzing simulations of global warming in state-of-the-art CMIP6 models, under the worst-case scenario of uncontrolled greenhouse gas emissions. These CMIP6 model projections “reflect millions of lines of model code and represent everything from forest transpiration in the Amazon rainforest and thunderstorms in the U.S. Midwest to the formation of meltwater ponds on Arctic sea ice,” says Drake. “We would need a huge supercomputer to run the simulations from the model source code ourselves. Thankfully, the climate modeling community does this for us by making their output publicly available.”

Drake used these tutorials as a teaching assistant for the Climate Change course at MIT to demonstrate the ease of cloud computing for data-intensive climate science research, and also the value of open-source tools like the Pangeo software stack on Google Cloud. “The CMIP6 dataset was already technically publicly available; it just was not very accessible,” says Drake. “The cloud-based data and computation, when combined with the Pangeo software stack, enabled me to make calculations in just a few hours that could have taken weeks using more conventional methods. Using the Pangeo binder, it was easy to make these calculations available to the rest of the world.”

The CMIP6 data join many other weather and climate-related datasets available through Google’s Public Dataset program at no charge. By making data more accessible and usable with BigQuery and Cloud Storage, we support academic research by accelerating discoveries and promoting innovative solutions to complex problems. For Abernathey, the benefits of cloud computing are a particularly good match for the needs of scientific research: “With Google Cloud, you’ve essentially got a supercomputer just sitting right there, so you can directly process the data at a very high speed.”

Get started with your own project by requesting data here.
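The pattern that makes these "weeks to hours" speedups possible is out-of-core, chunked computation: tools like Dask (part of the Pangeo stack) reduce over one chunk of a dataset at a time instead of loading a terabyte-scale array into memory. The core idea can be illustrated with a plain-Python sketch; this is a toy illustration of the concept, not Pangeo or Dask code:

```python
def chunked_mean(chunks):
    """Compute a mean by streaming over chunks of data, so the full
    dataset never has to fit in memory at once. Real out-of-core tools
    like Dask also parallelize the per-chunk work across a cluster."""
    total, count = 0.0, 0
    for chunk in chunks:
        total += sum(chunk)   # reduce one chunk...
        count += len(chunk)
    return total / count      # ...then combine the partial results

# e.g. a "dataset" split into chunks that are processed one at a time:
temperatures = [[14.1, 14.3], [14.6, 14.2, 14.8]]
```

In practice, the CMIP6 data are stored in Zarr, a format designed around exactly this kind of chunked access, so libraries like xarray can open the archive lazily and pull down only the chunks a computation touches.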
Source: Google Cloud Platform

Learning—and teaching—the art of service-level objectives — CRE Life Lessons

Avid readers of CRE Life Lessons blog posts (there are dozens of us!) will appreciate the value of well-tuned service-level indicators (SLIs) and service-level objectives (SLOs). These concepts are fundamental building blocks of a site reliability engineering (SRE) practice. After all, how can you have a meaningful discussion about the reliability you want your services to achieve without properly measuring that reliability?

The Customer Reliability Engineering (CRE) team has helped many of Google Cloud’s customers create their first SLOs and better understand the reliability of their services. We want to make sure that teams everywhere can implement these principles. We’re pleased to announce that we’re making all the materials for our Art of SLOs workshop freely available under the Creative Commons CC-BY 4.0 license for anyone to use and re-use—as long as Google is credited as the original author. We’ve been inviting customers from around the world to this workshop for the past year. From now on, anyone can run their own version of this workshop to teach their coworkers, their customers, or conference attendees why all services need SLOs.

What’s covered in the Art of SLOs

The Art of SLOs teaches the essential elements of developing SLOs to an audience from across the realms of development, operations, product, and business. The workshop slides are accompanied by a 28-page supporting handbook for participants, which is part reference and part background material for the practical problems that workshop participants engage with.

In the workshop, we start by making a business case for the value of SLOs based on two fundamental assertions: first, that reliability is the most important feature of any service, and second, that 100% is the wrong reliability target for basically everything. These assertions underlie the concept of an error budget, a non-zero quantity of allowable errors in a given time window that arises from an SLO target set somewhere just short of 100%.
The tension between a fast pace of innovation and service reliability can be resolved by aiming to roll out new features as fast as possible without exhausting this error budget.

Once everyone is (hopefully) convinced that SLOs are a Good Thing, we explain how to choose good SLIs from the wealth of telemetry generated by a service running in production, and introduce the SLI equation, our recommended way of expressing any SLI. We cover two alternate ways of setting your first SLO targets, which arise from making different tradeoffs, and offer advice on how to converge these targets over time. We introduce a hands-on example—the server-side infrastructure supporting a fictional mobile game called Fang Faction—and use it to demonstrate the process of refining an SLI from a simple, generic specification to a concrete implementation that could be measured by a monitoring system.

Art (noun): A skill at doing a specified thing, typically one acquired through practice.

Critically, participants put this newly acquired knowledge to practical use straight away, as they develop more SLIs and SLOs for Fang Faction. Typically, when we run this workshop with customers, we break them up into groups of eight or so and unleash them on the workshop problems for 90 minutes. Each group is paired with an experienced SRE volunteer, who facilitates the discussion, encourages participation, and keeps the group on track.

Run your own SLO workshop!

If this sounds interesting, you’ll want to check out the Facilitators Handbook, which has a lot more information on how to organize an Art of SLOs workshop. If you don’t have a whole team to educate, you might be interested in our Measuring and Managing Reliability course on Coursera, which is a more thorough, self-paced dive into the world of SLIs, SLOs, and error budgets.
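The error-budget arithmetic behind these ideas is simple enough to sketch in a few lines of Python. This is a toy illustration of the concepts (an SLI as good/valid events, and an SLO target just short of 100% leaving a non-zero budget), not material from the workshop itself:

```python
def error_budget(slo_target):
    """Fraction of events (or time) allowed to be bad: an SLO target
    just short of 100% leaves a small, non-zero budget,
    e.g. 0.999 -> 0.001."""
    return 1.0 - slo_target

def allowed_downtime_minutes(slo_target, window_days=30):
    """Allowed 'bad minutes' in a rolling window for a time-based SLO."""
    return error_budget(slo_target) * window_days * 24 * 60

def sli_percent(good_events, valid_events):
    """The SLI expressed as good events as a percentage of valid events."""
    return 100.0 * good_events / valid_events

# A 99.9% availability SLO leaves roughly 43 minutes of allowable
# downtime per 30-day window; a team can spend that budget on risky
# rollouts, and must slow down when it is close to exhausted.
```

Working through numbers like these is a quick way to see why 100% is the wrong target: it leaves an error budget of exactly zero, so any change at all becomes unaffordable.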
Source: Google Cloud Platform

Modernize your apps with Migrate for Anthos

In a perfect cloud world, you would host all your applications in containers running on Kubernetes and Istio, benefitting from the portability and improved resource utilization of containers, plus a robust orchestration platform with advanced application management, networking, and security functionality. This is easy to do if you’re developing a new application, but it can be hard for existing applications to take advantage of those capabilities.

Many of the applications that you may want to move to the cloud have been around a long time, and you may not have the application-specific knowledge that would be required to rewrite them to be more cloud-native—or it would be incredibly time-consuming to do so. Another option is to lift-and-shift to a virtual machine (VM) hosting platform like Compute Engine, but that means you still need to maintain the VMs. Even if you’re not able to fully modernize an existing app, it would still be great to get some of the benefits of containers and Kubernetes.

What is Migrate for Anthos?

Enter Migrate for Anthos, a fast and easy way to modernize your existing applications with a service that encapsulates them in a container. Moving your physical servers or existing VMs into Kubernetes containers gives you crucial portability and resource utilization benefits without having to rewrite the underlying application. Since Migrate for Anthos is built for Google Kubernetes Engine (GKE), you also automatically capture the scaling and flexibility benefits of a managed Kubernetes environment in the cloud. Migrate for Anthos recently became generally available.

Converting an application with Migrate for Anthos happens in two phases. First, it creates a generic container wrapper around your application that makes it seem like it is still running in a full VM environment. Then, you launch Migrate for Anthos software on your Kubernetes cluster that runs the containerized application.
You can find more details about this in the documentation and in our blog post, Migrating from Compute Engine to Kubernetes Engine with Migrate for Anthos.

As the name suggests, Migrate for Anthos works with Anthos GKE. However, you can also use Migrate for Anthos with only GKE—all it requires is your application and a GKE cluster running the Migrate for Anthos software.

Getting started with Migrate for Anthos

Migrate for Anthos works with a variety of workloads, but not all. It’s particularly adept at migrating legacy applications, stand-alone applications, and monolithic applications. As you start the modernization process, here are some questions to ask to determine whether to use Migrate for Anthos with your applications:

1. Should this app be in the cloud?

By its nature, the cloud may not be able to support some characteristics of your on-prem environment, such as geography and legal compliance. The best way to find out whether the cloud will work for each of your applications is to plan out a full migration. That will allow you to identify groups of applications that can benefit from cloud offerings such as a global network and ease of resizing resources. After that, try out a proof of concept by testing the apps in the cloud to see if they fit your business needs.

2. Should this app be in Kubernetes?

Containerizing an application simplifies workload administration, improves scalability (both up and down), and increases host utilization. Kubernetes orchestrates the containers and GKE handles node upgrades, while add-ons like Istio let you manage network and security policies independently of your application. With those advantages, it’s easy to think that containers are always the right way to go, but there are some cases where it may make sense to stick with VMs. Strict hardware requirements, specialized kernel modules, and license constraints may be harder to accommodate with containers, negating their advantages.

3. Should this app migration use Migrate for Anthos?

Migrating your apps or workloads to the cloud isn’t just about shifting where the compute resources run; it’s also an opportunity to modernize them with containers. Using Migrate for Anthos (or Migrate for Compute Engine) gives you the ability to get your workloads into the cloud quickly, with minimal upfront downtime that’s easy to plan for. However, even if you use the Migrate for Anthos wrapper, your application is still the same application. The benefits of the modern platform may not outweigh the limitations of a legacy application, and a rewrite may be the only way to meet your business needs. There are also some specific services from your VM that may not work with Migrate for Anthos, for example those with licensing requirements.

Migrate for Anthos can also be the first step in a larger migration effort. Once you’ve migrated the application to GKE, you can gradually break up a monolithic app into microservices by manually rewriting parts. Spreading out the migration effort gets you into the cloud sooner, giving you more time to modernize.

Next steps

A successful modernization starts with creating a full migration plan, testing the workloads, and monitoring them. You can experience the benefits of modernization with Migrate for Anthos by picking a small workload and trying it out for yourself! As you test different workloads for your migration, be sure to reference the documentation. And keep an eye out for an upcoming blog series on the migration process. Our first blog steps through how to modernize a Compute Engine instance and host it on GKE.
Source: Google Cloud Platform

ShapeMask: High-performance, large-scale instance segmentation with Cloud TPUs

Many of us take the ability to see the world for granted, but the everyday task of identifying objects of various shapes, colors, and sizes is a challenging feat. Yet, this type of technology is critical for a range of applications, from medical image analysis to photo editing. As part of our ongoing effort to create software that can perform useful visual tasks, we’ve developed a new image segmentation model called ShapeMask that offers a great combination of high accuracy and high scalability. In this blog we’ll look at what exactly ShapeMask is, what its advantages are, and how you can get started with it.

An overview of ShapeMask

The task that ShapeMask performs is called “instance segmentation,” which involves identifying and tracing the boundaries of specific instances of various objects in a visual scene. For example, in a cityscape image that contains several cars, ShapeMask can be used to highlight each car with a different color. Each of these highlighted areas is called a “mask.”

ShapeMask builds on a well-known object detection model called RetinaNet (this Cloud TPU tutorial has more information), which can detect the location and size of various objects in an image but does not produce object masks. ShapeMask initially locates objects using RetinaNet, but then gradually refines the shapes of these detected objects by grouping pixels that have a similar appearance. This new approach allows ShapeMask to create accurate masks. We have made a well-optimized implementation of the ShapeMask model available open-source here.

A pictorial overview of how ShapeMask starts with detection boxes and gradually refines object shapes (source of the image)

A highly scalable solution

Many businesses depend on automated image segmentation to enable a broad set of applications. These businesses often work with large, frequently changing datasets, and their researchers and engineers need to experiment with a variety of ML model architectures.
To iterate quickly on large, realistic datasets, they need to be able to scale up the training of their image segmentation models. One advantage ShapeMask has over other image segmentation models is that it can train efficiently at large batch sizes, which makes it possible to distribute ShapeMask training across a large number of ML accelerators.

Cloud TPUs are designed to enable exactly the type of scaling that ShapeMask uses. For example, a machine perception engineer can experiment with ShapeMask on a small dataset on a single Cloud TPU v3 device—which has eight cores—and then use the same code to quickly train the same ShapeMask model on a much larger dataset using a “slice” of a Cloud TPU Pod with 32, 128, or 256 cores. Without any code changes, ShapeMask can scale to a batch size of 2048, while still achieving an accuracy of 34.7 mask mAP (more on mAP here). With a 256-core Cloud TPU v3 slice, the ShapeMask model can be trained on the standard COCO image segmentation dataset in just under 40 minutes—a big improvement over waiting hours to train ShapeMask (or a comparable Mask R-CNN model) on a single Cloud TPU device.

High accuracy, too

For customers that require the absolute highest image segmentation accuracy, the ShapeMask model can be trained to an accuracy of 38 mask mAP on a Cloud TPU v3 device with a batch size of 64. For comparison, our reference implementation of Mask R-CNN trains to an accuracy of 37.3 mask mAP on a Cloud TPU v3 device with a batch size of 64 and requires about 200 more minutes than ShapeMask to train.

All the statistics mentioned above were collected using TensorFlow version 1.14. While we expect you to get similar results, your results may vary. More information about ShapeMask and its performance is available in this methodology doc.

Getting started—in your browser or in a Google Cloud project

It’s easy to start experimenting with ShapeMask right in your browser using a free Cloud TPU in Colab.
(Just select the “Runtime” tab and change the runtime type to “Python 3” and “TPU”.) You can also get started with a well-optimized, open-source implementation of ShapeMask in your own Google Cloud projects by following this ShapeMask tutorial. If you’re new to Cloud TPUs, you can get familiar with them by following our quickstart guide. Cloud TPU Pods are currently available in beta, and we encourage you to contact a Google Cloud sales representative to evaluate them.

Acknowledgements

Thanks to everyone who contributed to this post and who helped develop this new capability, including Tsung-Yi Lin, Anelia Angelova, Pengchong Jin, Zak Stone, Wes Wahlin, Adam Kerin, Vishal Mishra, David Shevitz, and Allen Wang.
Source: Google Cloud Platform

Get the word out: AutoML Translation goes GA, plus updates to Translation API

Translation is a critical function for many industries, whether it’s a media organization delivering the news or a software company making sure its documentation is understood in many different languages. As a result, many businesses are using machine learning to help them translate faster and more efficiently than ever. Today, we’re excited to announce updates to our Cloud AI translation services, including the general availability of AutoML Translation and advanced features on Translation API.

How customers are using AutoML Translation

Since announcing AutoML Translation in beta at Next 2018, we have seen customers use the technology to build domain-specific translation into their workflows. One such example is Bloomberg, which is using it to provide its analysts with a competitive edge in a fast-moving industry. “It’s extremely important for our customers to access information from wherever they are,” says Ted Merz, Global Head of News Product at Bloomberg LP. “We decided we wanted to add the ability for people across the world to see news translated into the language that was most useful to them. Google’s AutoML Translation helped us improve the fluency of translation for financial jargon and terms, ensuring that the stories on Bloomberg’s First Word service could be delivered in real time.”

Updates to our Translation API

Alongside the general availability of AutoML Translation, we’re also expanding the ways customers can use our Translation API by introducing Advanced and Basic editions of the API. By offering two versions, we’re giving customers the flexibility to choose the functionality that best serves the needs of their business.

The first edition, Translation API Advanced, includes our latest feature updates to Translation API. We will continue to build advanced features into this API, such as Glossary (see image below) and model selection, which were launched this year.
This edition of the API is an iteration of our Translation API v3, which launched at Google Cloud Next and is now generally available.

Use the Glossary to translate brand and domain-specific terms with Translation API Advanced

Translation API Advanced only supports Service Accounts for authentication, so that customer-managed resources can be protected with secure authentication. For a more detailed walkthrough of developing with Translation API Advanced, check out this blog post.

One customer that has been evaluating translation APIs is AllTrails, which helps people explore the outdoors with the largest collection of detailed, hand-curated trail maps, as well as trail reviews and photos crowdsourced from a community of over 10 million users. Now that Translation API Advanced is generally available, they’re excited to put it into production in their organization. Says James Graham, Head of Engineering at AllTrails, “Landmark and hiking trail names can have many possible translations that could be correct in different contexts, and having inaccurate translations can result in a bad day on the trail. We are looking forward to using the Glossary feature on Translation API Advanced to control the names of places in the translation results provided by Google’s great machine translations.”

The second edition, Translation API Basic, is designed for customers integrating translation services into chat applications, social media and gaming, and makes our API as easy to use as possible. Translation API Basic, the new name for Translation API v2, includes support for API keys, along with the simplicity and quality the API is known for, and we’ll continue to deliver quality improvements to support our customers.

With this announcement, Translation API Advanced joins Translation API Basic as a GA service. You can learn more about our Cloud Translation services on our website.
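To make the difference between the editions concrete, the Advanced edition (v3) accepts a glossary reference directly in the request body. The sketch below only builds that JSON body locally; the project ID, location, and glossary name are placeholders, and an actual call requires service-account credentials.

```python
# Illustrative only: the JSON body for a v3 translateText call with a
# glossary applied. Project, location, and glossary ID are placeholders.
project_id = "my-project"
location = "us-central1"      # glossaries are regional resources
glossary_id = "brand-terms"

parent = f"projects/{project_id}/locations/{location}"
body = {
    "contents": ["AllTrails has maps for the Eagle Peak trail."],
    "sourceLanguageCode": "en",
    "targetLanguageCode": "de",
    "glossaryConfig": {
        # Fully qualified glossary resource name
        "glossary": f"{parent}/glossaries/{glossary_id}",
    },
}
# POST this body to https://translation.googleapis.com/v3/{parent}:translateText
# with a service-account OAuth token in the Authorization header.
print(body["glossaryConfig"]["glossary"])
# → projects/my-project/locations/us-central1/glossaries/brand-terms
```

A Basic (v2) request, by contrast, can authenticate with just an API key and has no glossary support.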
For a more technical review of the two editions of the Translation API, view the documentation.

Expanding the serverless ecosystem for Cloud Run

Increasingly, organizations want to write applications in containers, without having to worry about provisioning, managing and scaling the underlying infrastructure. They also want to pay only for what they use. Cloud Run, which recently became generally available, makes this possible, with additional facilities like built-in logging and monitoring through Stackdriver.

Organizations also want to leverage their existing tools and workflows with Cloud Run’s native serverless capabilities, or add tools or third-party services that enhance the developer experience. This means being compatible with CI/CD tooling and third-party observability tools for monitoring the serverless stack, and integrating open source DevOps and security tools.

Building on strong partnerships with leading ISVs, we are proud to have a variety of partners that support Cloud Run, focusing on three key use cases:

- CI/CD – Our partner ecosystem ensures that an application that is deployed on Cloud Run is supported natively in the build, test, deploy, and run stages.
- Observability – Integrations with APM, logging, or monitoring solutions ensure that they can collect telemetry and provide insights into any application that is deployed on Cloud Run.
- Security – Our integrations ensure the security and policy enforcement of the applications deployed on Cloud Run.

Below are some of the solutions our partners have built in these areas:

CI/CD

- CircleCI – A new Cloud Run orb by CircleCI helps execute a CI/CD pipeline that tests application code and automatically builds a Docker image, subsequently deploying the image as a Cloud Run service on GCP. More here.
- CloudBees – CloudBees Core can be used to deploy a serverless preview environment with GitHub and Cloud Run, allowing developers to test their changes before a PR merge and deploying to production. More here.
- GitLab – GitLab provides templates allowing developers to easily build and deploy Knative services that can be deployed to Cloud Run.
More here.
- JFrog – Developers can build serverless functions as Docker containers using JFrog Pipelines, which pushes them into JFrog Container Registry and deploys them to Cloud Run for Anthos. More here.

Observability

- Datadog – Datadog’s Cloud Run integration allows you to collect metrics, logs and traces from across your cluster and view them in real time as your application scales up and down. More here.
- Dynatrace – Dynatrace’s automated monitoring supports both Cloud Run fully managed and Cloud Run for Anthos, and provides full visibility into your applications, including issues that affect performance, enabling management of operations and costs across the environment. More here.
- New Relic – The New Relic Kubernetes solution for Google Cloud on Anthos gives you infrastructure- and application-centric views into your Kubernetes clusters. More here.

Security

- Octarine – Octarine takes a service mesh approach to runtime visibility and security for Kubernetes, either by leveraging Istio or by deploying a standalone, lightweight Envoy-based service mesh. More here.
- Palo Alto Networks – Using capabilities from Palo Alto Networks subsidiary Twistlock, Prisma Cloud provides security for both Cloud Run fully managed and Cloud Run for Anthos. More here.
- Sumo Logic – Sumo Logic integrates metrics and logs for Kubernetes clusters via Fluentd, Fluent Bit, Prometheus, Falco, and Stackdriver for GKE control plane logs. More here.
- Sysdig – Sysdig’s open platform provides a DevSecOps workflow for Cloud Run that embeds security, maximizes availability and validates compliance across the serverless lifecycle. More here.

If your teams use any of these tools, they should find it easy to adapt their workflows to Cloud Run. Over time, we hope to build broader and deeper partnerships for both Cloud Run fully managed and Cloud Run for Anthos.

We would like to thank Anil Dhawan, Product Manager, for his sustained guidance in serverless product integrations.

Protecting your GCP infrastructure with Forseti Config Validator part four: Using Terraform Validator

In the previous posts of this series, we discussed how you can secure your infrastructure at scale by applying security policies as code to continuously monitor your environment with the Config Validator policy library and Forseti. In this article, we’ll discuss how you can reuse the exact same policies with Terraform Validator to preventively check your infrastructure deployments and block bad resources from being deployed in Google Cloud Platform (GCP). The goal is to catch non-compliant resources in your CI/CD pipeline before they get deployed, while Forseti continuously monitors the infrastructure already running in your environment for violations.

It is best practice to keep your security policies in a separate repository and integrate other tools with it from there, so that you have a single source of truth for your security requirements. Whenever you need a new policy to be applied everywhere, you can simply update your constraint repository once, making sure that your CI/CD pipeline and Forseti instances always use the latest version of it.

Using Infrastructure as Code for your deployments

A good way to control what resources get deployed in your cloud environment is to automate your deployments (and remove direct write access from your users). This can be achieved using a variety of tools, but in this article we’ll focus on Terraform.

Terraform lets you describe the infrastructure you would like to deploy as code, using template files written in HCL. Once your template file is ready to be deployed, you can create a Terraform plan to preview what Terraform will change in your target environment before actually applying the changes. Terraform compares your template to what it knows about the state of your infrastructure, which it stores in a state file, either locally or in a remote backend such as a Cloud Storage bucket.
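For context, a Terraform template for the kind of resource this demo validates later (a Cloud Storage bucket) might look like the following minimal sketch; the resource and bucket names are illustrative placeholders:

```hcl
# Minimal, illustrative template; names are placeholders.
resource "google_storage_bucket" "demo" {
  name     = "my-demo-bucket"  # bucket names must be globally unique
  location = "europe-north1"   # the attribute our location constraint will check
}
```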
It’s a best practice to store the Terraform plan output as an artifact that can be tested or audited later on. For example, running terraform plan -out=terraform.plan saves the plan to a file; to actually apply the changes, you can then run terraform apply on that plan output file once everything looks good to you. In our example, we’ll add these steps to the CI/CD pipeline, and add some extra tests between creating the plan and applying it to our infrastructure. We will use a demo constraint to ensure that we can actually catch non-compliant resources before they get deployed in GCP.

Validating your policies in your CI/CD pipeline

Another best practice is to use a CI/CD pipeline to deploy your Terraform templates (or equivalent). It is important that all changes to your infrastructure go through a code change (and/or code review) and get deployed by your pipeline, unless there is an absolute necessity to bypass it (like a break-glass scenario). Also, you don’t have to deploy your entire cloud environment using a single pipeline; that becomes hard to maintain, and when things go wrong, creates a large blast radius. It’s advisable to have different teams manage smaller pieces of your infrastructure, each with their own pipelines and their own separate Terraform state files.

In general, your pipeline should follow these basic steps:

1. A code change triggers a build (either a merge to master, or a scheduled pull from your CI server).
2. The new code is checked out and some basic tests are run on it (sanity testing).
3. If all tests pass, run the terraform plan command on your template and store the output as an artifact (for instance in a GCS bucket).
4. Run the terraform-validator validate command on your plan, applying the latest policies from your separate policy repository.
5. If everything passes, run your usual infrastructure deployment steps, according to your internal processes. This basically comes down to how your terraform apply command gets executed.
This might happen as soon as all tests pass in a lower-level environment, or it could trigger a new set of tests for a higher-level environment (smoke tests, pen tests, load tests, etc.), and/or manual reviews and approvals.

A fellow Googler, Morgante Pell, presented a demo of this pipeline at Next ’19. Here is a basic diagram of how to integrate the Config Validator terraform-validator tool in your CI/CD deployments:

Cloud Build pipeline example

This pipeline uses a GitHub repository as a source for your Terraform templates, and each merge to master triggers a new build in Cloud Build, Google Cloud’s serverless CI/CD service, to test and deploy the new code. We won’t go over all the steps to deploy this particular solution, as there will be more public documentation on this soon, but you should install the Cloud Build App for GitHub to add the Cloud Build triggers on specific events in your repository.

At the end of your setup, you should see a confirmation that you allow GitHub to trigger your Cloud Build pipeline via the Cloud Build GitHub App. Next, verify in your GitHub repository that the Cloud Build App has been successfully installed. Finally, add the right triggers in your Cloud Build configuration.

In order for the Cloud Build service to be able to deploy your resources successfully, you need to ensure its service account has the right permissions (i.e., roles) in your target project. This service account is formatted like: [PROJECT_NUMBER]@cloudbuild.gserviceaccount.com.
For instance, this demo repository needs at least the editor role for your CI/CD and application projects.

Here is a simple example of what the CI part of your Cloud Build pipeline could look like. The cloudbuild.yaml file configures a four-step pipeline for the deployments/app1/dev directory:

1. Run terraform init
2. Run terraform plan and save it into a terraform.plan file (binary)
3. Convert the terraform.plan file into JSON
4. Run terraform-validator validate on the Terraform plan to look for violations

Note: We are using the latest public builder image for terraform-validator: gcr.io/config-validator/terraform-validator. This image contains the latest release of terraform-validator, which only supports Terraform versions 0.12 or higher. If you need support for prior versions, you can use a previous release, or you can build your own image that includes all the additional steps and tests that you need for your Terraform deployments.

For the CD part, you can configure a second Cloud Build trigger (cloud-build-apply.yaml) to simply run terraform apply.

Testing our setup

For our use case, we will simply add a policy to the policy-library/policies/constraints folder directly. In more realistic scenarios, these policies should be maintained by a separate team, in a different repository (you may want to use a Git submodule to maintain consistency between your repositories).

The policy that we will enforce here simply restricts the location of Cloud Storage buckets to a fixed list of regions (say, for compliance reasons). For demonstration purposes, we set the authorized regions for our bucket to be any region in Europe.

This way, if someone needs to create a Cloud Storage bucket in your environment, they first need to create or modify a Terraform template and merge the code change to your repository. This triggers the Cloud Build build and runs the security checks, which include all the constraints in the policies/constraints folder.
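Assembled as a Cloud Build config, the four CI steps described above might look something like this sketch of cloudbuild.yaml; the Terraform image tag, working directory, and policy path are illustrative assumptions:

```yaml
steps:
# 1. Initialize Terraform for the target directory.
- id: terraform-init
  name: hashicorp/terraform:0.12.29
  dir: deployments/app1/dev
  args: ['init']
# 2. Create a plan and save it as a binary artifact.
- id: terraform-plan
  name: hashicorp/terraform:0.12.29
  dir: deployments/app1/dev
  args: ['plan', '-out=terraform.plan']
# 3. Convert the binary plan to JSON for the validator.
- id: terraform-show
  name: hashicorp/terraform:0.12.29
  dir: deployments/app1/dev
  entrypoint: sh
  args: ['-c', 'terraform show -json terraform.plan > terraform.json']
# 4. Check the planned resources against the policy library.
- id: terraform-validator
  name: gcr.io/config-validator/terraform-validator
  args: ['validate', '--policy-path=policy-library', 'deployments/app1/dev/terraform.json']
```

The final step fails the build when the plan contains resources that violate any constraint, which is what blocks non-compliant changes from ever reaching terraform apply.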
A good practice before testing this setup is to test your code locally before pushing your Terraform templates to your common repository, which will trigger a Cloud Build build and run all the tests referenced in your Cloud Build YAML file. Once you are confident enough in your code state, you can merge it to the right branch in your setup to trigger a Cloud Build build on it.

On our first try, we add a bucket in the asia-southeast1 region, which is non-compliant with our constraint. Running terraform-validator locally, we can see that it raises a violation. Pushing this code to master anyway confirms that our CI/CD pipeline catches the violation too.

Next, we fix the violation by updating the bucket region in our template (to europe-north1 this time). Now that our local tests pass, we can push to master, test our CI/CD pipeline, and double-check that everything was tested and passed in Cloud Build.

Conclusion

This concludes this series on protecting your GCP infrastructure at scale with Config Validator. We have seen that it is possible to efficiently control what gets deployed in your environments, using Cloud Build, Terraform and the terraform-validator tool, in a serverless and fully automated way. This lets you enable your users to deploy in GCP while still adding strict guardrails around what they can deploy, and where. You do this by enforcing that all deployments to the cloud be done via automation. The only backdoor to bypass your pipelines should be for emergencies, and it should be tightly controlled. Finally, you can trust these controls to prevent bad resources from being deployed in the first place, but it is also a best practice to continuously monitor your existing infrastructure against the same policies, using Forseti and the config_validator scanner.
This will let you catch any non-compliant legacy infrastructure, as well as any resources that were deployed outside of your trusted CI/CD pipeline. If you have any questions or comments about this series, don’t hesitate to reach out.

Useful links:

OPA / Rego:
- Rego Playground (free testing environment)
- OPA official documentation
- OPA language reference
- OPA language cheatsheet
- Open Policy Agent Deep Dive, Seattle 2018

Repositories:
- Terraform Validator source code
- Forseti Terraform module
- Forseti source code
- Config Validator source code
- Config Validator policy library

Networking cost optimization best practices: an overview

Every cloud deployment needs a network over which to move data. Without a network, you can’t view cat videos or upload your selfies, much less allow microservices to talk to one another. Google Cloud provides a global, scalable, flexible network for your cloud-based workloads and services, and how you utilize that network impacts four critical aspects of your deployment: cost, security, performance and availability. When designing a reliable, sound, yet cost-effective network architecture, you’ll want multiple teams within the company to weigh in on these four elements to determine your priorities. The following tips highlight a few considerations you should think about when architecting your network solution. (Note that we’ll focus here on optimizing network cost. Check out our blog for cost optimizations on Cloud Storage and BigQuery.)

Flow and behold

The first step when reviewing your overall networking spend is to understand what you’re using, namely, what traffic is flowing in and out of your Google Cloud Platform (GCP) environment. This is easy to do with VPC Flow Logs, which keep a record of the network flows sent from and received by VM instances. Each flow log entry records details such as source IP, destination IP, and bytes sent and received for each network connection, exactly the type of information needed when trying to understand your network traffic. These logs are collected in Stackdriver Logging, and you can then export them to BigQuery to help visualize your trends. Use cases for VPC Flow Logs include network monitoring, forensics, real-time security analysis, and, for today’s purposes, cost optimization.

When it comes to optimizing networking spend, the most relevant information in VPC Flow Logs is:

- Traffic between regions and zones
- Traffic to specific countries on the Internet
- Top talkers

Here are step-by-step instructions on how to enable VPC Flow Logs.

What’s in a name?
That which we call a region might not cost the same

The information you get from VPC Flow Logs can help you determine where you might be able to save on your existing network costs. For example, geolocation is an important factor to consider when architecting for optimal spend. Not all network charges are created equal; different regions have varying network costs. As well as using VPC Flow Logs, you can also take advantage of the recently released network monitoring, verification and optimization platform, Network Intelligence Center, which allows you to view the network bandwidth in use between regions and geolocations. When transferring data around the world, either to customers or to other internal services in your GCP environment, the ability to drill down and understand your traffic patterns across regions is crucial.

For general internet egress charges, e.g., a group of web servers that serve content to the internet, prices can vary depending on the region where those servers are located. For instance, the price per GB in us-central1 is cheaper than the price per GB in asia-southeast1. Another example is traffic flowing between GCP regions, which can vary significantly depending on the location of those regions, even if it isn’t egressing out to the Internet. For example, the cost to synchronize data between asia-south1 (India) and asia-east1 (Taiwan) is five times as much as synchronizing traffic between us-east1 (South Carolina) and us-west1 (Oregon).

As well as regional considerations, you should also consider which zones your workloads are in; depending on their availability requirements, you may be able to architect them to use intra-zone network traffic at no cost. You read that right: at no cost! Consider VMs that communicate via public, external IP addresses, but that are in the same region or zone.
By configuring them to communicate via their internal IP addresses, you can save the cost of what you would have paid for that traffic communicating via external IP addresses. Keep in mind, you’ll need to weigh any potential network cost savings against the availability implications of a single-zone architecture. Deploying to only a single zone is not recommended for workloads that require high availability, but it can make sense to have certain services use a VPC network within the same zone. One example could be to use a single-zone approach in regions that have higher costs (Asia), but a multi-zone or multi-regional architecture in North America, where the costs are lower.

Once you have established what your network costs are for an average month, you may want to consider a few different approaches to better allocate spending. Some customers re-architect solutions to bring applications closer to their user base, and some employ Cloud CDN to reduce traffic volume and latency, as well as potentially take advantage of CDN’s lower costs to serve content to users. Both are viable options that can reduce costs and/or enhance performance.

To VPN or not to VPN?

Next in line when reviewing overall networking spend is total bytes transferred. Using VPC Flow Logs, you can see the “top talkers” within your environment, and if you’re pushing large amounts of data, you want to ensure that you take advantage of any potential discounts you might be entitled to. We have seen many customers who push large amounts of data on a daily basis from their on-premises environment to GCP, either using a VPN or perhaps directly over the Internet (encrypted with SSL, hopefully!). Some customers, for example, have databases on dedicated, on-prem hardware, whereas their frontend applications serve requests in GCP. If this describes you, consider whether you should leverage a Dedicated Interconnect or Partner Interconnect.
If you push large amounts of data (think TBs or PBs) on a consistent basis, it can be cheaper to establish a dedicated connection than to accrue the costs associated with your traffic traversing the public internet or using a VPN. There are a few architectural considerations to review when selecting an Interconnect, which you can read about in further detail here.

Your network, optimized your way, with Network Tiers

One of Google Cloud’s biggest differentiators is access to Google’s premium network backbone, which is used by default for all services. But you might not need that performance and low latency for all your services. An example might be the distribution of a daily sales report that doesn’t need to be immediately available around the globe. For services where you are willing to trade off between performance and cost, we offer Network Service Tiers. By choosing either Standard or Premium Tier, you can allocate the appropriate connectivity between your services, fine-tuning the network to the needs of your application and potentially reducing costs on services that can tolerate more latency and don’t require an SLA.

There are some limitations when leveraging the Standard Tier for its pricing benefits. At a high level, these include compliance needs around traffic traversing the public internet, as well as HTTP(S), SSL Proxy, and TCP Proxy load balancing, or usage of Cloud CDN. You can read about these in more detail here. After reviewing the recommendations, you’ll be empowered to review your services with your team and determine whether you can benefit from lower Standard Tier pricing without impacting the performance of your external-facing services.

Waste not, want not

The above topics are some of the larger levers you can pull when conducting a networking cost review. But overall, you should ensure that you are taking advantage of one of the greatest cloud benefits: paying only for what you use.
With this in mind, we recommend reviewing the following to ensure you get the most out of your GCP investment:

- Log generation for services like VPC Flow Logs, Firewall Rules Logging, and NAT logging – Enable and customize these logs where possible to reduce costs.
- Private access for enterprise or high-volume customers – Leverage Private Google Access when possible to reduce cost and improve your security posture.
- External IP addresses – Starting in 2020, external IP addresses that don’t fall under the Free Tier will incur a small cost. As a general security best practice, it’s a good idea to use internal IP addresses where applicable. For information on how to migrate to internal IPs, refer to our guides for building internet connectivity for private VMs or setting up a private cluster on Google Kubernetes Engine.

Reviewing the above will ensure you are eliminating wasteful spending within your design, and also that you are taking full advantage of your cloud-based solution.

A packet saved is a penny earned

Balancing costs with performance, availability, and security is no simple feat, often requiring collaboration across multiple teams. There are many approaches to consider, and more often than not, cost optimization is not a one-time review but your application teams’ ongoing philosophy. Hopefully this post gives you food for thought when reviewing your network designs. Click here to learn more about Google Cloud’s networking portfolio. And for more on cost optimization, check out these blogs on cost optimization for Cloud Storage and BigQuery.
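To make the pay-for-what-you-use math concrete, here is a small sketch that estimates egress cost from flow-log-style records of the form (source region, destination region, bytes). The per-GB prices are invented placeholders chosen only to mirror the relative differences discussed above (the 5x India/Taiwan vs. US pair, and free same-zone internal traffic); check the GCP pricing pages for real rates.

```python
# Hypothetical per-GB prices, NOT real GCP rates; they only illustrate
# that cost varies by region pair.
PRICE_PER_GB = {
    ("us-east1", "us-west1"): 0.01,
    ("asia-south1", "asia-east1"): 0.05,       # 5x the US pair, per the example above
    ("us-central1", "us-central1"): 0.00,      # same-zone internal traffic is free
}

def egress_cost(flows):
    """Estimate total cost of (src_region, dst_region, bytes) records."""
    total = 0.0
    for src, dst, nbytes in flows:
        gb = nbytes / 1e9
        # Fall back to a placeholder rate for unlisted pairs.
        total += gb * PRICE_PER_GB.get((src, dst), 0.12)
    return round(total, 2)

flows = [
    ("us-east1", "us-west1", 500e9),       # 500 GB on the cheap pair
    ("asia-south1", "asia-east1", 500e9),  # same volume, 5x the price
]
print(egress_cost(flows))  # → 30.0
```

Aggregating your exported VPC Flow Logs this way (for instance, with a GROUP BY on region pairs in BigQuery) quickly shows which "top talker" paths dominate the bill.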