Moving a publishing workflow to BigQuery for new data insights

Google Cloud's technology powers both our customers and our internal teams. Recently, the Solutions Architect team decided to move an internal process to BigQuery to streamline and better focus efforts across the team. The Solutions Architect team publishes reference guides for customers to use as they build applications on Google Cloud. Our publishing process has many steps, including outline approval, draft, peer review, technical editing, legal review, PR approval and, finally, publishing on our site. This process involves collaboration across the technical editing, legal, and PR teams. With so many steps and people involved, it's important that we collaborate effectively. Our team uses a collaboration tool running on Google Cloud Platform (GCP) as a central repository and workflow for our reference guides.

Increased data needs required more sophisticated tools

As our team of solution architects grew and our reporting needs became more sophisticated, we realized that we couldn't effectively provide the insights we needed directly in our existing collaboration tool. For example, we needed to build and share status dashboards of our reference guides, build a roadmap for upcoming work, and analyze how long our solutions take to publish, from outline approval to publication. We also needed to share this information outside our team, but didn't want to share unnecessary information by broadly granting access to our entire collaboration instance.

Building a script with BigQuery on the back end

Since our collaboration tool provides a robust and flexible REST API, we decided to write an export script that stores the results in BigQuery. We chose BigQuery because we knew we could write advanced queries against the data and then use Data Studio to build our dashboards. Using BigQuery for analysis gave us a scalable solution that is well integrated with other GCP tools and supports both batch loads and real-time inserts using the streaming API.

We used a simple Python script to read the issues from the API and then insert the entries into BigQuery using the streaming API. We chose the streaming API, rather than Cloud Pub/Sub or Cloud Dataflow, because we wanted to repopulate the BigQuery content with the latest data several times a day. The Google API Python client library was an obvious choice, because it provides an idiomatic way to interact with the Google APIs, including the BigQuery streaming API. Since this data would only be used for reporting purposes, we opted to keep only the most recent version of the data as extracted. There were two reasons for this decision:
- Master data: There would never be any question about which version of the data was the master.
- Historical data: We had no use cases that required capturing historical data that wasn't already captured in the extract.

Following common extract, transform, load (ETL) best practices, we used a staging table and a separate production table so that we could load data into the staging table without impacting users of the data. The design called for first deleting all the records from the staging table, loading the staging table, and then replacing the production table with its contents. Note that when you use the streaming API, the BigQuery streaming buffer remains active for about 30 to 60 minutes or more after use, which means that you can't delete or change data during that time.
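To make the load step concrete, here's a minimal sketch of a streaming insert using the Google API Python client library, in the spirit of the script described above. The project, dataset, table, and row fields are hypothetical placeholders, and a real script would read the rows from the collaboration tool's REST API rather than hardcode them.

```python
import google.auth
from googleapiclient import discovery

# Application Default Credentials (e.g., the VM's service account).
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/bigquery"]
)
bigquery = discovery.build("bigquery", "v2", credentials=credentials)

# Hypothetical rows extracted from the collaboration tool's REST API.
issues = [
    {"id": "guide-123", "status": "peer_review", "updated": "2019-07-01"},
    {"id": "guide-456", "status": "published", "updated": "2019-07-02"},
]

# Stream the rows into the staging table; insertId lets BigQuery
# best-effort deduplicate retried rows.
request_body = {
    "rows": [{"insertId": row["id"], "json": row} for row in issues]
}
response = bigquery.tabledata().insertAll(
    projectId="my-project",      # placeholder project
    datasetId="publishing",      # placeholder dataset
    tableId="issues_staging",    # placeholder staging table
    body=request_body,
).execute()

if response.get("insertErrors"):
    raise RuntimeError("Streaming insert failed: %s" % response["insertErrors"])
```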
Since we used the streaming API, we scheduled the load every three hours to balance getting data into BigQuery quickly against being able to delete the data from the staging table during the next load.

Once our data was in BigQuery, we could write SQL queries directly against it or use any of the wide range of integrated tools available to analyze it. We chose Data Studio for visualization because it's well integrated with BigQuery, offers customizable dashboards, provides the ability to collaborate, and, of course, is free. Because BigQuery datasets can be shared, the data became usable by anyone who was granted access and had appropriate authorization. It also meant that we could combine this data with other datasets in BigQuery. For example, we track the online engagement metrics for our reference guides and load them into BigQuery. With both datasets in BigQuery, it was easy to factor the online engagement numbers into our dashboards.

Creating a sample dashboard

One of the biggest reasons we wanted reporting against our publishing process was to track it over time. Data Studio made it easy to build a dashboard with charts of our publication metrics, analyze those metrics over time, and then share the specific dashboards with teams outside ours.

Monitoring the load process

Monitoring is an important part of any ETL pipeline. Stackdriver Monitoring provides monitoring, alerting and dashboards for GCP environments. We opted to use the Google Cloud Logging module in the Python load script, because it generates error logs in Stackdriver Logging that we could use for alerting in Stackdriver Monitoring. We set up a Stackdriver Monitoring Workspace specifically for the project with the load process, created a management dashboard to track any application errors, and set up alerts to send an SMS notification whenever errors appeared in the load process log files.
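For reference, here's a minimal sketch of the error-logging setup described above, using the google-cloud-logging Python module. The load function is a hypothetical stand-in, and the handler wiring shown is the standard pattern rather than our exact script.

```python
import logging

import google.cloud.logging


def load_issues_into_bigquery():
    """Hypothetical stand-in for the actual extract-and-stream step."""
    raise RuntimeError("simulated load failure")


# Route standard Python logging to Stackdriver Logging.
client = google.cloud.logging.Client()
client.setup_logging(log_level=logging.INFO)

try:
    load_issues_into_bigquery()
except Exception:
    # Errors logged here appear in Stackdriver Logging, where a
    # logs-based alerting policy in Stackdriver Monitoring can
    # trigger an SMS notification.
    logging.exception("Publishing-data load into BigQuery failed")
    raise
```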
BigQuery provides the flexibility to meet your business or analytical needs, whether they're petabyte-sized or not. BigQuery's streaming API means that you can stream data directly into BigQuery and provide end users with rapid access to it. Data Studio provides an easy-to-use integration with BigQuery that makes it simple to develop advanced dashboards. The cost-per-query approach means that you'll pay for what you store and analyze, though BigQuery also offers flat-rate pricing if you have a high number of large queries. For our team, we've been able to gain considerable new insights into our publishing process using BigQuery, which have helped us both refine our publishing process and focus more effort on the most popular technical topics. If you haven't already, check out what BigQuery can do using the BigQuery public datasets, and see what else you can do with GCP in our reference guides.

Source: Google Cloud Platform

Container-native load balancing on GKE now generally available

Last year, we announced container-native load balancing, a feature that allows you to create services using network endpoint groups (NEGs) so that requests to your service are load balanced directly to the containers serving them. Since announcing the beta, we have worked hard to improve the performance, scalability and user experience of container-native load balancing, and we're excited to announce that it is now generally available.

Container-native load balancing removes the second hop between the virtual machines running containers in your Google Kubernetes Engine (GKE) cluster and the containers serving the requests, improving efficiency, traffic visibility and container support for advanced load balancer capabilities. The NEG abstraction layer that enables this container-native load balancing is integrated with the Kubernetes Ingress controller running on Google Cloud Platform (GCP). If you have a multi-tiered deployment where you want to expose one or more services to the internet using GKE, you can also create an Ingress object, which provisions an HTTP(S) load balancer and allows you to configure path-based or host-based routing to your backend services.

Figure 1. Ingress support with instance groups vs. with network endpoint groups.

Improvements in container-native load balancing

Thanks to your feedback during the beta period, we've made several improvements to container-native load balancing with NEGs. In addition to its existing advantages over the previous, iptables-based approach, container-native load balancing now also includes:
- Latency improvements: The latency of scaling your load-balanced application down to zero pod backends and then subsequently scaling back up is now over 90% faster. This significantly improves response times for low-traffic services, which can now quickly scale back up from zero pods when traffic arrives.
- Improved Kubernetes integration: Using the Kubernetes pod readiness gate feature, a load-balancer backend pod is considered "Ready" once the load balancer health check for the pod succeeds and the pod is healthy. This ensures that rolling updates proceed only after the pods are ready and fully configured to serve traffic. You can now manage the load balancer and backend pods with native Kubernetes APIs without injecting any unnecessary latency.
- Standalone NEGs (beta): You can now manage your own load balancer (without having to create an HTTP(S)-based Ingress on GKE) using standalone NEGs, allowing you to configure and manage several flavors of Google Cloud Load Balancing. These include TCP proxy or SSL proxy based load balancing for external traffic, HTTP(S) load balancing for internal traffic (beta), and global load balancing using Traffic Director for internal traffic. You can also create a load balancer with hybrid backends (GKE pods and Compute Engine VMs), or one with backends spread across multiple GKE clusters.

Getting started with container-native load balancing

You can use container-native load balancing in several scenarios. For example, you can create an Ingress using NEGs with VPC-native GKE clusters created using alias IPs, which provides native support for pod IP routing and enables advertising prefixes. Check out how to create an Ingress using container-native load balancing.
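To give a feel for the setup, here's a hedged sketch that creates a NEG-enabled Service with the official Kubernetes Python client. The service name, labels, and ports are hypothetical, and the same result is more commonly achieved by applying an equivalent YAML manifest with kubectl.

```python
from kubernetes import client, config

# Assumes kubectl is already configured against your GKE cluster.
config.load_kube_config()

# The cloud.google.com/neg annotation asks the GKE Ingress controller
# to create network endpoint groups for this Service's backends.
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(
        name="hello-service",  # hypothetical name
        annotations={"cloud.google.com/neg": '{"ingress": true}'},
    ),
    spec=client.V1ServiceSpec(
        selector={"app": "hello"},  # hypothetical pod labels
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```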
Then, drop us a line about how you use NEGs, and about other networking features you'd like to see on GCP.

Source: Google Cloud Platform

DACH businesses embrace Google Cloud for digital transformation

Whether they're lifting-and-shifting workloads or taking cloud-native approaches to application development, enterprises are increasingly looking to the cloud to build and grow their businesses. As a result, over the past 12 months we've seen extraordinary momentum in Germany, Austria, and Switzerland—also known as the DACH region—as more and more businesses take advantage of Google Cloud. In March, we launched a new Google Cloud region in Zurich to support businesses in the region. We continue to expand our support of regional and global certifications and standards, now including FINMA-regulated customers in Switzerland. And we continue to work closely with many enterprises in the region, such as METRO AG, and many more.

Yesterday's Cloud Summit in Munich is a great example of the continuing momentum we're seeing. With 2,200 attendees and 45 customer speakers, the summit was an important opportunity for us to connect with—and learn from—businesses in the region. Customers across every industry shared how they're transforming with the power of the cloud. Here are a few highlights.

DLR: Groundbreaking robotics research with the help of Google Cloud infrastructure

Part of Deutsches Zentrum für Luft- und Raumfahrt (DLR), Germany's national aerospace center, the Institute of Robotics and Mechatronics is one of the largest pure robotics research institutes in the world. To advance its work, the institute relies on deep learning techniques, which require a substantial amount of compute resources for training and running simulations. Thanks to Google Cloud, it can now easily configure powerful instances with Compute Engine, even using hundreds of CPU cores if necessary. And it now has the flexibility to spin up specialized configurations when needed—something it couldn't do with in-house servers. You can learn more in DLR's case study.

"With Google Cloud, we can train our robots five to ten times faster than before and really explore things like deep reinforcement learning," says Berthold Bäuml, Head of Autonomous Learning Robots Lab, DLR. "We can do cutting-edge research and we're no longer bound by our computing resources. It's providing totally new opportunities for us."

ARD: From monolithic architecture to modern microservices

A joint organization of Germany's regional public-service broadcasters, the ARD network produces key national and regional television and radio programs, and faces challenges shared by many media organizations aiming to serve customers with the content they want, where and when they want it. Starting from a monolithic, rigid structure, ARD wanted a technology stack that fit its own culture—open and scalable. With Google Cloud, ARD is able to focus on five digital, scalable core products and take advantage of fully managed services, which has allowed it to increase its number of releases from once a year to 50 releases a day. In addition to using GCP as a scalable underlying platform, ARD is using an API-as-a-service approach with Apigee to rethink the way it delivers content to German citizens, making content delivery universally accessible to other broadcasting entities.

Deutsche Börse Group: Embracing a multi-cloud approach to infrastructure

As an international exchange organization and market infrastructure provider, Deutsche Börse Group offers its customers a wide range of products, services and technologies covering the entire value chain of financial markets.
As a result, Deutsche Börse Group has frequently been an early adopter of new technologies that drive its industry forward, whether that's using distributed ledger/blockchain technology in new ways or offering new advanced analytics and artificial intelligence (AI) services to clients. Most recently, Deutsche Börse Group has ramped up its cloud-first strategy, led by its inaugural Chief Cloud Officer, Michael Girg, choosing Google Cloud to help it modernize, develop and operate its enterprise workloads in a more efficient and secure way that also supports compliance.

"As a key technology, cloud lays the foundation for enabling some of Deutsche Börse's major initiatives focusing on new technologies," says Michael Girg, Chief Cloud Officer at Deutsche Börse. "Together with Google Cloud as a strong partner, we are very much looking forward to accelerating cloud adoption and to jointly defining innovative solutions that further push ahead data security for the financial industry." You can read more on their progress in our recent blog post.

MediaMarktSaturn Retail Group: Transforming retail, online and offline

One of the world's largest consumer electronics retailers, the MediaMarktSaturn Retail Group operates more than 1,000 stores across 15 countries in Europe, including a sizeable ecommerce presence. But the company was increasingly finding that its on-premises infrastructure, used to first build its online stores, could not keep up with evolving business and customer demands. So MediaMarktSaturn turned to the cloud, upgrading its infrastructure but also adopting a new way of working that places technology at the heart of its strategy. For MediaMarktSaturn, this meant combining data streams into a new data lake from which it can query data and derive insights with ease, as well as adopting Google Kubernetes Engine (GKE) to manage the Kubernetes clusters that form the backbone of its new online system.

"Google Cloud has paved the way for us to break new ground not only technically, but also in our way of working," says Dr. Johannes Wechsler, Managing Director, MediaMarktSaturn Technology. "Thanks to this strong partnership, we are well equipped to offer our customers a great user experience even in high-load situations, and at the same time deliver new features more quickly."

Looking ahead

We're excited to see what enterprises in Germany, Austria, and Switzerland can do as they embrace Google Cloud. We remain committed to working with them to help them grow and digitally transform their businesses.
Source: Google Cloud Platform

3 steps to detect and remediate security anomalies with Cloud Anomaly Detection

Editor's note: This is the third blog in our six-part series on how to use Cloud Security Command Center. There are links to the first two blogs in the series at the end of this post.

When a threat is detected, every second counts. But sometimes it can be difficult to know whether a threat is present or how to respond. Cloud Anomaly Detection is a built-in Cloud Security Command Center (Cloud SCC) feature that uses behavioral signals to detect security abnormalities, such as leaked credentials or unusual activity, in your GCP projects and virtual machines. In this blog, and the accompanying video, we'll look at how to enable Cloud Anomaly Detection and quickly respond to threats.

1. Enable Cloud Anomaly Detection from Cloud Security Command Center

Cloud Anomaly Detection is not turned on by default. You need to go to Security Sources from the Cloud SCC dashboard and activate it. Keep in mind that to enable a security source, you need the Organization Administrator Cloud IAM role. Once it's turned on, findings will automatically be surfaced and displayed in the Cloud Anomaly Detection card on the Cloud SCC dashboard.

2. View findings in Cloud Security Command Center

Cloud Anomaly Detection can surface a variety of anomalous findings, including:
- Leaked service account credentials: GCP service account credentials that have been accidentally leaked online or compromised.
- Resource used for outbound intrusion: One of the resources or GCP services in your organization is being used for intrusion activities, such as an attempt to break into or compromise a target system. These include SSH brute force attacks, port scans, and FTP brute force attacks.
- Potential compromised machine: A potential compromise of a resource in your organization.
- Resource used for crypto mining: Behavioral signals around a VM in your organization indicate that it might have been compromised and could be in use for crypto mining.
- Unusual activity/connection: Unusual activity from a resource in your organization.
- Resource used for phishing: One of the resources or GCP services in your organization is being used for phishing.

3. Remediate findings from Cloud Security Command Center

After Cloud Anomaly Detection generates a finding, you can click on the finding for more information about what happened, and then use that information to fix the security issue. To learn more about Cloud Anomaly Detection, including how to turn it on and how it can help your organization, check out the video below.
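Beyond the dashboard, you can also retrieve these findings programmatically. Here's a minimal, hedged sketch using the google-cloud-securitycenter Python client (assuming a recent library version); the organization and source IDs are placeholders you'd replace with your own.

```python
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()

# Placeholders: your numeric org ID and the Cloud Anomaly Detection
# source ID shown on the Security Sources page.
org_id = "123456789"
source_id = "987654321"
source_name = f"organizations/{org_id}/sources/{source_id}"

# List the currently active findings for this source.
for result in client.list_findings(
    request={"parent": source_name, "filter": 'state="ACTIVE"'}
):
    finding = result.finding
    print(finding.category, finding.resource_name, finding.event_time)
```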
Previous blogs in this series:
- 5 steps to improve your cloud security posture with Cloud Security Command Center
- Catch web app vulnerabilities before they hit production with Cloud Web Security Scanner

Source: Google Cloud Platform

Google Cloud Firewall Rules Logging: How and why you should use it

Google Cloud Platform (GCP) firewall rules are a great tool for securing applications. Firewall rules are customizable software-defined networking constructs that let you allow or deny traffic to and from your virtual machine (VM) instances. To secure applications and respond to modern threats, firewall rules require monitoring and adjustment over time. GCP Firewall Rules Logging, which Google Cloud made generally available in February 2019, allows network administrators to monitor, verify and analyze the effects of firewall rules in Google Cloud. In this blog (the first of many on this topic), we'll discuss the basics of Firewall Rules Logging, then look at an example of how to use it to identify mislabeled VMs and refine firewall rules with minimal traffic interruption.

GCP Firewall Rules Logging: The basics

Firewall Rules Logging provides visibility to help you better understand the effectiveness of rules in troubleshooting scenarios. It helps answer common questions, like:
- How can I ensure the firewall rules are doing (or not doing) what they were created for?
- How many connections match the firewall rules I just implemented?
- Are firewall rules the root cause of some application failures?

Unlike VPC flow logs, firewall rules logs are not sampled: every connection is logged, subject to some limits (please refer to the appendix for details). The firewall rule log format can be found here. Additionally, network administrators have the option to export firewall logs to Google Cloud Storage for long-term log retention, to BigQuery for in-depth analysis using standard SQL, or to Pub/Sub to integrate with popular security information and event management (SIEM) software, such as Splunk, for detecting and alerting on traffic abnormalities and threats in near real time.

For reference, GCP firewall rules are software-defined constructs with the following properties:
- GCP firewalls are VM-centric. Unlike traditional firewall devices, which are applied at the network edge, GCP firewall rules are implemented at the VM level. This means the firewall rules can exist between your instances and other networks, and also between individual instances within the same VPC.
- GCP firewall rules always have targets. The targets are considered source VMs when defining egress firewall rules, and destination VMs when defining ingress firewall rules. Do not confuse "target" with the "destination" of the traditional firewall concept.
- GCP firewall rules are defined within the scope of a VPC network. There is no concept of subnets when defining firewall rules. However, you can specify source CIDR ranges, which give you better flexibility than subnets.
- Every VM has two immutable implied firewall rules: an implied allow of egress and an implied deny of ingress, both at lowest priority. However, Firewall Rules Logging does not generate any entries for these implied firewall rules.
- While GCP firewall rules support many protocols—including TCP, UDP, ICMP, ESP, AH, SCTP, and IPIP—Firewall Rules Logging only logs entries for TCP and UDP connections.

Firewall best practices
- Follow the least-privilege principle: make firewall rules as tight as possible. Only allow well-documented and required traffic (ingress and egress), and deny all else.
- Use a good naming convention to indicate each firewall rule's purpose.
- Use fewer and broader firewall rule sets when possible, and observe the standard quota of 200 firewall rules per project. The complexity of the firewall also matters: a good rule of thumb is not to throw too many atoms (tags/service accounts, protocols/ports, source/destination ranges) at the firewall rules. Please refer to the appendix for more on firewall quotas and limits.
- Progressively define and refine rules; start with the broader rules first, and then use rules to narrow down to a smaller set of VMs.
- Isolate VMs using service accounts when possible. If you can't do that, use network tags instead, but do not use both. Service account access is tightly controlled by IAM, while network tags are more flexible: anyone with the instanceAdmin role can change them. More on filtering using service accounts versus network tags can be found in our firewall rules overview.
- Conserve network space by planning proper CIDR blocks (segmentations) for your VPC network to group related applications in the same subnet.
- Use firewall rules logging to analyze traffic, detect misconfigurations, and report abnormalities in near real time.

In practice, there are many uses for Firewall Rules Logging. One common use case is to help identify mislabeled VM instances. Let's walk through this scenario in more detail.

Scenario: Mislabeled VM instances

ACME, Inc. is migrating on-prem applications to Google Cloud. Its network admins implemented a shared VPC to centrally manage the entire company's networking infrastructure. There are dozens of applications, run by multiple engineering teams, deployed in each GCP region as multi-tiered applications: user-facing proxies in the DMZ talk to the web servers, which communicate with the application servers, which in turn talk to the database layer.

Each region has multiple subnets. The US-EAST1 region, for example, includes:
- acme-dmz-use1: 172.16.255.0/27
- acme-web-use1: 10.2.0.0/22
- acme-app-use1: 10.2.4.0/22
- acme-db-use1: 10.2.8.0/22

Here are the traffic directions and firewall rules in place for this region:
- The proxy can access web servers in acme-web-use1 (firewall rule: acme-web-use1-allow-ingress-acme-dmz-use1-proxy).
- Web servers can access app servers in acme-app-use1 (firewall rule: acme-app-use1-allow-ingress-acme-web-use1-webserver).
- App servers can access database servers in acme-db-use1 (firewall rule: acme-db-use1-allow-ingress-acme-app-use1-appsvr).

This setup has granular partitions of network space that categorize compute resources by application function, making it possible for firewall rules to control the network space and lock down the infrastructure to comply with the least-privilege principle. For large organizations with thousands of VMs provisioned by dozens of application teams, we use service accounts or network tags to group the VMs, in conjunction with subnet CIDR ranges, to define firewall rules. For simplicity, we use network tags to demonstrate each use case.

The problem

It's not unusual for large enterprises to have hundreds of firewall rules, given the scale of the infrastructure and the complexity of network traffic patterns. With so much going on, it's understandable that application teams sometimes mislabel VMs when they migrate them from on-prem to cloud and scale up applications after migration. The consequences of mislabeling range from an application outage to a security breach. The same problem can arise if we (mis)use service accounts. Going back to our example, an ACME application team mislabeled one of their new VMs as "appsrv" when it should actually be "appsvr". As a result, the web server's requests to access the app servers are denied.

Solution 1

To identify the mislabeling and mitigate the impact quickly, we can enable firewall rule logging for all the firewall rules (for example, by running gcloud compute firewall-rules update RULE_NAME --enable-logging for each rule). Next, we export the logs to BigQuery for further analysis:
1. Create a BigQuery dataset to store the firewall rule log entries.
2. Create a logging sink that exports the firewall rules logs to that dataset. You can also export individual firewall rules by adding filters on jsonPayload.rule_details.reference (for example, a filter like jsonPayload.rule_details.reference="network:NETWORK_NAME/firewall:RULE_NAME" narrows the export to a single rule).
3. When the BigQuery sink is created, GCP automatically generates a service account following the naming convention "p<project_number>-[0-9]*@gcp-sa-logging.iam.gserviceaccount.com", which is used to write the log entries to the BigQuery table. Grant this service account the BigQuery Data Editor role on the dataset.

Once the logs are populated in the BigQuery sink, you can run queries to determine which rule denied which connections.
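As an illustration, here's a hedged sketch of such a query using the google-cloud-bigquery Python client. The project, dataset, and table names are placeholders (log sinks typically create date-sharded tables named after the log), and the field paths follow the firewall rule log format's jsonPayload structure.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table: sinks typically create tables named after the log,
# e.g. compute_googleapis_com_firewall_YYYYMMDD.
query = """
SELECT
  jsonPayload.rule_details.reference AS firewall_rule,
  jsonPayload.connection.src_ip AS src_ip,
  jsonPayload.connection.dest_ip AS dest_ip,
  jsonPayload.connection.dest_port AS dest_port,
  COUNT(*) AS hits
FROM `acme-project.firewall_logs.compute_googleapis_com_firewall_*`
WHERE jsonPayload.disposition = 'DENIED'
GROUP BY firewall_rule, src_ip, dest_ip, dest_port
ORDER BY hits DESC
"""

for row in client.query(query).result():
    print(row.firewall_rule, row.src_ip, row.dest_ip,
          row.dest_port, row.hits)
```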
The BigQuery table will be loaded with the log entries matching the sink's filter, which, in this case, covers all the firewall rules logs, and the table schema maps directly to the log entry's JSON format. To keep it simple, for our example we'll use the log viewer to inspect the log entries instead.

Since ingress denial is implied and the implied rules are not logged, we create a "deny all" rule at priority 65534 to capture and log anything that gets denied. In the log viewer, we can see that "acme-deny-all-ingress-internal" is taking effect, and that "acme-allow-all-ingress-internal" is disabled, so we can ignore it. We can also see that the connection from websvr01 to the new appsvr02 (with the incorrect "appsrv" label) is denied.

While this approach works for this example, it presents two potential problems:
- If we have a large amount of traffic, it generates too much data for real-time analysis. In fact, one of our clients implemented this approach and ended up generating 5TB of firewall logs per day.
- Mislabeled VMs can cause traffic interruptions. The firewall is doing what it is designed to do, but nobody likes outages.

So we need a better approach that addresses both of these issues.

Solution 2

To resolve the potential issues mentioned above, we can create another ingress rule that allows all traffic at priority 65533, and turn it on for a short period of time whenever there are new deployments. In this scenario, we don't need to turn on all of Firewall Rules Logging; in fact, we can turn most of it off to save space. Any allowed connection logged by this rule is a violator, and we don't expect many of them, so the suspects are identified in real time. Now we fix the label, and the connection from websvr01 to appsvr02 works fine. After all the mislabels are fixed, we can turn off the allow-and-capture rule. Everyone is happy… until the next time new resources are added to the network.

Conclusion

With Firewall Rules Logging, we can refine our firewall rules by following a few best practices and identify undesired network traffic in near real time. And in addition to firewall rules logging, we're always working on more tools and features to make managing firewall rules, and network security in general, easier.

Appendix: Firewall quotas and limits

The default quota is 200 firewall rules per project, but it can be raised to 500 through quota requests.
If you need more than 200 firewall rules, we recommend that you review the firewall design to see whether there is a way to consolidate the rules. The upcoming hierarchical firewall rules can be defined at the folder or organization level, and are not counted toward the per-project limit.

The maximum number of source/target tags per firewall rule is 30/70, and the maximum number of source/target service accounts is 10/10. A firewall rule can use network tags or service accounts, but not both.

As mentioned, the complexity of the firewall rules also matters. Anything that is defined in firewall rules, such as source ranges, protocols/ports, network tags, and service accounts, counts toward an aggregated per-network hard limit. This number is in the tens of thousands, so it doesn't concern most customers, except in rare cases where a large enterprise may reach it.

Finally, there is a per-VM maximum number of logged connections in a 5-second interval, depending on the machine type: f1-micro (100), g1-small (250), and other machine types (500 per vCPU, up to 4,000 in total).
Source: Google Cloud Platform

Virtual display devices for Compute Engine now GA

Today, we're excited to announce the general availability (GA) of virtual display devices for Compute Engine virtual machines (VMs), letting you add a virtual display device to any VM on Google Cloud. This gives your VM Video Graphics Array (VGA) capabilities without having to use GPUs, which can be powerful but also expensive. Many solutions, such as system management tools, remote desktop software, and graphical applications, require you to connect to a display device on a remote server. Compute Engine virtual displays allow you to add a virtual display to a VM at startup, as well as to existing, running VMs. For Windows VMs, the drivers are already included in the Windows public images; for Linux VMs, the feature works with the default VGA driver. Plus, the feature is offered at no extra cost.

We've been hard at work with partners itopia, Nutanix, Teradici and others to help them integrate their remote desktop solutions with Compute Engine virtual displays, so that our mutual customers can leverage Google Cloud Platform (GCP) for their remote desktop and management needs. Customers such as Forthright Technology Partners and PALFINGER Structural Inspection GmbH (StrucInspect) are already benefiting from partner solutions enabled by virtual display devices.

"We needed a cloud provider that could effectively support both our 3D modelling and our artificial intelligence requirements with remote workstations," said Michael Diener, Engineering Manager for StrucInspect. "Google Cloud was well able to handle both of these applications, and with Teradici Cloud Access Software, our modelling teams saw a vast improvement in virtual workstation performance over our previous solution. The expansion of GCP virtual display devices to support a wider range of use cases and operating systems is a welcome development that ensures customers like us can continue to use any application required for our client projects."

Our partners are equally excited about the general availability of virtual display devices.

"We're excited that the GCP virtual display feature is now GA because it enables our mutual customers to quickly leverage itopia CAS with Google Cloud to power their Virtual Desktop Infrastructure (VDI) initiatives," said Jonathan Lieberman, itopia co-founder and CEO.

"With the new virtual display feature, our customers get a much wider variety of cost-effective virtual machines (versus GPU VMs) to choose from in GCP," said Carsten Puls, Sr. Director, Frame at Nutanix. "The feature is now available to our joint customers worldwide in our Early Access of Xi Frame for GCP."

Now that virtual display devices are GA, we welcome you to start using the feature in your production environment. For simple steps on how to use a virtual display device when you create a VM instance, or how to add one to a running VM, please refer to the documentation.
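For a programmatic starting point, here's a minimal, hedged sketch that creates a VM with a virtual display device enabled, using the Compute Engine API via the Google API Python client. The project, zone, machine type, and image are placeholders; the relevant part is the displayDevice field.

```python
import google.auth
from googleapiclient import discovery

credentials, _ = google.auth.default()
compute = discovery.build("compute", "v1", credentials=credentials)

# Placeholders for your project and zone.
project, zone = "my-project", "us-central1-a"

config = {
    "name": "remote-desktop-vm",
    "machineType": f"zones/{zone}/machineTypes/n1-standard-4",
    # The virtual display device: VGA capability without a GPU.
    "displayDevice": {"enableDisplay": True},
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-9",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

operation = compute.instances().insert(
    project=project, zone=zone, body=config
).execute()
print("Instance creation started:", operation["name"])
```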
Source: Google Cloud Platform

Deutsche Bӧrse Group chooses Google Cloud for future growth

Deutsche Börse Group, an international exchange organisation and innovative market infrastructure provider, has a history of being an early adopter of new technologies that drive its industry forward. Over the last three years, the company has been leading a new charge—the adoption of cloud computing—both inside the company and across the industry. Deutsche Börse is now partnering with Google Cloud to digitize its own business, as well as to increase the usage and acceptance of cloud technology across the financial services industry.

As a critical infrastructure provider that drives global markets, Deutsche Börse has a tough balance to strike. On one hand, it needs to keep innovating and expanding its own portfolio of offerings for the clients that depend on its services. However, that innovation can't come at the expense of the high level of security and reliability that Deutsche Börse and global financial markets demand, while taking regulatory requirements into account. This is a balance the company takes seriously and has executed well on over the years.

In 2018, Deutsche Börse laid out its growth strategy, called Roadmap 2020, which focuses on three key pillars: organic growth in existing markets; M&A expansion into new areas; and new technology investments to maintain its leadership position. These investments include distributed ledger/blockchain technology, robotics and AI, advanced analytics, and cloud computing, which the company sees as a foundation for many of these growth initiatives.

The cloud as a foundation for growth

Deutsche Börse has been preparing for and moving to the cloud for more than three years now, but has really ramped up its cloud-first strategy over the last year, led by Dr. Christoph Böhm, the company's Chief Information Officer and Chief Operating Officer. Deutsche Börse follows a multi-vendor strategy for cloud usage and chose Google Cloud as its partner to further modernize, develop and operate its enterprise workloads in an efficient, secure and compliant way. There are four ways the company will benefit from its move to the cloud:
- Increased speed and agility: Provisioning happens in minutes rather than months. This means Deutsche Börse's IT team can develop and deploy services faster than ever before, especially when it comes to new services that use the emerging technologies mentioned in the company's Roadmap 2020 vision. It's a win for everyone involved: developers don't have to wait on infrastructure changes to start moving, clients get new features and services faster, and the company is better equipped to adapt to changing market conditions.
- Increased efficiency and scale: Like many large enterprises, Deutsche Börse is looking at acquisitions as a way to expand into new areas and fuel growth. The openness and interoperability of cloud technology make it easier to bring on these acquired companies and integrate new solutions into Deutsche Börse's broader portfolio. The company can also tap into the cloud's economies of scale, using existing infrastructure and technology to save time and money in order to focus on what matters most: serving its clients.
- A leap forward in emerging technologies: Google Cloud's global scale and innovative technology will allow Deutsche Börse to accelerate its push into distributed ledger/blockchain technology, robotics, AI and advanced analytics.
- Additional transparency and security: For a company whose mission is to create trust in the markets of today and tomorrow, security and compliance are of utmost importance.
Deutsche Börse and Google Cloud are committed to addressing the regulatory requirements of the financial industry and have worked closely together to ensure workloads are migrated in a safe and secure manner.

"As part of our collaboration with Google Cloud, we are looking forward to jointly defining unique data security solutions for the financial industry," said Dr. Böhm. "We are excited to continue our journey into the cloud, driving data security and compliance for cloud services to the next level, to the benefit of our customers as well as our company."

Security and compliance in financial services are always a moving target as the threat landscape changes, new technologies arise, and legislation evolves. That's why Google Cloud and Deutsche Börse are looking to set the standard when it comes to trust and the cloud. We look forward to a long and fruitful partnership.
Source: Google Cloud Platform

Protecting your GCP infrastructure at scale with Forseti Config Validator

One of the greatest challenges customers face when onboarding to the cloud is how to control and protect their assets while letting their users deploy resources securely. In this series of four articles, we'll show you how to start implementing your security policies at scale on Google Cloud Platform (GCP). The goal is to write your security policies as code once and for all, and to apply them both before and after you deploy resources in your GCP environment.

In this first post, we'll discuss two open-source tools that can help you secure your infrastructure at scale and scan for non-compliant resources: Forseti and Config Validator. You can see them in action in this live demo from Alex Sung, PM for Forseti at Google Cloud. In follow-up articles, we'll go over how you can use policy templates to add policies to your Forseti scans on your GCP resources (using the enforce_label template as an example). Then we'll explain how to write your own templates, before expanding to securing your deployments by applying your policies in your CI/CD pipelines using the terraform-validator tool.

Scanning for violations with Forseti and the config_validator scanner

Cloud environments can be very dynamic, so it's a best practice to use Forseti to scan your GCP resources on a regular basis (a new scan runs every two hours by default) and evaluate them for violations. In this example, Forseti will forward its findings to Cloud Security Command Center (Cloud SCC) using a custom notifier. Cloud SCC also integrates with the most popular security tools within the Google Cloud ecosystem, like DLP, Cloud Security Scanner and Cloud Anomaly Detection, as well as third-party tools (Chef Automate, Cloudflare, Dome9, Qualys, etc.). This provides a single pane of glass for your security and operations teams to look for violations.

At a high level, here's what you need to do to get the Forseti integration working:
1. Deploy a basic Forseti infrastructure, with the config_validator scanner enabled, in a dedicated project.
2. Add a new Cloud SCC connector for Forseti manually via the UI (the alternative is to use the API directly at this point).
3. Update your Forseti notifier configuration to send the violations to Cloud SCC.
4. Add your custom policy library to the Forseti server GCS bucket so that the next scan applies your constraints to your infrastructure. You can use Google's open-source policy-library as a starting point.

Let's go over these steps in greater detail.

1. Forseti initial setup

The official Forseti documentation lists a few options for deploying Forseti in your organization. A good option is the Forseti Terraform module, since it's easy to maintain and since Terraform templates are easy to deploy from a CI/CD pipeline, as you'll see in later posts. Another alternative is to follow this simple tutorial for the Terraform module (it includes a full Cloud Shell walkthrough). There are 139 inputs (as of v2.2.0) you can play with to configure your Forseti deployment if you feel like it; for this demo, we recommend using the default values for most of them.

First, clone the repo. Then set the inputs you need in a new terraform.tfvars file.

Note: Make sure your credential file is valid and corresponds to a service account with the right permissions, unless you are leveraging an existing CI/CD pipeline that handles that part for you.
Check out the module helper script if you need to create the service account using your own credentials.

You can now test your setup. First, run terraform init. Then create a Terraform plan from these templates and save it as a file. If everything looks good, apply the plan with terraform apply. You now have a Forseti client and a Forseti server in your project (among many other things, like a Cloud SQL instance and Cloud Storage buckets).

2. Setting up Cloud SCC

At this point, you'll need to follow these steps to configure Cloud SCC to receive Forseti notifications. You simply need to create a new source, which you'll use in your Forseti configuration. Note: Stop at step 4 (do not follow step 5) in the Cloud SCC setup instructions, as you'll do this part using Terraform instead of manually updating the Forseti server. Once you've followed those steps to add the Forseti Cloud SCC Connector as a new security source, take note of its source ID and service account for the next step.

3. Updating the Forseti configuration

Now you'll need to update the Forseti infrastructure to configure the server to send notifications to Cloud SCC, by adding the Cloud SCC source ID to your terraform.tfvars file. If you run terraform plan and terraform apply again, your Forseti server should now be correctly configured. You can check the /home/ubuntu/forseti-security/configs/forseti_conf_server.yaml file on the Forseti server to see the changes, or run the forseti server configuration get command. Then, add your policy library to let the config_validator scanner check for violations once everything is set up.

4. Setting up the config_validator scanner in Forseti

Now you need to import your policy-library folder into the Forseti server Cloud Storage bucket and reload the server configuration. Please refer to the Config Validator user guide to learn more about these steps. Then, once the config_validator scanner is enabled, you can add your own constraints by updating the Forseti Cloud Storage server bucket, following these instructions.

Note: All of these steps should be automated in your CI/CD pipeline, so that any merge to your policy-library repository triggers a build that updates this bucket (a sketch of such an upload step appears near the end of this post). As a general rule, constraints need to be added in the policies/constraints folder and must use a template from the policies/templates folder. You can also check that the config-validator service is running and healthy on the server.

Now you can test your setup by running a scan and sending the violations manually to Cloud SCC. This just confirms that everything is working as expected, and avoids waiting until the next scheduled scan to troubleshoot. The traditional way to query a Forseti server is to SSH into the Forseti client, use the console UI, and create a new model based on the latest inventory (or create a new inventory if you need to capture newly created resources). Using this model, you can run the server scanners manually and finally run the notifier command to send the results to Cloud SCC. A quicker way is to run the same script that runs automatically on the server every two hours.
Simply SSH into the server and run it manually from /home/ubuntu/forseti-security. This gets the latest data from the Cloud Storage bucket and runs all the steps mentioned earlier (create a model from the latest inventory, run the scanners, and then run the notifiers) in an automated fashion. Once it runs successfully, you can check in Cloud SCC which violations (if any) were found. Since you haven't yet added any custom constraints to the policy-library/policies/constraints folder, the config_validator scanner shouldn't find any violations at this point. If you run into issues in any of the setup steps, please read the troubleshooting tips below for common problems.

Troubleshooting tips

Forseti install issues

If you do not see the forseti binary when you SSH into the client or the server, check your various log files to see whether the install was successful; a missing binary is usually a red flag that your Forseti installation failed. You cannot move forward from there; you need to fix the situation first. Most of the useful logs are in /var/log: syslog, cloud-init.log, cloud-init-output.log and forseti.log. Don't hesitate to run terraform destroy and double-check every variable you passed to the module, to rule out permissions issues.

Config Validator issues

Forseti runs scanners independently, based on the server configuration file. If everything is configured properly, running the forseti scanner command should show, among other things, the config_validator scanner completing successfully. If the config_validator scanner does not run, check the Forseti server configuration file (/home/ubuntu/forseti-security/configs/forseti_conf_server.yaml, under scanners) to see whether it's enabled. Also check that the running configuration has the same value, using forseti server configuration get | grep --color config_validator to make it easier to spot. Finally, verify that the config-validator service is up and running. If your latest constraint changes are not automatically reflected in your scan results (even though they should be), upload the latest version to the Cloud Storage bucket and restart the config-validator service on the server.

Cloud SCC issues

If you don't see the Forseti connector in your Cloud SCC UI, restart the steps to enable the Forseti connector in Cloud SCC, or check that your connector is enabled in the settings. If you don't receive the violations you can see on the Forseti server, make sure that the Forseti server's service account has the Security Center Findings Editor role assigned at the organization level.

Next steps

At this point, you are ready to add your own constraints to your policy-library and start scanning your infrastructure for violations against them. The Forseti project offers a great list of sample constraints you can use freely to get started. In the next article of this series, we will add a new constraint to scan for labels in your existing environment. This can prove quite useful to ensure your environment is what you expect it to be (no shadow infrastructure, for instance) and lets you react quickly whenever a non-compliant (or, in this case, mislabeled) resource is detected.
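As mentioned in section 4 above, the policy upload should be automated in your CI/CD pipeline. Here's a minimal, hedged sketch of that step using the google-cloud-storage Python client; the bucket name and local path are placeholders, and your build system would run something like this on each merge to the policy-library repository.

```python
import pathlib

from google.cloud import storage

# Placeholders: the Forseti server bucket created by the Terraform
# module and a local checkout of your policy-library repository.
BUCKET_NAME = "forseti-server-abc123"
LOCAL_POLICY_DIR = pathlib.Path("policy-library")

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

# Mirror the policies/ folder (constraints and templates) into the
# bucket, preserving the relative layout Forseti expects.
for path in (LOCAL_POLICY_DIR / "policies").rglob("*.yaml"):
    blob = bucket.blob(f"policy-library/{path.relative_to(LOCAL_POLICY_DIR)}")
    blob.upload_from_filename(str(path))
    print("Uploaded", blob.name)
```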
Useful links

Forseti / Config Validator:
- Forseti Config Validator overview
- User guide
- Writing your own custom constraint templates

Repositories:
- Forseti Terraform module
- Forseti source code
- Config Validator source code
- Config Validator policy library

Source: Google Cloud Platform

WPP unlocks the power of data and creativity using Google Cloud

Over the past year, our customers have shared many stories with me about how cloud technology is transforming their businesses. One theme that frequently comes up is how the cloud enables marketers to deliver better customer experiences across online and offline campaigns, email, apps, websites, and more. By stitching together these consumer touchpoints via the cloud, marketers can ultimately produce more seamless, consistent experiences—and better outcomes for the businesses they support.

One of the companies at the forefront of this transformation is WPP. As the world's largest advertising holding company, WPP operates across 112 countries and supports nearly three-quarters of the Fortune 500 with media, creative, public relations, and marketing analytics expertise. It can be challenging for a company operating at this scale to manage and unlock the value of its data across its businesses; information can become siloed, with valuable insights lost within the organization. In 2018, WPP CEO Mark Read recognized this challenge and set forth a new vision: by better aligning its technology, creativity and talent, WPP aims to deliver transformative experiences for audiences and superior results for its clients.

"Creativity and technology are the two key pillars of WPP's future strategy. Creativity is what differentiates us and technology allows us to scale. In the first year of our transformation journey we have invested significantly in our technology capabilities and our strong partnership with Google Cloud is key to helping us realise our vision. Their vast experience in advertising and marketing combined with their strength in analytics and AI helps us to deliver powerful and innovative solutions for our clients," said Mark Read, WPP CEO.

To fast-track its business transformation, WPP chose Google Cloud for our technology and expertise, and is focusing on three key initiatives:
- Campaign governance: Creating better ways of working through cloud-driven automation for campaign set-up, creative management, reporting and optimization across the WPP network.
- Customer data management: Bringing together data points from the customer, market intelligence and WPP data into an open data platform to enable better insights, planning and activation.
- WPP AI: Utilizing Google Cloud's ML tools and technology to help fuel innovation in WPP's analytics, campaign optimization, content intelligence and customer experience practices.

WPP is deploying Google Cloud across multiple projects—from building a media planning stewardship system, to improving campaigns with tools for image recognition, sentiment analysis, and natural language processing. By incorporating cloud technology into WPP's daily practices, teams can speed up their time to insight and uncover new opportunities for clients. And by connecting Google Cloud to other products like the Google Marketing Platform, WPP can deliver better experiences for its audiences across media and marketing.

WPP has already begun putting its data-forward thinking to work. For example, Wunderman Thompson, a global agency within WPP, worked with GlaxoSmithKline to develop the Theraflu Flu Tracker. Using statistical data from Mexico's National Institute of Epidemiology, along with weather data and other indicators, it developed deep learning models on Google Cloud that predicted where and when flu cases would occur in Mexico, with up to 97% accuracy.
Wunderman turned this knowledge into relevant digital ads that communicated the risk of flu in Mexico's 32 federal entities. The campaign increased e-commerce sales by nearly 200%, won a Bronze Lion at Cannes, and helped people be better informed about their flu risk.

This is just one example of how WPP is tapping into data, obtaining insights at scale and using creativity to produce meaningful business results—and it's only the beginning. We are proud to collaborate with WPP to deliver truly transformative experiences to consumers using Google Cloud.
Source: Google Cloud Platform

A CIO’s guide to cloud success: decouple to shift your business into high gear

They say 80% of success is showing up—but unfortunately for enterprises moving to the cloud, this doesn't always hold up. A recent McKinsey survey, for example, found that despite migrating to the cloud, many enterprises are nonetheless "falling short of their IT agility expectations." Because CTOs and CIOs are struggling to increase IT agility, many organizations are unable to achieve their larger business goals. McKinsey notes that 95% of CIOs indicated that the majority of the C-suite's overall goals depend on them.

The disconnect between moving to the cloud and successful digital transformation can be traced back to the way most organizations adopt cloud: renting pooled resources from cloud vendors or investing in SaaS subscriptions. By adopting cloud in this cookie-cutter way, an enterprise basically keeps doing what it's always done—perhaps just a little faster and a little more efficiently.

But we're entering a new age. Cloud services are increasingly about intelligence, automation, and velocity—not just the economies of scale offered by big providers renting out their infrastructure. As McKinsey notes, enterprises sometimes stumble because they use the cloud for scale but do not take advantage of the agility and velocity benefits it provides. At its core, achieving velocity and agility isn't about where an application is hosted so much as how fast, freely, and efficiently enterprises can launch and adjust strategies, whether creating ways to interact with customers on new technology platforms, quickly adding requested features to apps, or monetizing data. This in turn relies on decoupling the dependencies between different systems and minimizing the amount of manual coordination that enterprise IT typically has to perform. The result is more loosely coupled distributed systems that are far better equipped for today's dynamic technology landscape. This concept of decoupling, and how it can accelerate business results, drives much of what we do at Google—and it has strongly informed how we built Anthos, our open-source-based multi-cloud platform that lets enterprises run apps anywhere while achieving the elusive IT agility and velocity that enterprises crave.

Decoupling = agility: shift your development into high gear

Migrating to the cloud does not, by default, transform an enterprise, because digital transformation isn't about the cloud itself. Rather, it's about changing the way software is built and the consequent explosion in new business strategies that software can support—from selling products via voice assistants, to exposing proprietary data and functionality to partners at scale, to automating IT administration and security operations that used to require manual oversight. Specifically, modern software development eschews "monolithic" application architectures whose design makes it difficult to update or reuse functionality without impacting the entire application. Instead, developers increasingly build applications by assembling small, reusable, independently deployable microservices. This shift not only makes software easier to reuse, combine, and modify (which can help an enterprise be more responsive to changing business needs), but also lets developers work in small parallel teams rather than large groups (which helps them create and deploy applications much faster).
What's more, microservices exposed as APIs can help developers leverage resources from a range of providers spread across many different clouds, giving them the tools to create richer applications and connected experiences. This decoupling of services from an application, and of developers from one another, is often done via containers. By abstracting applications and libraries from the underlying operating system and hardware, containers make it easier for one team of developers to focus on its work without worrying about what any of the teams it collaborates with are doing.

Containers also represent another important form of decoupling, one that can dramatically change the relationship among an IT department, servers, and maintenance. Thanks to containers, for example, many applications can reside on the same server without impacting one another, which reduces the need for application-specific hardware deployments. Containers can also be ported from one machine to another, opening opportunities for developers to create applications on-premises and scale them via the cloud, or to move applications from one cloud to another as needs change. This abstraction from the hardware they run on is one reason containers are often referred to as "cloud-native."

This overview only scratches the surface, but the point is: by decoupling functionality and creating new architectures built around loosely coupled distributed systems, enterprises can empower their developers to work faster in smaller, parallel teams and unlock the IT agility through which modern, software-driven business strategies are executed.

But doesn't decoupling increase complexity?

Containers and distributed systems offer many advantages, but adoption isn't as simple as flipping a switch. Decomposing fat applications into hundreds of smaller services can increase an enterprise's agility, but orchestrating all those services can be tremendously complicated, as can authenticating their users and protecting against threats. When millions of microservices are communicating with one another, it becomes literally impossible to put a human being in the middle of those processes; automated solutions are required. Many enterprises consequently struggle not only with governance across these distributed environments, but also with identifying the right solutions to put in place.

Moreover, not everything within a large enterprise will evolve at the same pace. Running containers in the cloud can help an enterprise focus on building great applications while handing off infrastructure management to a vendor. In fact, teams in almost every large enterprise are already operating this way—but other teams accustomed to legacy approaches may require a more incremental transition. Additionally, enterprises may have a variety of reasons, whether strategic or regulatory, for keeping data on-premises—but they may still want ways to apply cloud-based analytics and machine learning services to that data and otherwise merge the cloud with their on-premises assets. Assembling the orchestration, management, and monitoring solutions for such deployments has historically been difficult. Another significant challenge is that though containers are intrinsically portable, the various public clouds provide different platforms, which can make moving containers—let alone giving developers and administrators consistent experiences—quite difficult.
But doesn’t decoupling increase complexity?

Containers and distributed systems offer many advantages, but adoption isn’t as simple as flipping a switch. Decomposing monolithic applications into hundreds of smaller services can increase an enterprise’s agility, but orchestrating all those services can be tremendously complicated, as can authenticating their users and protecting against threats. When millions of microservices are communicating with one another, it becomes impossible to put a human being in the middle of those processes; automated solutions are required. Many enterprises consequently struggle not only with governance across these distributed environments, but also with identifying the right solutions to put in place.

Moreover, not everything within a large enterprise will evolve at the same pace. Running containers in the cloud can help an enterprise focus on building great applications while handing off infrastructure management to a vendor. In fact, teams in almost every large enterprise are already operating this way, but other teams accustomed to legacy approaches may require a more incremental transition. Additionally, enterprises may have a variety of reasons, whether strategic or regulatory, for keeping data on-prem, yet still want ways to apply cloud-based analytics and machine learning services to that data and otherwise merge the cloud with their on-prem assets. Assembling the orchestration, management, and monitoring solutions for such deployments has historically been difficult. Another significant challenge is that though containers are intrinsically portable, the various public clouds provide different platforms, which can make moving containers, let alone giving developers and administrators consistent experiences, quite difficult. Many open-source options are not the panacea they once seemed, because the open-source version of a solution and the managed deployment sold by a cloud provider may be meaningfully different. These challenges can be particularly vexing because enterprises want the flexibility to change cloud vendors, utilize multiple clouds, and otherwise avoid lock-in.

Helping enterprises enjoy the benefits of distributed systems while avoiding these challenges shaped our development of Anthos.

Anthos: Agility minus the complexity

Google runs multiple web services with billions of users and is an enormously complex organization whose IT systems connect tens of thousands of employees, contractors, and partners. No surprise, then, that we’ve spent a lot of time solving the puzzle of distributed systems and their dynamic, loosely coupled components. For example, we open-sourced Kubernetes, the de facto standard for container orchestration, and Istio, a leading service mesh for managing microservices; both are based on internal best practices, and both are major components of Anthos. Istio provides systematic, centralized management for microservices and enables what is arguably the most important form of decoupling: policies from services. Developers supported by Istio are free to write code without encoding policies into their microservices, allowing administrators to change policies in a controlled rollout without redeploying individual services. This automates away the expensive, time-consuming coordination and bureaucracy traditionally required for IT governance and helps accelerate developer velocity.
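Here’s a brief sketch of what that policy decoupling can look like in practice, under several assumptions: an Istio-enabled cluster, the hypothetical “pricing” service from the earlier examples, and an equally hypothetical “checkout” caller. The access rule is expressed as a standalone Istio AuthorizationPolicy resource, applied here with the kubernetes Python client; the service’s own code and container image never change when the rule does.

```python
# Hypothetical sketch: an Istio AuthorizationPolicy that lives outside the
# service it governs. Administrators can create, tighten, or remove it
# without rebuilding or redeploying the "pricing" service itself.
from kubernetes import client, config

config.load_kube_config()

policy = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "AuthorizationPolicy",
    "metadata": {"name": "pricing-allow-checkout", "namespace": "default"},
    "spec": {
        # Select the workload the policy applies to.
        "selector": {"matchLabels": {"app": "pricing"}},
        "action": "ALLOW",
        # Only the (hypothetical) checkout service account may call it.
        "rules": [
            {"from": [{"source": {"principals": [
                "cluster.local/ns/default/sa/checkout"
            ]}}]}
        ],
    },
}

# Istio policies are Kubernetes custom resources, so the generic
# custom-objects API can apply them.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="security.istio.io",
    version="v1beta1",
    namespace="default",
    plural="authorizationpolicies",
    body=policy,
)
```

The rollout of a stricter or looser rule is an administrative action against the mesh, not a development task, which is what removes the coordination overhead described above.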
Recognizing that enterprises demand choice and openness, Anthos launched with hybrid support and will soon include multi-cloud functionality as well. All options offer simplified management via single-pane-of-glass views, policy-driven controls, and a consistent experience across environments, whether on Google Cloud Platform, in a corporate data center with Anthos deployed on VMware, or, after our coming update, in a third-party cloud such as Azure or AWS. Because Anthos is software-based, on-prem deployments don’t require stack refreshes, letting enterprises utilize existing hardware investments while ensuring developers and administrators have a consistent experience, regardless of where workloads are located or whose hardware they run on.

We’re already seeing fantastic momentum with customers using Anthos. For example, KeyBank, a superregional bank that’s been in business for almost 200 years, is adopting Anthos after using containers and Kubernetes for several years for customer-facing applications. “The speed of innovation and competitive advantage of a container-based approach is unlike any technology we’ve used before,” said KeyBank’s CTO Keith Silvestri and Director of DevOps Practices Chris McFee in a recent blog post, adding that the technologies also helped the bank spin up infrastructure on demand when traffic spiked, such as during Black Friday or Cyber Monday. KeyBank chose Anthos to bring this agility and “burstability” to the rest of its IT operations, including internal-facing applications, while staying as close as possible to the open-source version of Kubernetes. “We deploy Anthos locally on our familiar and high-performance Cisco HyperFlex hyperconverged infrastructure,” Silvestri and McFee noted. “We manage the containerized workloads as if they’re all running in GCP, from the single source of truth, our GCP console.”

Anthos includes much more, such as Migrate for Anthos, which auto-migrates virtual machines into containers in Google Kubernetes Engine (GKE), and an ecosystem of more than 40 hardware and software partners. But as the preceding attests, at the highest level, the platform helps enterprises balance developer agility, operational efficiency, and platform governance by facilitating the decoupling central to successful digital transformation:

Infrastructure is decoupled from the applications
Teams are decoupled from one another
Development is decoupled from operations
Security is decoupled from development and operations

Successful decoupling minimizes the need for manual coordination, cuts costs, reduces complexity, and significantly increases developer velocity, operational efficiency, and business productivity. It delivers a framework, implementation, and operating model to ensure consistency across an open, hybrid, and multi-cloud future, a future Anthos has been built to serve.

Check out McKinsey’s report “Unlocking Business Acceleration in a Hybrid Cloud World” for more about how hybrid technologies can accelerate digital transformation, and tune in to our “Cloud OnAir with Anthos” session to learn even more about how Anthos is helping enterprises digitally transform, including special appearances by KeyBank and OpenText!