Virtual display devices for Compute Engine now GA

Today, we’re excited to announce the general availability (GA) of virtual display devices for Compute Engine virtual machines (VMs), letting you add a virtual display device to any VM on Google Cloud. This gives your VM Video Graphics Array (VGA) capabilities without having to use GPUs, which can be powerful but also expensive. Many solutions, such as system management tools, remote desktop software, and graphical applications, require you to connect to a display device on a remote server. Compute Engine virtual displays let you add a virtual display to a VM at startup, as well as to existing, running VMs. For Windows VMs, the drivers are already included in the Windows public images; for Linux VMs, this feature works with the default VGA driver. Plus, this feature is offered at no extra cost.

We’ve been hard at work with partners itopia, Nutanix, Teradici and others to help them integrate their remote desktop solutions with Compute Engine virtual displays, so that our mutual customers can leverage Google Cloud Platform (GCP) for their remote desktop and management needs. Customers such as Forthright Technology Partners and PALFINGER Structural Inspection GmbH (StrucInspect) are already benefiting from partner solutions enabled by virtual display devices.

“We needed a cloud provider that could effectively support both our 3D modelling and our artificial intelligence requirements with remote workstations,” said Michael Diener, Engineering Manager for StrucInspect. “Google Cloud was able to handle both of these applications well, and with Teradici Cloud Access Software, our modelling teams saw a vast improvement in virtual workstation performance over our previous solution. The expansion of GCP virtual display devices to support a wider range of use cases and operating systems is a welcome development that ensures customers like us can continue to use any application required for our client projects.”

Our partners are equally excited about the general availability of virtual display devices.

“We’re excited that the GCP Virtual Display feature is now GA because it enables our mutual customers to quickly leverage itopia CAS with Google Cloud to power their Virtual Desktop Infrastructure (VDI) initiatives,” said Jonathan Lieberman, itopia Co-Founder & CEO.

“With the new Virtual Display feature, our customers get a much wider variety of cost-effective virtual machines (versus GPU VMs) to choose from in GCP,” said Carsten Puls, Sr. Director, Frame at Nutanix. “The feature is now available to our joint customers worldwide in our Early Access of Xi Frame for GCP.”

Now that virtual display devices are GA, we welcome you to start using the feature in your production environment. For simple steps on how you can use a virtual display device when you create a VM instance or add it to a running VM, please refer to the documentation.
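As a quick illustration (a sketch, not quoted from the documentation), creating a VM with a virtual display device from the command line looks roughly like this; the instance name and zone are placeholders:

```
# Create a VM with a virtual display device attached -- no GPU required.
gcloud compute instances create demo-display-vm \
    --zone=us-central1-a \
    --enable-display-device
```

To add a display device to an existing VM, the documentation describes updating the instance after stopping it; consult the docs for the exact steps for your gcloud version.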
Source: Google Cloud Platform

Deutsche Börse Group chooses Google Cloud for future growth

Deutsche Börse Group, an international exchange organisation and innovative market infrastructure provider, has a history of being an early adopter of new technologies that drive its industry forward. Over the last three years, the company has been leading a new charge, the adoption of cloud computing, both inside the company and across the industry. Deutsche Börse is now partnering with Google Cloud to digitize its own business, as well as to increase the usage and acceptance of cloud technology across the financial services industry.

As a critical infrastructure provider that drives global markets, Deutsche Börse has a tough balance to strike. On one hand, it needs to keep innovating and expanding its own portfolio of offerings for the clients that depend on its services. However, that innovation can’t come at the expense of the high level of security and reliability that Deutsche Börse and global financial markets demand, while taking regulatory requirements into account. This is a balance the company takes seriously and has executed well on over the years.

In 2018, Deutsche Börse laid out its growth strategy, called Roadmap 2020, which focuses on three key pillars: organic growth in existing markets; M&A expansion into new areas; and new technology investments to maintain that leadership position. These investments include distributed ledger/blockchain technology, robotics and AI, advanced analytics, and cloud computing, which the company sees as a foundation for many of these growth initiatives.

The cloud as a foundation for growth

Deutsche Börse has been preparing for and moving to the cloud for more than three years now, but has really ramped up its cloud-first strategy over the last year, led by Dr. Christoph Böhm, the company’s Chief Information Officer & Chief Operating Officer. Deutsche Börse follows a multi-vendor strategy for cloud usage and chose Google Cloud as its partner to further modernize, develop and operate its enterprise workloads in an efficient, secure and compliant way. There are four ways the company will benefit from its move to the cloud:

- Increased speed and agility: Provisioning happens in minutes rather than months. This means Deutsche Börse’s IT team can develop and deploy services faster than ever before, especially when it comes to new services that use the emerging technologies mentioned in the company’s Roadmap 2020 vision. It’s a win for everyone involved: developers don’t have to wait on infrastructure changes to start moving, clients get new features and services faster, and the company is better equipped to adapt to changing market conditions.
- Increased efficiency and scale: Like many large enterprises, Deutsche Börse is looking at acquisitions as a way to expand into new areas and fuel growth. The openness and interoperability of cloud technology make it easier to bring on these acquired companies and integrate new solutions into Deutsche Börse’s broader portfolio. The company can also tap into the cloud’s economies of scale, using existing infrastructure and technology to save time and money in order to focus on what matters most: serving its clients.
- A leap forward in emerging technologies: Google Cloud’s global scale and innovative technology will allow Deutsche Börse to accelerate its push into distributed ledger/blockchain technology, robotics, AI and advanced analytics.
- Additional transparency and security: For a company whose mission it is to create trust in the markets of today and tomorrow, security and compliance are of utmost importance.
Deutsche Börse and Google Cloud are committed to addressing the regulatory requirements for the financial industry and have worked closely to ensure workloads are migrated in a safe and secure manner. “As part of our collaboration with Google Cloud we are looking forward to jointly define unique data security solutions for the financial industry,” said Dr. Böhm. “We are excited to continue our journey into the cloud, driving data security and compliance for cloud services to the next level, to the benefit of our customers as well as our company.”

Security and compliance in financial services are always a moving target as the threat landscape changes, new technologies arise, and legislation evolves. That’s why Google Cloud and Deutsche Börse are looking to set the standard when it comes to trust and the cloud. We look forward to a long and fruitful partnership.
Source: Google Cloud Platform

Protecting your GCP infrastructure at scale with Forseti Config Validator

One of the greatest challenges customers face when onboarding in the cloud is how to control and protect their assets while letting their users deploy resources securely. In this series of four articles, we’ll show you how to start implementing your security policies at scale on Google Cloud Platform (GCP). The goal is to write your security policies as code once and for all, and to apply them both before and after you deploy resources in your GCP environment.

In this first post, we’ll discuss two open-source tools that can help you secure your infrastructure at scale and scan for non-compliant resources: Forseti and Config Validator. You can see them in action in this live demo from Alex Sung, PM for Forseti at Google Cloud.

In follow-up articles, we’ll go over how you can use policy templates to add policies to your Forseti scans of your GCP resources (using the enforce_label template as an example). Then, we’ll explain how to write your own templates, before expanding to securing your deployments by applying your policies in your CI/CD pipelines using the terraform-validator tool.

Scanning for violations with Forseti and the config_validator scanner

Cloud environments can be very dynamic. It’s a best practice to use Forseti to scan your GCP resources on a regular basis (a new scan runs every two hours by default) and evaluate them for violations. In this example, Forseti will forward its findings to Cloud Security Command Center (Cloud SCC), using a custom notifier. Cloud SCC also integrates with the most popular security tools within the Google Cloud ecosystem, like DLP, Cloud Security Scanner and Cloud Anomaly Detection, as well as third-party tools (Chef Automate, Cloudflare, Dome9, Qualys, etc.). This provides a single pane of glass for your security and operations teams to look for violations. Here is an example of the Cloud SCC dashboard with a few security sources set up.

At a high level, here’s what you need to do to get your Forseti integration working:

1. Deploy a basic Forseti infrastructure with the config_validator scanner enabled in a dedicated project.
2. Add a new SCC connector for Forseti manually via the UI (the alternative is to use the API directly at this point).
3. Update your Forseti notifier configuration to send the violations to SCC.
4. Add your custom policy library to the Forseti server GCS bucket so that the next scan applies your constraints to your infrastructure. You can use Google’s open-source policy-library as a starting point for this.

Let’s go over these steps in greater detail.

1. Forseti initial setup

The official Forseti documentation lists a few options to deploy Forseti in your organization. A good option is the Forseti Terraform module, since it’s easy to maintain, and because it’s easy to deploy Terraform templates from a CI/CD pipeline, as you’ll see in later posts. Another way to install Forseti is to follow this simple tutorial for the Terraform module (it includes a full Cloud Shell tutorial).

There are 139 inputs (for v2.2.0) you can play with to configure your Forseti deployment if you feel like it. For this demo, we recommend you use the default values for most of them.

First, clone the repo. Then, set some variables to specify the input you need in a new terraform.tfvars file; both steps are sketched below.

Note: Make sure your credential file is valid and corresponds to a service account with the right permissions, unless you are leveraging an existing CI/CD pipeline that handles that part for you.
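Here is a minimal sketch of those two steps. The repository URL is the Forseti Terraform module’s home on GitHub at the time of writing, and the input names and values are illustrative; check them against the module version you deploy.

```
# Clone the Forseti Terraform module.
git clone https://github.com/forseti-security/terraform-google-forseti.git
cd terraform-google-forseti

# Point Terraform at a service account key with the required roles.
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/forseti-sa-key.json"

# Set the inputs you need in a new terraform.tfvars file (placeholder values).
cat > terraform.tfvars <<'EOF'
project_id = "my-forseti-project"
org_id     = "123456789012"
domain     = "example.com"
EOF
```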
Check out the module helper script to create the service account using your own credentials if needed.

You can now test your setup. First run terraform init, then create a Terraform plan from these templates and save it as a file; if everything looks good, apply the plan. (These commands are sketched, along with the later configuration steps, after step 4 below.) You now have a Forseti client and a Forseti server in your project (among many other things, like a SQL instance and Cloud Storage buckets).

2. Setting up Cloud SCC

At this point, you’ll need to follow these steps to configure Cloud SCC to receive Forseti notifications. You simply need to create a new source that you’ll use in your Forseti configuration.

Note: Stop at step #4 (do not follow step 5) in the Cloud SCC setup instructions, as you’ll do this using Terraform instead of manually updating the Forseti server.

If you follow the steps in the link above to add the Forseti Cloud SCC Connector as a new security source, it should appear in your Cloud SCC settings. Take note of your Forseti Cloud SCC Connector source ID and service account for the next step.

3. Updating the Forseti configuration

Now you’ll need to update your Forseti infrastructure to configure the server to send the notifications to Cloud SCC, by extending your terraform.tfvars file (sketched below). If you run terraform plan and terraform apply again, your Forseti server should now be correctly configured. You can check the /home/ubuntu/forseti-security/configs/forseti_conf_server.yaml file on the Forseti server to see the changes, or run the forseti server configuration get command. Then, add your policy library to let the config_validator scanner check for violations once everything is set up.

4. Setting up the config_validator scanner in Forseti

Now, you need to import your policy-library folder into the Forseti server Cloud Storage bucket and reload its configuration. Please refer to the Config Validator user guide to learn more about these steps. Then, once the config validator scanner is enabled, you can add your own constraints to it. You do this by updating the Forseti Cloud Storage server bucket, following these instructions. The end result (after Forseti first runs) is a policy-library folder in the server bucket with your constraints and templates in place.

Note: All of these steps should be automated in your CI/CD pipeline. Any merge to your policy-library repository should trigger a build that updates this bucket. As a general rule, constraints need to be added in the policies/constraints folder and must use a template from the policies/templates folder.

You can also check that the config-validator service is running and healthy (also sketched below).

Now you can test out your setup by running a scan and sending the violations manually to Cloud SCC. This is just to confirm that everything is working as expected, and to avoid waiting until the next scheduled scan to troubleshoot it. The traditional way to query a Forseti server is to SSH into the Forseti client, use the console UI, and create a new model based on the latest inventory (or create a new inventory if you need to capture newly created resources). Using this model, you can run the server scanners manually and finally run the notifier command to send the results to Cloud SCC. A quicker way to test this setup is to run the same script that runs automatically on the server every two hours.
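The sketches below consolidate the commands referenced in this walkthrough. The Terraform commands are standard; the tfvars input names (config_validator_enabled, cscc_violations_enabled, cscc_source_id), the config-validator service name, and the run script path follow the Forseti module and docs as of this writing, so verify them against the versions you deploy.

```
# Step 1: initialize, plan, and apply the Forseti deployment.
terraform init
terraform plan -out=forseti.tfplan
terraform apply forseti.tfplan
```

```
# Step 3: extend terraform.tfvars to enable the config_validator scanner
# and Cloud SCC notifications (the source ID is a placeholder).
cat > terraform.tfvars <<'EOF'
project_id               = "my-forseti-project"
org_id                   = "123456789012"
domain                   = "example.com"
config_validator_enabled = true
cscc_violations_enabled  = true
cscc_source_id           = "organizations/123456789012/sources/0000000000000000"
EOF
```

```
# Step 4: on the Forseti server VM, confirm the config-validator service is healthy.
sudo systemctl status config-validator
```

```
# Quick test: run the scheduled scan script by hand on the Forseti server.
cd /home/ubuntu/forseti-security
bash install/gcp/scripts/run_forseti.sh
```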
Simply SSH into the server and run that script manually (from /home/ubuntu/forseti-security, as in the last sketch above). This gets the latest data from the Cloud Storage bucket, and runs all the steps mentioned earlier (creating a model from the latest inventory, running the scanners, and then the notifiers) in an automated fashion. Once it runs successfully, you can check in Cloud SCC what violations (if any) were found. Since you didn’t add any custom constraints to the policy-library/policies/constraints folder, the config_validator scanner shouldn’t find any violations at this point.

If you are having issues with any of the setup steps, please read the Troubleshooting tips section for common issues that people run into.

Troubleshooting tips

Forseti install issues

If you do not see the forseti binary when you SSH into the client or the server, check your various log files to see whether the install was successful. A missing binary is usually a red flag that your Forseti installation failed; you cannot move forward from there until you fix the situation. Most of the useful logs are in /var/log: syslog, cloud-init.log, cloud-init-output.log and forseti.log. Do not hesitate to run terraform destroy, and double-check every variable you passed to the module to rule out permission issues.

Config Validator issues

Forseti runs scanners independently, based on the server configuration file. If everything is configured properly, when you run the forseti scanner command you should see, among other things, the config validator scanner listed in the output. If the Forseti config validator scanner does not run, check the Forseti server configuration file to see whether it’s enabled: in /home/ubuntu/forseti-security/configs/forseti_conf_server.yaml, the config_validator entry under scanners should have enabled set to true. Also check that the current configuration has the same value, using forseti server configuration get | grep --color config_validator to make it easier to spot. Finally, verify that the config_validator service is up and running.

If your latest constraint changes are not automatically reflected in your scan results (even though they should be), you can upload the latest version to the Cloud Storage bucket and restart the config_validator service on the server.

Cloud SCC issues

If you don’t see the Forseti connector in your Cloud SCC UI, restart the steps to enable the Forseti connector in SCC, or check that your connector is enabled in the settings. If you don’t receive the violations you can see on the Forseti server, make sure that the Forseti server’s service account has the Security Center Findings Editor role assigned at the org level.

Next steps

At this point, you are ready to add your own constraints to your policy-library and start scanning your infrastructure for violations based on them. The Forseti project offers a great list of sample constraints you can use freely to get started. In the next article of this series, we will add a new constraint to scan for labels in your existing environment. This can prove quite useful to ensure your environment is as you expect it to be (no shadow infrastructure, for instance) and lets you react quickly whenever a non-compliant (or, in this case, mislabeled) resource is detected.

Useful links

Forseti / Config Validator:
- Forseti Config Validator overview
- User Guide
- Writing your own custom constraint templates

Repositories:
- Forseti Terraform module
- Forseti source code
- Config Validator source code
- Config Validator policy library
Source: Google Cloud Platform

WPP unlocks the power of data and creativity using Google Cloud

Over the past year, our customers have shared many stories with me about how cloud technology is transforming their businesses. One theme that frequently comes up is how the cloud enables marketers to deliver better customer experiences across online and offline campaigns, email, apps, websites, and more. By stitching together these consumer touchpoints via the cloud, marketers can ultimately produce more seamless, consistent experiences—and better outcomes for the businesses they support.

One of the companies at the forefront of this transformation is WPP. As the world’s largest advertising holding company, WPP operates across 112 countries and supports nearly three-quarters of the Fortune 500 with media, creative, public relations, and marketing analytics expertise. It can be challenging for a company operating at this scale to manage and unlock the value of its data across its businesses. Information can become siloed, with valuable insights lost within the organization. In 2018, WPP CEO Mark Read recognized this challenge and set forth a new vision. By better aligning its technology, creative, and talent, WPP aims to deliver transformative experiences for audiences and superior results for its clients.

“Creativity and technology are the two key pillars of WPP’s future strategy. Creativity is what differentiates us and technology allows us to scale. In the first year of our transformation journey we have invested significantly in our technology capabilities and our strong partnership with Google Cloud is key to helping us realise our vision. Their vast experience in advertising and marketing combined with their strength in analytics and AI helps us to deliver powerful and innovative solutions for our clients,” said Mark Read, WPP CEO.

To fast-track its business transformation, WPP chose Google Cloud for our technology and expertise, and is focusing on three key initiatives:

- Campaign Governance: Creating better ways of working through cloud-driven automation for campaign set-up, creative management, reporting and optimization across the WPP network.
- Customer Data Management: Bringing together data points from the customer, market intelligence and WPP data into an open data platform to enable better insights, planning and activation.
- WPP AI: Utilizing Google Cloud’s ML tools and technology to help fuel innovation in WPP’s analytics, campaign optimization, content intelligence and customer experience practices.

WPP is deploying Google Cloud across multiple projects—from building a media planning stewardship system, to improving campaigns with tools for image recognition, sentiment analysis, and natural language processing. By incorporating cloud technology into WPP’s daily practices, teams can speed up their time-to-insight and uncover new opportunities for clients. Also, by connecting Google Cloud to other products like the Google Marketing Platform, WPP can deliver better experiences for its audiences across media and marketing.

WPP has already begun putting its data-forward thinking to work. For example, Wunderman Thompson, a global agency within WPP, worked with GlaxoSmithKline to develop the Theraflu Flu Tracker. Using statistical data from Mexico’s National Institute of Epidemiology, along with weather data and other indicators, it developed deep learning models on Google Cloud that predicted where and when flu cases would occur in Mexico, with up to 97% accuracy.
Wunderman turned this knowledge into relevant digital ads that communicated the risk of flu in the 32 federal entities in Mexico. This campaign increased e-commerce sales by nearly 200%, won a Bronze Lion at Cannes, and helped people be better informed about their flu risk.

This is just one example of how WPP is tapping into data, obtaining insights at scale, and using creativity to produce meaningful business results—and it’s only the beginning. We are proud to collaborate with WPP to deliver truly transformative experiences to consumers using Google Cloud.
Source: Google Cloud Platform

A CIO’s guide to cloud success: decouple to shift your business into high gear

They say 80% of success is showing up—but unfortunately for enterprises moving to the cloud, this doesn’t always hold up. A recent McKinsey survey, for example, found that despite migrating to the cloud, many enterprises are nonetheless “falling short of their IT agility expectations.” Because CTOs and CIOs are struggling to increase IT agility, many organizations are unable to achieve their larger business goals. McKinsey notes that 95% of CIOs indicated that the majority of the C-suite’s overall goals depend on them.

The disconnect between moving to the cloud and successful digital transformation can be traced back to the way most organizations adopt cloud: renting pooled resources from cloud vendors or investing in SaaS subscriptions. By adopting cloud in this cookie-cutter way, an enterprise basically keeps doing what it’s always done—perhaps just a little faster and a little more efficiently.

But we’re entering a new age. Cloud services are increasingly about intelligence, automation, and velocity—not just the economies of scale offered by big providers renting out their infrastructure. As McKinsey notes, enterprises sometimes stumble because they use the cloud for scale, but do not take advantage of the agility and velocity benefits it provides.

At its core, achieving velocity and agility isn’t about where an application is hosted so much as how fast, freely, and efficiently enterprises can launch and adjust strategies, whether creating ways to interact with customers on new technology platforms, quickly adding requested features to apps, or monetizing data. This in turn relies on decoupling the dependencies between different systems and minimizing the amount of manual coordination that enterprise IT typically has to perform. The result is more loosely coupled distributed systems that are far better equipped for today’s dynamic technology landscape. This concept of decoupling, and how it can accelerate business results, drives much of what we do at Google—and it has strongly informed how we built Anthos, our open source-based multi-cloud platform that lets enterprises run apps anywhere, but also achieve the elusive IT agility and velocity that enterprises crave.

Decoupling = agility: shift your development into high gear

Migrating to the cloud does not, by default, transform an enterprise, because digital transformation isn’t about the cloud itself. Rather, it’s about changing the way software is built and the consequent explosion in new business strategies that software can support—from selling products via voice assistants, to exposing proprietary data and functionality to partners at scale, to automating IT administration and security operations that used to require manual oversight.

Specifically, modern software development eschews ‘monolithic’ application architectures whose design makes it difficult to update or reuse functionality without impacting the entire application. Instead, developers increasingly build applications by assembling small, reusable, independently deployable microservices. This shift not only makes software easier to reuse, combine, and modify (which can help an enterprise be more responsive to changing business needs), but also lets developers work in small parallel teams rather than large groups (which helps them create and deploy applications much faster).
What’s more, microservices exposed as APIs can help developers leverage resources from a range of providers spread across many different clouds, giving them the tools to create richer applications and connected experiences. This decoupling of services from an application, and of developers from one another, is often done via containers. By abstracting applications and libraries from the underlying operating system and hardware, containers make it easier for one team of developers to focus on its work without worrying about what any of the teams with which it’s collaborating are doing.

Containers also represent another important form of decoupling, one that can dramatically change the relationship among an IT department, servers, and maintenance. Thanks to containers, for example, many applications can reside on the same server without impacting one another, which reduces the need for application-specific hardware deployments. Containers can also be ported from one machine to another, opening opportunities for developers to create applications on-premises and scale them via the cloud, or to move applications from one cloud to another based on changing needs. This abstraction from the hardware they run on is one reason containers are often referred to as “cloud-native.”

This overview only scratches the surface, but the point is, by decoupling functionality and creating new architectures built around loosely coupled distributed systems, enterprises can empower their developers to work faster in smaller, parallel teams and unlock the IT agility through which modern, software-driven business strategies are executed.

But doesn’t decoupling increase complexity?

Containers and distributed systems offer many advantages, but adoption isn’t as simple as flipping a switch. Decomposing fat applications into hundreds of smaller services can increase an enterprise’s agility, but orchestrating all those services can be tremendously complicated, as can authenticating their users and protecting against threats. When millions of microservices are communicating with one another, it becomes literally impossible to put a human being in the middle of those processes, requiring automated solutions. Many enterprises consequently struggle not only with governance across these distributed environments, but also with identifying the right solutions to put in place.

Moreover, not everything within a large enterprise will evolve at the same pace. Running containers in the cloud can help an enterprise focus on building great applications while handing off infrastructure management to a vendor. In fact, teams in almost every large enterprise are already operating this way—but other teams accustomed to legacy approaches may require a more incremental transition. Additionally, enterprises may have a variety of reasons, whether strategic or regulatory, for keeping data on-prem—but they may still want ways to apply cloud-based analytics and machine learning services to that data and otherwise merge the cloud with their on-prem assets. Assembling the orchestration, management, and monitoring solutions for such deployments has historically been difficult. Another significant challenge is that though containers are intrinsically portable, the various public clouds provide different platforms, which can make moving containers—let alone giving developers and administrators consistent experiences—quite difficult.
Many open-source options are not the panacea they once seemed, because the open-source version of a solution and the managed deployment sold by a cloud provider may be meaningfully different. These challenges can be particularly vexing because enterprises want the flexibility to change cloud vendors, utilize multiple clouds, and otherwise avoid lock-in. Helping enterprises enjoy the benefits of distributed systems while avoiding these challenges shaped our development of Anthos.

Anthos: Agility minus the complexity

Google runs multiple web services with billions of users and is an enormously complex organization whose IT systems connect tens of thousands of employees, contractors, and partners. No surprise, then, that we’ve spent a lot of time solving the puzzle of distributed systems and their dynamic, loosely coupled components. For example, we open-sourced Kubernetes, the de facto standard for container orchestration, and Istio, a leading service mesh for managing microservices—both are major components in Anthos, and both are based on internal best practices.

Istio provides systematic, centralized management for microservices and enables what is arguably the most important form of decoupling: policies from services. Developers supported by Istio are free to write code without encoding policies into their microservices, allowing administrators to change policies in a controlled rollout without redeploying individual services. This automates away the expensive, time-consuming coordination and bureaucracy traditionally required for IT governance and helps accelerate developer velocity.

Recognizing that enterprises demand choice and openness, Anthos launched with hybrid support and will soon include multi-cloud functionality as well, with all options offering simplified management via single-pane-of-glass views, policy-driven controls, and a consistent experience across all environments, whether on Google Cloud Platform, in a corporate data center with Anthos deployed on VMware, or, after our coming update, in a third-party cloud such as Azure or AWS. Because Anthos is software-based, on-prem deployments don’t require stack refreshes, letting enterprises utilize existing hardware investments while ensuring developers and administrators have a consistent experience, regardless of where workloads are located or whose hardware they run on.

We’re already seeing fantastic momentum with customers using Anthos. For example, KeyBank, a superregional bank that’s been in business for almost 200 years, is adopting Anthos after using containers and Kubernetes for several years for customer-facing applications. “The speed of innovation and competitive advantage of a container-based approach is unlike any technology we’ve used before,” said KeyBank’s CTO Keith Silvestri and Director of DevOps Practices Chris McFee in a recent blog post, adding that the technologies also helped the bank spin up infrastructure on demand when traffic spiked, such as during Black Friday or Cyber Monday. KeyBank chose Anthos to bring this agility and “burstability” to the rest of its IT operations, including internal-facing applications, while staying as close as possible to the open-source version of Kubernetes. “We deploy Anthos locally on our familiar and high-performance Cisco HyperFlex hyperconverged infrastructure,” Silvestri and McFee noted.
“We manage the containerized workloads as if they’re all running in GCP, from the single source of truth, our GCP console.”

Anthos includes much more—such as Migrate for Anthos, to auto-migrate virtual machines into containers in Google Kubernetes Engine (GKE), and an ecosystem of more than 40 hardware and software partners. But as the preceding attests, at the highest level, the platform helps enterprises balance developer agility, operational efficiency, and platform governance by facilitating the decoupling central to successful digital transformation:

- Infrastructure is decoupled from the applications
- Teams are decoupled from one another
- Development is decoupled from operations
- Security is decoupled from development and operations

Successful decoupling minimizes the need for manual coordination, cuts costs, reduces complexity, and significantly increases developer velocity, operational efficiency, and business productivity. Decoupling delivers a framework, implementation, and operating model to ensure consistency across an open, hybrid, and multi-cloud future—a future Anthos has been built to serve.

Check out McKinsey’s report “Unlocking Business Acceleration in a Hybrid Cloud World” for more about how hybrid technologies can accelerate digital transformation, and tune in to our “Cloud OnAir with Anthos” session to learn even more about how Anthos is helping enterprises digitally transform—including special appearances by KeyBank and OpenText!
Source: Google Cloud Platform

Anthos simplifies application modernization with managed service mesh and serverless for your hybrid cloud

For decades, organizations built and ran applications in their own on-premises data centers. Then, they started deploying and running applications in the cloud. But for most enterprises, the thought of moving all-in to the cloud was too daunting. They worried they would need different developers and tools for each environment, and that they wouldn’t have a consistent management interface to ensure the environments were compliant with their security policies. To address these challenges, we introduced Anthos, a services platform that brings applications into the 21st century, with the flexibility to run in any environment—whether it’s cloud-native or based on virtual machines.

Today, we’re announcing new Anthos capabilities to further simplify your application modernization journey:

- Anthos Service Mesh, which connects, manages, and secures microservices
- Cloud Run for Anthos, which enables you to easily run stateless workloads on a fully managed Anthos environment

In addition, Anthos Config Management now includes capabilities to help your teams automate and enforce org-specific policies. Binary Authorization, meanwhile, helps ensure that only validated, verified images are integrated into your managed build-and-release process.

Tame microservices with Anthos Service Mesh

Increasingly, many organizations consider microservices architectures to be an essential way to modernize their applications. But moving from monolithic applications to large numbers of microservices increases operational complexity. To address this, you can use a service mesh—an abstraction layer that provides a uniform way to connect, secure, monitor, and manage microservices. A service mesh uses high-performance, lightweight proxies to bring security, resiliency, and visibility to service communications, freeing your developers to do what they do best: build great applications. A service mesh helps you manage the lifecycle and policies for this intelligent data plane and gives you secure and easy-to-manage microservices-based applications.

As a managed offering, Anthos Service Mesh, now in beta, makes it easy to add this abstraction layer to your environment. Built on Istio open APIs, it lets you easily manage and secure inter-service traffic with a unified administrative interface, and provides uniform traffic controls that span your environments. In addition, Anthos Service Mesh gives you deep visibility into your application traffic, thereby improving your development experience and making it easier to troubleshoot these complex environments. Deep visibility helps keep your applications running smoothly.

Serverless flexibility and velocity across on-prem and cloud

Serverless computing provides you with a number of benefits: the ability to run workloads without having to worry about the underlying infrastructure, to execute code only when needed, and to autoscale from zero to n depending on traffic, all wrapped in a simple developer experience. Today, we are excited to bring this experience to Anthos through Cloud Run for Anthos, now in beta. Based on Knative, an open API and runtime environment, Cloud Run for Anthos enables you to be more agile by letting you write code like you always do—without having to learn advanced Kubernetes concepts. It enforces best practices and provides deep integration with Anthos by offering advanced networking support and enabling cloud accelerators, which means your workloads can all run in the same cluster.
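As a hedged illustration of the developer experience (not taken from the original post), deploying a stateless service to a Cloud Run for Anthos cluster looked roughly like this at launch. The service, project, image, and cluster names are placeholders; check the flags against your gcloud version.

```
# Deploy a container as a serverless service onto an Anthos GKE cluster.
gcloud beta run deploy hello-svc \
    --image=gcr.io/my-project/hello:latest \
    --platform=gke \
    --cluster=my-anthos-cluster \
    --cluster-location=us-central1-a
```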
Cloud Run for Anthos delivers portability with consistency, so you can flexibly run your workloads on Google Cloud or on-premises, all with the same consistent experience. It helps you adopt cloud on your own terms by letting you adopt serverless wherever you are, even on-premises.

Modernize application security to increase organizational agility

In addition to simplifying the development and operations of modern applications, Anthos includes guardrails that provide security by default. Enterprises can automate their security operations by enforcing consistent policy across environments, isolating workloads with different risk profiles, and deploying only trusted workloads.

With Anthos Service Mesh, you have uniform policies for enforcing service-aware network security, including encryption in transit, mutual authentication, and powerful access controls (see the illustrative policy sketch below). This allows your IT teams to implement zero-trust security that moves across environments with your application, without making application code changes, letting you focus on delivering critical business functions faster.

Binary Authorization helps you build defined security checks into the development process earlier, making sure you deploy only trusted workloads in your environments. By ensuring workloads are assessed and validated before they are deployed, enterprises can have confidence that these workloads can be trusted.

Finally, using the new Policy Controller and Config Connector features of Anthos Config Management, you can continuously enforce consistent security policies and controls across your cloud environments, including Google Cloud, on-prem, and other clouds. Learn more about how Anthos helps organizations modernize their approach to application security in our Anthos Security white paper.

Expanding the Anthos partner ecosystem

Anthos launched with more than 30 hardware, software, and system integration partners ready to help customers adopt Anthos right out of the gate. Today, that number stands at more than 40, and partners report exceptional momentum for the platform. Atos, Cognizant, Deloitte, HCL, Infosys, TCS, and Wipro are some of the global systems integrators who are helping deliver Anthos to their clients, and they are doubling down on their efforts. “Deloitte has been working with Google since long before the formal announcement of Anthos at Google Cloud Next in April,” said Tim O’Connor, Principal, Deloitte Consulting LLP. “Since then we’ve supercharged our investments and have been extending existing Anthos assets and building teams to bring this powerful and game-changing technology to the marketplace through a dedicated group of practitioners focused on hybrid enablement through Anthos.”

A complete platform for modernizing organizations

With its comprehensive capabilities for container management, service mesh, security, monitoring and logging, as well as developer productivity, Anthos helps your entire organization benefit from application modernization. For developers, Anthos simplifies application deployment with access to services like GCP Marketplace and Cloud Run. Operations teams benefit from improved resource utilization and reuse, and visibility into all available services—all from a single management plane. Meanwhile, Anthos lets security professionals roll out consistent policies across their deployments, encrypt sensitive traffic, and ensure that only trusted binaries are running in the environment.
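Anthos Service Mesh builds on Istio’s open APIs, so a service-level access control that lives outside application code can be sketched with a standard Istio resource. This is an illustration only: all names are placeholders, and the exact API version depends on the Istio release in your mesh.

```
# Allow only the web frontend's service account to call the payments service.
# Namespaces, labels, and service-account names are hypothetical.
kubectl apply -f - <<'EOF'
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-frontend
  namespace: payments
spec:
  selector:
    matchLabels:
      app: payments
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/web/sa/frontend"]
EOF
```

Because the policy is decoupled from the services themselves, an administrator can tighten or roll back access without redeploying a single workload.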
All the while, Anthos puts your organization on the path to the cloud, in the configuration and at the pace that works for you. For a technical deep dive into service mesh, download our new ebook, The Service Mesh Era: Architecting, Securing and Managing Microservices with Istio. And to understand how Anthos can take your cloud environment to the next level, check out A CIO’s guide to cloud success: decouple to shift your business into high gear.
Source: Google Cloud Platform

How APIs help National Bank of Pakistan modernize the banking experience

Editor’s note: Today we hear from Zohaib Ali Khan, head of mobile financial services, and Nadir Ikram, technical lead at the National Bank of Pakistan (NBP), the country’s largest government-owned bank. Read on to learn more about how NBP uses APIs to help implement digital banking and reduce the burden of legacy manual processes.

NBP, Pakistan’s largest government-owned bank, serves private and commercial customers and also acts as the government treasury bank. This means that it handles all government transactions, including disbursements and cash collection. In the past, every government transaction had to be handled physically through the NBP branch network. But in a populous country like Pakistan, managing the huge volume of financial transactions is a big task, especially if a single bank is the only conduit. While we’re still in the early stages, here are a few ways that we’re actively working to find solutions that overcome these challenges.

Using APIs to increase access

Our digital banking implementation team, which includes product developers and a small in-house think tank, is responsible for developing new technology and out-of-the-box solutions tailored to the requirements of different areas of the bank. Recently, we developed a plan to open up the NBP government mandate to other banks and third-party fintech partners. Under this new model, instead of relying solely on our own channels, customers can now transact through fintech apps and other approved Pakistani banks.

To roll out a solution that would be reliable, scalable, and secure enough to meet our needs, we adopted Google Cloud’s Apigee API management platform. The platform allows us to accelerate our product development so that we can compete in the fintech space. It also gives our customers access to a wide range of banking services through our own and partner channels.

As a government bank, NBP has to deal with a lot of procedural hurdles, which have slowed our entry into the fintech market. Additionally, legacy systems required us to develop solutions for each particular channel and use case. APIs and API management increase the reusability of our services, while also making it easier and faster to get our services to market. In fact, we’ve seen the time it takes to offer a new solution drop by 20%. Apigee not only helps us achieve our go-to-market goals, it enhances our capacity to capture new and unique use cases across multiple channels. We appreciate the speed, agility, and security that Apigee brings us, along with its many out-of-the-box features.

Reducing barriers to consumer services

One example of how we’re using Apigee is our passport collection use case. In Pakistan, to obtain a passport, citizens have to visit an NBP branch. This can involve waiting in long lines to deposit the government-required passport insurance fee. The sheer volume of these transactions was overwhelming our local branch teams, and costs for this type of manual transaction were high. Furthermore, the government had problems reconciling the fee collection with the passport transactions. To address these concerns, we developed an API that allows not only NBP branches, but also third-party banks and fintechs, to accept passport issuance fees. Customers can now visit the bank or fintech provider of their choice, reducing the load on NBP, while government passport department inspectors can now easily reconcile these transactions.
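Purely as an illustration of the model (the host, path, and fields below are hypothetical, not NBP’s published API), a partner bank or fintech collecting a passport fee through an Apigee-managed API might make a call like this:

```
# Hypothetical example: submit a passport issuance fee via the API gateway.
curl -X POST "https://api.example-bank.com.pk/v1/passport-fees" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"applicantId": "PK-12345678", "amount": 3000, "currency": "PKR", "channel": "partner-bank"}'
```

The gateway handles authentication, quotas, and analytics, so each new partner consumes the same reusable service rather than requiring a channel-specific integration.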
We also developed a bill payment solution with Apigee that lets customers pay utility bills online. Previously, the NBP process was inefficient: we had an in-person bill payment mechanism operating in our 1,500+ bank branches. Now, we’ve integrated the payment API with the branch channel so that bill payment is automated, whether it takes place in a branch or online.

Increasing the API footprint inside and outside the bank

Now that we’ve implemented the Apigee platform, our partner ecosystem is benefiting from it and growing, too. We’ve enrolled a wide range of fintechs that are developing products in the corporate payment space. We’re also planning to partner with several incubation centers to provide our APIs via sandbox environments once our Apigee developer portal goes live. Other fintechs will be enrolled through the incubation centers as well. Finally, alternative third-party partners, such as fintechs and banks, will consume our APIs and/or partner with us for product development.

We’ve currently published 10 APIs, and by the end of this year we expect to have 25 to 30 live. We’re looking forward to implementing monetization, but not necessarily with revenue as the primary focus. As NBP is a government-owned bank, we have a responsibility to act as a catalyst for smaller players, and we see monetization expertise as an opportunity to fulfill our mandate to help fintech startups grow and mature. In the long run, API revenues will certainly become more important, but the short-term goal is seeding innovation in the marketplace and providing the best possible retail banking experience for our customers, nationwide.

To learn more about API management on Google Cloud, visit the Apigee page.
Source: Google Cloud Platform

Quantum Metric gets answers from customer data at light speed

Editor’s note: Today we’re hearing from the founder of Quantum Metric, a digital intelligence platform that analyzes huge amounts of digital customer data to improve the customer experience, enhance sales, and increase loyalty. The company credits a huge leap in innovation—along with a 10-fold increase in business—to its decision to adopt Google Cloud. Here’s more detail on how Quantum Metric uses Google Cloud’s BigQuery.

At Quantum Metric, we’re in the business of bringing our customers business insights that are based on customer experience data and analytics for mid-market and Fortune 500 companies. Our software, powered by big data, machine intelligence, and Google Cloud, helps our customers identify, quantify, prioritize, and measure opportunities to improve digital experiences. As companies move to a more agile product lifecycle, including continuous deployment and continuous integration, they’re finding that it’s critical to receive perpetual quantified feedback and insights from their data in real time to understand where the largest opportunities exist.

Each year, billions of customer interactions are captured through browsers or mobile apps on PCs, tablets, and mobile devices. This data, fed into the Quantum Metric platform, can show whether a customer had a password problem they couldn’t solve, or struggled when trying to purchase something and abandoned their cart. It also can show whether the customer tried in vain to complete an online change to their service provider’s subscription, tried to reach tech support, or couldn’t find the size or color they were looking for while shopping online. Most importantly, the Quantum Metric platform quantifies the business value of the issue, helping organizations prioritize where they can make the largest impact on their business.

Success overwhelms our initial architecture

Initially, the Quantum Metric experience analytics software ran on a MySQL open-source relational database management system (RDBMS). The MySQL RDBMS worked great for simple queries, when there was a specific question to ask of the data. Soon, though, we knew we needed to offer more advanced data science capabilities. Our bigger customers wanted to ask questions across very large data sets—days, weeks, months, and years worth of data. They wanted to pose iterative questions using complex filters to answer their most challenging business questions. With more complex queries across more data, response times from our RDBMS went from 100 to 500 milliseconds to as long as 20 minutes. That delay slowed our time to insight, which also reduced the value we could provide to our customers, since iterative exploration and analysis require real-time query responses. Because of the need for real-time responses, there were certain questions that we just weren’t able to ask of the data. It became clear that we needed a much more robust data warehouse solution.

There were also operational challenges with MySQL and massive-scale data ingestion. We spent a lot of time into the wee hours of the night and morning handling errors and recovering databases. We tried to address these challenges by sharding, partitioning, and indexing the data to optimize for the types of questions customers were asking. But the problems were escalating and happening more often, from once a month across the customer base to monthly for at least 20 different customers.
We could tune the platform for today and tomorrow’s workload, with good guesses at where indexes could be used, but we simply couldn’t continue to horizontally scale MySQL in a cost-efficient and operationally efficient manner.

Speed breeds innovation

Once we started exploring options that could better scale with our business, we looked at NoSQL technologies like Cassandra (a partitioned row-store database), MySQL’s Column Store (a columnar store database), and Vertica (a columnar store database)—each with unique ways of handling data storage and accessibility. But with high volumes of complex queries across large data stores, all of these solutions began to fail, bogged down with multiple, simultaneous users. We could have solved the problems with more raw compute and storage, but it would have been prohibitively expensive to run and would have required a large team to operate.

We then decided to try BigQuery, and it was transformative. We connected our front end to BigQuery via APIs. Once data is 15 minutes old, it is automatically extracted, transformed, and loaded (ETL) into BigQuery. We continuously update the legacy MySQL RDBMS so its data is integrated with BigQuery data when queries require real-time data. Most query response times are within 100-200 milliseconds, matching what we initially experienced with MySQL. When traffic from our customers scales up, we can now scale on demand to accommodate it, thanks to BigQuery’s hundreds of thousands of CPUs. Our customers no longer run into slow response times, and we’ve gained confidence that we can offer them—and their users—advanced insights and better experiences without delay. More importantly, with this scale of query power, we were able to build data science algorithms into the platform, which iteratively query BigQuery based on the results and help quantify the impact of a specific issue on a specific segment of users. Adding these capabilities was possible because of the massive scale of BigQuery.

In addition to new insights and fast response times, we wanted our customers to be able to ask complex questions using very simple language. For example: “Show me high-loyalty customers, located in specific geographic areas, who visited the web site at least five times, based on specific campaigns, and never booked a seat on a flight.” This was exactly the kind of query that was used by a major U.S. airline to understand the multi-million dollar impact of a failure affecting their most valuable customers: their high-loyalty members. (A hedged sketch of what such a query can look like appears below.) And this was all done while maintaining the highest standard of care for customer data and privacy by default, using multiple layers of encryption of data in transit and at rest, and a unique military-grade encryption approach. This approach encrypts PII, including even session cookies, with an RSA-2048 key available only to a select few and used for use cases such as fraud analysis.

It’s no exaggeration to say that BigQuery has totally transformed our business. It provides the petabyte scale and speed we were missing, in addition to taking care of operational maintenance, a task that was burying our team with MySQL. We’re now able to support some of the largest companies in the world that require real-time, petabyte-scale analytics. That lets them serve more customers faster with higher quality, and take advantage of BigQuery’s power and scale to innovate.
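To make the shape of such a query concrete, here is a hedged BigQuery Standard SQL sketch run through the bq command-line tool. The project, dataset, table, and column names are hypothetical, not Quantum Metric’s actual schema:

```
# Hypothetical: high-loyalty users in given regions and campaigns with
# five or more visits and no flight bookings.
bq query --use_legacy_sql=false '
SELECT user_id, COUNT(*) AS visits
FROM `my-project.analytics.sessions`
WHERE loyalty_tier = "high"
  AND region IN ("west", "southwest")
  AND campaign_id = "spring_fare_sale"
GROUP BY user_id
HAVING COUNT(*) >= 5
   AND COUNTIF(booked_flight) = 0'
```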
There are other cloud solutions that can address petabyte analytics, but the most unique value proposition of BigQuery was its on-demand scaling and operational management, with extremely cost-effective pay-as-you-go billing. While today we are at a scale where we have round-the-clock querying needs, our early days had very sporadic query loads where we needed instant scale, then a long lull of nothing. The unique business model of BigQuery’s pay-by-bytes-scanned pricing allowed us to have access to a massive-scale querying platform without breaking the bank.

Using BigQuery powers better customer experience and reduces purchasing friction

Among the many features of Quantum Metric is the ability to replay online customer sessions. In the example below, from a mobile e-commerce site, each action is displayed chronologically. Why did this customer’s transaction fail? Diving deeper, the session replay shows that the user tried to change the item quantity in the checkout cart, which resulted in a failed API call. Powered by BigQuery, Quantum Metric can then show how many other end users had this issue, with a simple click of “Show More Errors Like This.” With BigQuery’s massive scale, Quantum Metric will then quantify the impact of that issue, so companies can prioritize which issues need attention immediately. If this is the issue that’s impacting the business the most, our customer can use a single click to open a Jira ticket, forwarding the discovery to their product and engineering teams. Those teams can then re-engineer the experience in near-real time, addressing the failed API call and cutting out the frustrating time it takes for engineers to reproduce the issue.

Once we had a powerful back end in BigQuery, we realized that Quantum Metric’s platform could be used to ask complex questions of vast datasets. We built some of the processes that data scientists use to formulate those queries right into our product. For example, we added one series of processes to our platform to help customers understand whether a suspect issue is really impacting end-user experience. Is it something that should be prioritized and fixed? Does this really affect the user experience? Does it have financial impact? These and other questions can be pre-defined as a complex query in Quantum Metric to let our customers quickly gain insight into how an issue is impacting the business. Customers were blown away when they heard this was possible. It was the holy grail of what they were looking for in data science. It really sets us apart from our competition.

Today, with every company heavily dependent on data, those companies that can uncover and act on insights fastest are the ones that will succeed. BigQuery gives us the data warehouse platform we need to provide our customers with fast, reliable technology tools. It frees us from having to deal with the minutiae of technology infrastructure operations, so we can focus on finding and extracting the magic in customer data. With the power and scale of BigQuery, combined with the real-time capture of every user experience with 100% fidelity, we’re able to offer a self-service analytics platform that provides insights into digital journey friction points and acts as the indisputable arbiter of truth.

Learn more about Google Cloud, and learn more about Quantum Metric.
Source: Google Cloud Platform

Experian: From credit bureau to technology company with APIs

Editor’s note: Today we hear from Dang Nguyen, API Platform Product Owner at Experian, on how the company uses the Apigee API management platform to digitally transform from a traditional credit bureau to a true technology and software provider. Read on to learn how Experian uses APIs to help businesses make smarter decisions and individuals take financial control.

Chances are, when you think of Experian you think of a traditional credit bureau that provides credit reports. But Experian has transformed into a true technology and software provider. We gather, analyze, and process data in ways that other companies just can’t. Businesses use this data to make smarter decisions about credit and lending, as well as to prevent identity fraud and crime. We’re also able to use this data to help individuals take financial control of their own lives and access all kinds of financial services with products like Experian Boost.

Transforming data delivery, transforming the enterprise

A big part of our digital transformation has been based on our API program. We approached APIs with a concrete goal in mind. We knew exactly how we wanted to transform the business, and we had a set plan to achieve that. For us, this meant establishing an API center of excellence as a first step. Its sole purpose was to enable the business units to create their APIs quickly and correctly, then apply them properly. We then went out to the business units one by one so that we could train them to build their APIs in a customer-friendly way. We taught them the entire API process, from building APIs to giving internal and external developers access to them.

This approach is fundamentally different from previous ones. As far back as the 1990s, our customers connected to us via software applications installed on their systems. As technologies evolved, our services to customers evolved, and we began supporting XML-based transactions and custom integrations with our partners. Some of these integrations actually used VPNs rather than going through HTTP connections. We did custom database schemas, one-off processes, and all kinds of custom development. This meant that we had a team just for our IT system processes. This team kept growing as Experian continued to acquire new companies. Each acquisition brought a new way of doing integrations and business. We had a real challenge in standardizing development practices, which led to a lot of isolated environments inside the company. We had disparate data repositories and non-standard client connectivity, which hampered innovation.

Responding to customer demand for APIs

When we first started, we had some concrete goals. We wanted to grow our ecosystem, develop a massive reach for transaction and content distribution, power a new business model, and drive innovation. Basically, we wanted to use APIs to transform our business into a platform, and we wanted to build an ecosystem that leveraged this API platform to develop new solutions. Our leadership also understood the importance of delivering information to our customers in the way they wanted to consume it. Our customers had told us that they didn’t want software, they just wanted access to the data—and APIs are the easiest, most secure way to grant that access.

The Apigee API management platform as an enterprise solution

We knew we needed an API management platform to enable this step forward.
In addition to documentation and discoverability, we wanted a place to create APIs fast, with visibility into usage and other metrics. The Apigee API management platform from Google Cloud offered all of this and more. From the robust feature set to advanced security to the developer portal to analytics, Apigee provides everything we need to run an enterprise-class API program.

Now that we have our API program up and running, integrations no longer take months; in some cases, they take just minutes or seconds. Customers can simply look at our documentation on how to invoke APIs and begin consuming data in seconds (see the sketch at the end of this post). We started with this new model in our three largest markets: North America, the United Kingdom, and Brazil. Later, we rolled it out to Singapore and Australia, while deploying an on-premises platform for some of our North American business units that needed to provide their APIs internally only. Next, we went to EMEA. At this point, we’ve deployed Apigee company-wide, giving us a flexible deployment model that maintains a centralized platform and processes.

We continue to evangelize the program today, and we recently conducted a workshop with the Apigee team to train our EMEA business unit and get them onboarded to the platform. They were able to start developing API proxies right away, and they’re set to go into production with as many as nine of them. We also went live with three developer portals, which we call API hubs, in North America, the United Kingdom, and Brazil. As we expand, we don’t want to keep building separate developer portals for each region, because then we’d have too many. Instead, we plan to combine them into a single global developer portal that lets users select geographies of interest and see the information relevant to them.

Experian continues to evolve the types of products and services we offer. Thanks to Apigee, we have the flexibility, security, and technology to keep innovating and providing value to our business and our customers.
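As an illustration of the “consuming data in seconds” point above, here is a minimal sketch of what calling an Apigee-managed API can look like from a client’s perspective. The base URL, resource path, and header name are hypothetical, not Experian’s actual API; Apigee commonly verifies an API key passed in a request header, but the exact contract is defined by each API’s documentation.

```python
# A minimal sketch of invoking an API exposed through an Apigee-managed
# endpoint: a plain HTTPS call with an API key, instead of a VPN or a
# custom integration. All names below are hypothetical.
import requests

API_BASE = "https://api.example.com/credit/v1"  # hypothetical API hub base URL
API_KEY = "YOUR_API_KEY"  # issued through the developer portal

response = requests.get(
    f"{API_BASE}/reports/12345",    # hypothetical resource path
    headers={"apikey": API_KEY},    # Apigee often checks a key header like this
    timeout=10,
)
response.raise_for_status()
print(response.json())
```

The point of the design is that onboarding collapses to reading the documentation and obtaining a key from the portal; the proxy layer handles security, quotas, and analytics centrally.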
Quelle: Google Cloud Platform

Building an ecosystem of partners to help broadcasters transform their business

The cloud has made it possible for audiences to find the content they love anywhere, on any device, and as a result many broadcasters are looking to the cloud to help them grow and meet customer needs. Broadcasters are using the cloud to streamline content management workflows, modernize their video delivery infrastructure, and develop deeper relationships with audiences through data. We want to help broadcasters do exactly that, which is why we’ve worked with media and entertainment companies across the world to help them transform their workflows and bring their end users the best possible viewing experience across a multitude of devices. We’ve been incredibly inspired by all the ways Google Cloud customers like Sky, Viacom, Dish Network and many more are taking advantage of our globally reliable infrastructure, smart analytics capabilities, and ML and AI technology to better serve their audiences.

To support these broadcasters and more, we’re building a rich ecosystem of technology and system integration partners to help accelerate the industry’s adoption of the cloud across all parts of the media supply chain, including content creation, management, distribution and monetization.

Here’s a look at a few of the partners we’ve recently worked with, and how we’re reaching companies across the industry together:

Blackbird: Blackbird provides sports, esports, news and broadcast companies with a very fast cloud-based clipping, editing and distribution platform. This allows production personnel to quickly create clips and highlights from anywhere, for multiple devices and platforms including web, broadcast, OTT and social.

Harmonic: Harmonic’s VOS cloud-native live video platform is a SaaS-based solution that unifies the entire media processing chain, from ingest to playout, graphics, transcoding, encryption and delivery, enabling content and service providers to quickly launch broadcast-quality OTT channels and run leaner, more agile and scalable operations.

Make.TV: Specializing in live video, Make.TV simplifies signal acquisition from cameras and online sources, and routes and distributes these signals for playout to an unlimited number of social media, online and traditional broadcast endpoints. The latest joint efforts include Make.TV’s Live Video Cloud (LVC) running on Google Cloud Platform (GCP). LVC enables the active monitoring, routing, recording and management of live video at scale, addressing the time, resource and reliability issues associated with live video, specifically for live news and sports production use cases.

Wowza: Wowza Streaming Cloud, which helps deliver enterprise-grade live streaming experiences, is fully integrated with Google Cloud. Streaming Cloud will be available in the Google Cloud Marketplace, and we’re working with Wowza to integrate our Video Intelligence APIs into their services in the future.

IRIS.TV: IRIS.TV provides video personalization and contextual ad targeting solutions for media companies, allowing them to better engage, retain, and monetize audiences. IRIS.TV is integrated with Google Cloud, utilizing our machine learning capabilities to provide better recommendations for viewers.

These partner announcements are part of a growing set of solutions and services that will help content creators and distributors deliver more engaging content experiences.
To learn more, join us at IBC 2019 (in Hall 14, Booth #E01) in Amsterdam from September 13-17, where we will showcase solutions and services from 15 select partners, alongside the latest technology innovations from Google Cloud, Android TV, Widevine, YouTube, Chrome and other Google product areas.
Quelle: Google Cloud Platform