Introducing Voucher, a service to help secure the container supply chain

Kubernetes helps developers build modern software that scales, but to do so securely, they also need a software supply chain with strong governance. From managed secure base images and Container Registry vulnerability scanning to Binary Authorization, Google Cloud helps secure that pipeline, giving you the support and flexibility you need to build great software without being locked into a particular provider.

Today, we are excited to announce a great open-source addition to the secure software supply chain toolbox: Voucher. Developed by the Software Supply Chain Security team at Shopify to work with Google Cloud tools, Voucher evaluates container images created by CI/CD pipelines and signs those images if they meet certain predefined security criteria. Binary Authorization then validates these signatures at deploy time, ensuring that only explicitly authorized code that meets your organizational policy and compliance requirements can be deployed to production.

Voucher is open source from the get-go, following the Grafeas specification. The signatures it generates, or “attestations,” can be enforced by either Binary Authorization or the open-source Kritis admission controller. Out of the box, Voucher lets infrastructure engineers use Binary Authorization policies to enforce security requirements such as provenance (e.g., a signature that is only added when images are built from a secure source branch) and to block vulnerable images (e.g., require a signature that is only applied to images that don’t have any known vulnerabilities above the “medium” level). And because it’s open source, you can also easily extend Voucher to support additional security and compliance checks or integrate it with your CI/CD tool of choice.

“At Shopify, we ship more than 8,000 builds a day and maintain a registry with over 330,000 container images. We designed Voucher in collaboration with the Google Cloud team to give us a comprehensive way to validate the containers we ship to production,” said Cat Jones, Senior Infrastructure Security Engineer at Shopify. “Voucher, along with the vulnerability scanning functionality from Google’s Container Registry and Binary Authorization, gives us a way to secure our production systems using layered security policies, with minimal impact on our development velocity. We are donating Voucher to the Grafeas open-source project so more organizations can better protect their software supply chains. Together, Voucher, Grafeas and Kritis help infrastructure teams achieve better security while letting developers focus on their code.”

How Voucher simplifies a secure supply chain setup

In the past, if you wanted to gate deployments based on build or vulnerability findings, you needed to write, host, and run your own evaluation logic (steps 2a and 3a in the following process):

1. Code is pushed to a repository.
2. A continuous integration (CI) pipeline tool, such as Cloud Build, builds and tests the container.
2a. Write custom code to sign images based on their build provenance (e.g., only sign images built from the production branch).
3. The newly built container image is checked into Google Container Registry and undergoes vulnerability scanning.
3a. Write custom code to sign images based on vulnerability findings.
4. Binary Authorization verifies the image signatures as part of deployment to GKE.
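To make those signing criteria concrete, here is a minimal, hypothetical Python sketch of the kind of check-then-sign decision Voucher automates. The names and severity scale are illustrative and are not Voucher’s actual API; see the project’s GitHub repo for the real checks:

```python
from dataclasses import dataclass

# Severity scale is illustrative; real scanners report richer findings.
SEVERITY_RANK = {"NONE": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

@dataclass
class ImageMetadata:
    source_branch: str        # branch the image was built from
    max_vuln_severity: str    # worst finding from vulnerability scanning

def should_attest(meta: ImageMetadata) -> bool:
    """Sign only images built from the production branch with no known
    vulnerability above 'medium' -- mirroring the two example policies."""
    return (
        meta.source_branch == "production"
        and SEVERITY_RANK[meta.max_vuln_severity] <= SEVERITY_RANK["MEDIUM"]
    )

print(should_attest(ImageMetadata("production", "LOW")))       # True -> sign
print(should_attest(ImageMetadata("feature-x", "NONE")))       # False
print(should_attest(ImageMetadata("production", "CRITICAL")))  # False
```

In the real system, a passing check results in a signed attestation recorded against the image; Binary Authorization then requires that attestation at deploy time.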
To avoid privilege escalation, the signing steps should be hosted outside of the CI/CD pipeline, so that developers who can execute arbitrary code in a build step cannot gain access to the signing key or alter the signing logic. This puts a significant burden on DevOps teams to create and set up these kinds of signing tools. Voucher, however, automates a large portion of this setup: it comes with a pre-supplied set of security checks, and all you have to do is specify your signing policies in Binary Authorization. Once started, it automates the attestation generation.

Try it out!

We’re honored that Shopify used Google Cloud tools to power Voucher, and we’re excited that they’ve decided to share it with developers at large. If you want to try Voucher, you can find it on GitHub, or a click-to-deploy version on Google Cloud Marketplace. We’ve also created a step-by-step tutorial to help you launch Voucher on Google Cloud with Binary Authorization.
Source: Google Cloud Platform

Google Cloud AI digitizes StoryCorps archive: the largest collection of human voices on the planet

For many of us, the holiday season will look different this year, separated from the people we love. If you’re in this boat too, mitigating the spread of the coronavirus, thank you, and we hope the following story offers an alternative but helpful way to connect with friends and family. While we know virtual get-togethers can never fully match the intimacy of in-person conversations, they can keep us connected and maybe even preserve some special moments for future generations.

In this spirit, we are sharing our collaboration with StoryCorps, a national non-profit organization dedicated to preserving humanity’s stories through 1:1 interviews. Over the past 17 years, StoryCorps has recorded interviews with more than 600,000 people and sent those recordings to the U.S. Library of Congress, where they are preserved for generations to come at the American Folklife Center. It’s the largest collection of human voices on the planet, but it’s been relatively inaccessible. That’s when StoryCorps approached us to help make its rich archive of first-person history universally accessible and useful.

StoryCorps + Google Cloud AI

In 2019, StoryCorps and Google Cloud partnered to unlock this amazing archive using artificial intelligence (AI) and create an open, searchable, and accessible audio database for everyone to find and listen to first-hand perspectives from humanity’s most important moments. Diving into how this works: for an audio recording to be searchable, the audio file, and the “moments” or keywords within that file, needed to be tagged with the terms you would search for. The pipeline works like this (a code sketch of these steps appears at the end of this post):

1. First, we used the Speech-to-Text API to transcribe the audio file.
2. Then, the Natural Language API identified keywords and their salience from the transcription.
3. The transcript and keywords were loaded into an Elasticsearch index.
4. The result: a searchable transcript on the StoryCorps Archive.

Here is an example of how these Cloud AI technologies work, using an actual StoryCorps interview.

Building empathy and understanding through connection

StoryCorps’ mission is impressive. Not only is it preserving humanity’s stories, its aim is to “build connections between people and create a more just and compassionate world” by sharing those stories as widely as possible. This is where our path with StoryCorps crosses on a deeper level. Our mission for AI technology is one where everyone is accounted for, extending well beyond the training data in computer science departments. This deeper understanding could allow organizations in every sector to unlock new possibilities of what they have to offer while being inclusive, equitable, and socially beneficial. But that’s our story to figure out, and we’re working hard at it.

Whatever you decide to do this holiday season, please stay safe. In the meantime, perhaps your family would like to use the StoryCorps platform or app to connect, preserve, and share a story of your own.
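For the technically curious, here is a minimal sketch of steps 1 and 2 using the Python client libraries. The bucket path and audio settings are illustrative, and a production pipeline would batch this work and load the output into a search index:

```python
from google.cloud import language_v1, speech

# 1. Transcribe the recording (URI and audio settings are illustrative;
#    long recordings need the long-running variant of the API).
speech_client = speech.SpeechClient()
operation = speech_client.long_running_recognize(
    config=speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    ),
    audio=speech.RecognitionAudio(uri="gs://my-bucket/interview.wav"),
)
response = operation.result(timeout=3600)
transcript = " ".join(
    result.alternatives[0].transcript for result in response.results
)

# 2. Extract entities and their salience from the transcript.
language_client = language_v1.LanguageServiceClient()
entities = language_client.analyze_entities(
    document=language_v1.Document(
        content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT
    )
).entities
keywords = sorted(
    ((e.name, e.salience) for e in entities), key=lambda kv: -kv[1]
)

# 3. `transcript` and `keywords` would then be loaded into a search
#    index (StoryCorps used Elasticsearch).
print(keywords[:10])
```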
Source: Google Cloud Platform

How we're advancing intelligent automation in network security

We’re always looking to make advanced security easier for enterprises so they can stay focused on their core business. Already this year, we’ve worked to strengthen DDoS protection, talked about some of the largest attacks we have stopped, and made firewall defenses more effective. We continue to push our pace of security innovation, and today we’re announcing enhancements to existing protections, as well as new capabilities to help customers protect their users, data, and applications in the cloud.

1. Using machine learning to detect and block DDoS attacks with Adaptive Protection

We recently talked about how our infrastructure absorbed a 2.54 Tbps DDoS attack, the culmination of a six-month campaign that utilized multiple methods of attack. Despite simultaneously targeting thousands of our IPs, presumably in hopes of slipping past automated defenses, the attack had no impact.

We recognize the scale of potential DDoS attacks can be daunting. By deploying Google Cloud Armor integrated into our Cloud Load Balancing service, which can scale to absorb massive DDoS attacks, you can protect services deployed in Google Cloud, other clouds, or on-premises from attacks. Cloud Armor, our DDoS and WAF-as-a-service, is built using the same technology and infrastructure that powers Google services.

Today, we are excited to announce Cloud Armor Adaptive Protection, a unique technology that leverages years of experience using machine learning to solve security challenges, plus deep experience protecting our own user properties against Layer 7 DDoS attacks. Within Adaptive Protection, we use multiple machine learning models to analyze security signals for each web service and detect potential attacks against web apps and services. The system can detect high-volume, application-layer DDoS attacks and dramatically accelerate time to mitigation. For example, attackers frequently direct a high volume of requests at dynamic pages, like search results or reports, in order to exhaust the server resources needed to generate the page.

When enabled, Adaptive Protection learns from a large number of factors and attributes about the traffic arriving at your services, so it knows what “normal” looks like. It generates an alert if we believe there is a potential attack, taking into account all of the relevant context for your workload. In other words, where traditional threshold-based detection mechanisms could generate a great many lower-confidence alerts requiring investigation and triage, and only once an attack has accelerated past the detection threshold, Adaptive Protection produces high-confidence signals about a potential attack much earlier, while the attack is still ramping up.

Adaptive Protection doesn’t just surface the attack; it provides context on why the system judged the traffic malicious, along with a rule to mitigate the attack. This protection is woven into our cloud fabric and alerts the operator only for more serious issues, with context, an attack signature, and a Cloud Armor rule that they can then deploy in preview or blocking mode. Rather than spending hours analyzing traffic logs to triage an ongoing attack, application owners and incident responders will have all the context they need to decide whether and how to stop the potentially malicious traffic. Cloud Armor Adaptive Protection is going to simplify protection in a big way, and will be rolling out in public preview soon.

Adaptive Protection suggested rule
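To illustrate the contrast between a fixed threshold and a learned baseline, here is a toy Python sketch. It is emphatically not Google’s production models, just the core idea of flagging traffic that deviates from learned “normal” while an attack is still ramping:

```python
from collections import deque
from statistics import mean, stdev

class BaselineDetector:
    """Toy baseline-based detection vs. a fixed threshold -- an
    illustration only, NOT Cloud Armor's actual ML models."""

    def __init__(self, window=60, sigmas=4.0):
        self.history = deque(maxlen=window)   # recent requests/sec samples
        self.sigmas = sigmas

    def observe(self, rps: float) -> bool:
        suspicious = False
        if len(self.history) >= 10:
            mu, sd = mean(self.history), stdev(self.history)
            # Flag early: traffic far outside the learned "normal" band,
            # even while still below any fixed absolute threshold.
            suspicious = rps > mu + self.sigmas * max(sd, 1.0)
        self.history.append(rps)
        return suspicious

detector = BaselineDetector()
FIXED_THRESHOLD = 10_000  # req/s, a typical static alerting rule

for t, rps in enumerate([100] * 30 + [900, 2500, 6000]):  # attack ramps up
    if detector.observe(rps):
        print(f"t={t}: anomaly at {rps} rps "
              f"(a fixed threshold would wait for {FIXED_THRESHOLD})")
```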
2. Better firewall rule management with Firewall Insights

We have been making a number of investments in our network firewall to provide insights and simplify control, allowing easier management of more complex environments. Firewall Insights helps you optimize your firewall configurations with a number of detection capabilities, including shadowed-rule detection, which identifies firewall rules that have been accidentally shadowed by conflicting rules with higher priorities. In other words, you can automatically detect rules that can’t be reached during firewall rule evaluation because overlapping rules with higher priorities match first. This helps detect redundant firewall rules, open ports, and IP ranges, and helps operators tighten the security boundary. It will also surface to admins sudden increases in hits on firewall rules, so they can drill down to the source of the traffic and catch an emerging attack.

Within Firewall Insights you’ll also see metrics reports showing how often your firewall rules are active, including the last time they were hit. This allows security admins to verify that firewall rules are being used in the intended way, ensuring that firewall rules allow or block their intended connections. These insights operate at massive volume and help remove human error from firewall rule configuration, or simply highlight rules that are no longer needed as an environment changes over time. Firewall Insights will be generally available soon.

Firewall Insights
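Conceptually, shadowed-rule detection looks something like the following sketch, which flags a rule when an earlier-evaluated rule matches a superset of its source range. (Real analysis must also account for protocols, ports, and combinations of several rules; this is an illustration, not the Firewall Insights implementation.)

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    priority: int   # lower number = evaluated first, as in VPC firewalls
    cidr: str       # source range the rule matches
    action: str     # "allow" or "deny" (unused here; real analysis uses it)

def shadowed(rules):
    """Pairwise check: a rule is shadowed if an earlier-evaluated rule
    matches a superset of its traffic."""
    hits = []
    by_priority = sorted(rules, key=lambda r: r.priority)
    for i, low in enumerate(by_priority):
        net_low = ipaddress.ip_network(low.cidr)
        for high in by_priority[:i]:
            if net_low.subnet_of(ipaddress.ip_network(high.cidr)):
                hits.append((low.name, high.name))
                break
    return hits

rules = [
    Rule("deny-all-internal", 100, "10.0.0.0/8", "deny"),
    Rule("allow-app-subnet", 200, "10.10.0.0/16", "allow"),  # never reached
]
print(shadowed(rules))  # [('allow-app-subnet', 'deny-all-internal')]
```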
3. Flexible and scalable controls with Hierarchical Firewall Policies

Firewalls are an integral part of almost any IT security plan. With our native, fully distributed firewall technology, Google Cloud aims to provide the highest performance and scalability for all your enterprise workloads. Google Cloud’s hierarchical firewall policies provide new, flexible levels of control, so that you can benefit from centralized control at the organization and folder level while safely delegating more granular control within a project to the project owner.

Hierarchical firewall policies provide a means to enforce firewall rules at the organization and folder levels of the GCP resource hierarchy, in addition to firewall rules at the VPC level. This allows security administrators at different levels in the hierarchy to define and deploy consistent firewall rules across a number of projects, so that the rules apply to all VMs in currently existing and yet-to-be-created projects. Since leveraging hierarchical firewall policies requires fewer firewall rules, managing multiple environments becomes simpler and more effective. Further, being able to manage the most critical firewall rules in one place can free project-level administrators from having to keep up with changing organization-wide policies. Hierarchical firewall policies will be generally available soon.

Hierarchical firewall policies

4. New controls for Packet Mirroring

Google Cloud Packet Mirroring allows you to mirror network traffic from your existing Virtual Private Clouds (VPCs) to third-party network inspection services. With this service, you can use those third-party tools to collect and inspect network traffic at scale, providing intrusion detection, application performance monitoring, and better security visibility, helping you with the security and compliance of workloads running in Compute Engine and Google Kubernetes Engine (GKE). We are adding new filters for mirrored packets, which will be generally available soon. With traffic direction control, you can now mirror either ingress or egress traffic, helping you better manage traffic volume and reduce costs.

Traffic Direction: New Ingress & Egress controls for Packet Mirroring

With these enhancements, we are helping Google Cloud customers stay safe when using our network security products. For a hands-on experience with our network security portfolio, you can enroll in our network security labs here. You can also learn more about Google Cloud security in the latest installment of Google Cloud Security Talks, live today.
Source: Google Cloud Platform

The need for speed: Using C2 machines for your HPC workloads

Cloud opens many new possibilities for High Performance Computing (HPC). But while the cloud offers the latest technologies and a wide variety of machine types (VMs), not every VM is suited to the demands of HPC workloads. Google Cloud’s Compute-optimized (C2) machines are specifically designed to meet the needs of the most compute-intensive workloads, such as HPC applications in fields like scientific computing, Computer-aided Engineering (CAE), biosciences, and Electronic Design Automation (EDA), among many others.

The C2 is based on the second-generation Intel® Xeon® Scalable Processor and provides up to 60 virtual cores (vCPUs) and 240GB of system memory. C2s can run at a sustained frequency of 3.8GHz and offer more than a 40% improvement over previous-generation VMs for general applications. Compared to previous-generation VMs, total memory bandwidth improves by 1.21X and memory bandwidth per vCPU improves by 1.94X. [1] Here we take a deeper look at using C2 VMs for your HPC workloads on Google Cloud.

Resource isolation

Tightly coupled HPC workloads rely on resource isolation for predictable performance. C2 is built for isolation and consistent mapping of shared physical resources (e.g., CPU caches and memory bandwidth). The result is reduced variability and more consistent performance. C2 also exposes and enables explicit user control of CPU power states (“C-states”) on larger VM sizes, enabling higher effective frequencies and performance.

NUMA nodes

In addition to hardware improvements, Google Cloud has enabled a number of HPC-specific optimizations on C2 instances. In many cases, tightly coupled HPC applications require careful mapping of processes or threads to physical cores, along with care to ensure processes access the memory closest to their physical cores. C2s provide explicit visibility and control of NUMA domains to the guest operating system (OS), enabling maximum performance.
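As a small illustration of that kind of process placement on Linux, the sketch below pins the current process to one set of cores. The core IDs are hypothetical; in practice you would query the actual topology first, and MPI launchers such as Intel MPI handle pinning for you:

```python
import os

# Pin the current process to the cores of a single NUMA node so its
# memory traffic stays node-local. Core IDs below are hypothetical;
# check the real layout first (e.g. lscpu or /sys/devices/system/node).
NODE0_CORES = set(range(0, 15))

os.sched_setaffinity(0, NODE0_CORES)          # 0 = the calling process
print("pinned to cores:", sorted(os.sched_getaffinity(0)))
```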
AVX-512 support

Second-generation Xeon processors support Intel Advanced Vector Extensions 512 (Intel AVX-512) for data parallelism. AVX-512 instructions are SIMD (Single Instruction, Multiple Data) instructions: with the additional, wider 512-bit registers, a single instruction can operate on 16 single-precision (or 8 double-precision) floating-point values at once, and with fused multiply-add, cores with two AVX-512 FMA units can sustain up to 64 single-precision floating-point operations per clock cycle. This means that more can be done in every clock cycle, reducing overall execution time. The latest generation of AVX-512 instructions in the 2nd generation Xeon processor includes DL Boost instructions that significantly improve performance for AI inferencing by combining three INT8 instructions into one, thereby maximizing the use of compute resources, utilizing the cache better, and avoiding potential bandwidth bottlenecks.

Low latency

HPC workloads often scale out to multiple nodes in order to accelerate time to completion. Google Cloud has enabled Compact Placement Policy on the C2, which allocates up to 1,320 vCPUs placed in close physical proximity, minimizing cross-node latencies. Compact placements, in conjunction with the Intel MPI library, optimize multi-node scalability of HPC applications. You can learn more about best practices for ensuring low latency on multi-node workloads here.

Development tools

Along with the hardware optimizations, Intel offers a comprehensive suite of development tools (including performance libraries, Intel compilers, and performance monitoring and tuning tools) to make it simpler to build and modernize code with the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization. Learn more about Intel’s Parallel Studio XE here.

Bringing it all together

Combining all the improvements in hardware with the optimizations in the Google Cloud stack, C2 VMs perform up to 2.10X better than previous-generation N1 VMs for HPC workloads, for roughly the same size VM. [2]

In many cases HPC applications can scale up to the full node. A single C2 node (60 vCPUs and 240GB) offers up to 2.49X better performance/price compared to a single N1 node (96 vCPUs and 360GB). [3]

C2s are offered in predefined shapes intended to deliver the most appropriate vCPU and memory configurations for typical HPC workloads. In some cases, it is possible to further optimize performance or performance/price via a custom VM shape. For example, if a certain workload is known to require less than the default 240GB of a C2 standard 60-vCPU VM, a custom N2 machine with less memory can deliver roughly the same performance at a lower cost. We were able to achieve up to 1.09X better performance/price by tuning the VM shape to the needs of several common HPC workloads. [4]

Get started today

As more HPC workloads start to benefit from the agility and flexibility of cloud, Google Cloud and Intel are joining forces to create solutions optimized for the specific needs of these workloads. With the latest optimizations in Intel 2nd generation Xeon processors and Google Cloud, C2 VMs deliver the best solution for running HPC applications in Google Cloud, while giving you the freedom to build and evolve around your unique business needs. Many of our customers with a need for high performance have moved their workloads to C2 VMs and confirmed our expectations.

To learn more about C2 and the second generation of the Intel Xeon Scalable Processor, contact your sales representative or reach out to us here. And if you’re participating in SC20 this week, be sure to check out our virtual booth, where you can watch sessions, access resources, and chat with our HPC experts.

1. Based on internal analysis of our c2-standard-60 and n1-standard-96 machine types, using the STREAM Triad Best Rate benchmark.
2. Based on internal analysis of our c2-standard-60 and n1-standard-96 machine types, using our Weather Research and Forecasting (WRF) benchmark.
3. Based on the High Performance Conjugate Gradients (HPCG) benchmark, analyzing Google Cloud VM instance pricing for c2-standard-60 ($3.1321/hour) and n1-standard-96 ($4.559976/hour) as of 10/15/2020.
4. Based on GROMACS and NAMD benchmarks, analyzing Google Cloud VM instance pricing for n2-custom-80 with 160GB ($3.36528/hour) and c2-standard-60 ($3.1321/hour) as of 10/15/2020.
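As a quick sanity check of the arithmetic behind footnote 3, assuming “performance/price” means benchmark score divided by hourly price, the published numbers imply roughly a 1.7X raw HPCG advantage on top of C2’s lower price:

```python
c2_price = 3.1321       # c2-standard-60, $/hour (footnote 3)
n1_price = 4.559976     # n1-standard-96, $/hour (footnote 3)

price_ratio = n1_price / c2_price    # C2 is ~1.46x cheaper per hour
implied_perf = 2.49 / price_ratio    # from the 2.49X perf/price claim
print(f"{price_ratio:.2f}x price advantage -> "
      f"{implied_perf:.2f}x implied raw HPCG performance")
```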
Source: Google Cloud Platform

Empowering customers and the ecosystem with an open cloud

Every organization that moves to the cloud has a unique journey driven by many factors, including evolving operating environments and regulatory requirements. For all organizations, including those experiencing growth across regions and dynamic market circumstances, we recommend an open cloud approach, one that ensures operational and technical consistency across public clouds or private data centers, and effective management of infrastructure, applications, and data across the organization.

We believe that an open cloud can meet the needs of diverse companies, providing choice, flexibility, and openness. Our open cloud philosophy is grounded in the belief that customers need autonomy and control over their infrastructure. Giving customers options to build, migrate, and deploy their applications across multiple environments, both in the cloud and on-premises, allows them to avoid vendor lock-in and innovate across environments faster. We are proud of our leadership in advancing an open cloud, and this commitment underpins and drives our contributions to the open source and open data communities, as well as our approach to building technology solutions.

Advancing computing through open source

Open source plays a critical role in an open cloud. Many companies have mission-critical workloads or sensitive data with “survivability requirements” in the event that a provider is forced to suspend or terminate cloud services due to country or region policy changes. To move workloads to other clouds, it’s important to develop them using open source and open standards. At Google Cloud, we don’t think it’s possible to fully address survivability requirements with a proprietary solution. Instead, solutions based on open source tools and open standards are the route to addressing customer and policymaker concerns. More importantly, open source gives customers the flexibility to deploy, and, if necessary, migrate, critical workloads across or off public cloud platforms.

Google has a long history of sharing technology through open source, from projects like Kubernetes, now the industry standard for container portability and interoperability in the cloud, to TensorFlow, a platform to help everyone develop and train machine learning models. As Google’s Chief Economist Hal Varian said, “Open data and open source are good not only for us and our industry, but also benefit the world at large.” Our belief in customer choice is fundamental to how we develop our technology, and rooted in leveraging open source APIs and interoperable solutions. In addition, we partner with the leading organizations in the fields of data management and analytics to build products that combine the benefits of open source with managed cloud solutions.

Another way we provide flexibility is through hybrid and multi-cloud environments. Anthos, our hybrid and multi-cloud platform, is built on open technologies like Kubernetes, Istio, and Knative, enabling an ecosystem that fosters competition and unlocks new partnerships. In this spirit, last week OVHcloud and Google Cloud announced a strategic partnership to jointly build a trusted cloud solution in Europe.
This partnership will focus on delivering the best of Google Cloud technology innovation and value in the most agile way, and will help European customers accelerate their business transformation in the cloud while addressing their strict data security and privacy requirements.

Break down data silos and uncover new insights with public datasets

Customers rely on Google Cloud to get better insights from their data. Our data analytics solutions, such as BigQuery, help them harness the potential of that data. One of our newest analytics solutions, BigQuery Omni, allows customers to cost-effectively access and securely analyze data across multi-cloud environments. Those tools enable customers to make their own data more open, both in and out of their organization. As we help enable data accessibility and portability, our highest priority is to do so securely and responsibly.

At the same time, through the Google Cloud Public Datasets program, we work with data providers to host 100+ high-demand public datasets, allowing customers and the research community to discover unique insights for solving real business and societal problems. For example, earlier this year, we added critical COVID-19 public datasets to support the global response to the novel coronavirus.

We also share our discoveries and tools with the community to help everyone share data safely and in a manner that advances the important work of researchers, developers, and journalists. Teams at Google have released over 80 open datasets through our research site, and share other aggregated, anonymized product insights. Take, for example, YouTube-8M, a large-scale, labeled video dataset used by researchers to further computer vision and video understanding. In addition, with more than 31 million datasets, Dataset Search allows anyone to discover and filter relevant datasets according to usage rights, formats, and other key parameters. And our Kaggle community of nearly 5 million users hosts 50,000 public datasets and 400,000 public notebooks to support machine learning and artificial intelligence research.

Paving the way to an open cloud through continued collaboration

Google Cloud will continue to build towards an open cloud and work with partners and policymakers to support our customers, open-source communities, and society at large. We are excited to see the important work organizations achieve through openness, and remain committed to supporting them through our continued contributions to open source and open data.
Source: Google Cloud Platform

Helping media companies create consumer streaming formats with Transcoder API

Media and entertainment companies across the world are in the midst of a transformational shift to direct-to-consumer (D2C) streaming experiences. With audiences sheltering in place, this shift accelerated in 2020 as audiences adopted new streaming services more readily than ever before. With more content being streamed in higher resolutions and operations becoming more distributed, media companies today require a cost-efficient, high-speed, and scalable way to process and deliver video content to an ever-increasing number of end devices and platforms.

Google Cloud is committed to building industry products and solutions that help media companies simplify operations and distribute their content to the widest possible audience. We’re building a set of video-focused APIs that empower developers to easily build and deploy flexible, high-quality video experiences. Today we’re announcing the first of these products: the preview availability of the Transcoder API.

The Transcoder API is an easy-to-use API for creating consumer streaming formats, including MPEG-4 (MP4), Dynamic Adaptive Streaming over HTTP (DASH, also known as MPEG-DASH), and HTTP Live Streaming (HLS). Many D2C streaming platforms use a multi-codec strategy, and the Transcoder API supports popular codecs, including H.264, VP9, and HEVC. This strategy allows providers to offer a better, high-definition experience to more viewers. The API also supports full partitioning for fast encoding of large video files, meaning that entire hours-long movies can be prepared in minutes.

Developers can get started quickly by submitting transcoding jobs through the REST API, transcoding files in Google Cloud Storage, and using Google Cloud CDN or third-party CDNs to effectively distribute content to audiences across the globe. To learn more about the Transcoder API, please visit the documentation and pricing pages.
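For a taste of what job submission looks like, here is a hedged Python sketch against the REST endpoint. The project ID, bucket paths, and token are placeholders, and the exact resource schema may differ between API versions, so treat the job body as illustrative and check the reference docs:

```python
import requests

PROJECT = "my-project"   # hypothetical project ID
LOCATION = "us-central1"
TOKEN = "..."            # e.g. from `gcloud auth print-access-token`

# Field names follow the Transcoder API docs at preview time (v1beta1);
# newer releases use v1, so verify against the current reference.
job = {
    "inputUri": "gs://my-bucket/source.mp4",
    "outputUri": "gs://my-bucket/outputs/",
    "templateId": "preset/web-hd",   # built-in adaptive-streaming preset
}

resp = requests.post(
    f"https://transcoder.googleapis.com/v1beta1/projects/{PROJECT}"
    f"/locations/{LOCATION}/jobs",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job,
)
resp.raise_for_status()
print(resp.json()["name"])  # job resource name; poll it for state
```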
Source: Google Cloud Platform

A developer’s guide to Google Kubernetes Engine, or GKE

When people think about whether or not to deploy on a container management platform like Kubernetes, the decision often comes down to its operational benefits: better resource efficiency, higher scalability, advanced resiliency, security, and so on. But Kubernetes is also beneficial to the software development side of the house. Whether it’s improved portability of your code or better productivity, Kubernetes is a win for developers, not just operators.

For one thing, as we argued in Re-architecting to cloud native: an evolutionary approach to increasing developer productivity at scale, Kubernetes makes it easier to adopt modern cloud-native software development patterns like microservices, which can give you:

- Increased developer productivity, even as you increase your team sizes.
- Faster time-to-market: add new features and fix defects more quickly.
- Higher availability: increase the uptime of your software, reduce the rate of deployment failures, and reduce time-to-restore in the event of incidents.
- Improved security: reduce the attack surface area of your applications, and make it easier to detect and respond rapidly to attacks and newly discovered vulnerabilities.
- Better scalability: cloud-native platforms and applications make it easy to scale horizontally where necessary, and to scale down too.
- Reduced costs: a streamlined software delivery process reduces the costs of delivering new features, and effective use of cloud platforms substantially reduces the operating costs of your services.

Google of course invented Kubernetes, which Google Cloud offers as the fully managed service Google Kubernetes Engine (GKE). But did you know that Google Cloud also offers a full complement of developer tools that are tightly integrated with GKE? Today, in honor of KubeCon, we’re revisiting a few blogs that will show you how to develop apps destined for GKE, how to deploy them safely and efficiently, and how to monitor and debug them once they’re in production.

Developing for GKE: It all starts with you

Even the most enterprise-y applications get their start in life on a developer’s laptop. The same goes for applications running on GKE. To make that possible, there’s a variety of tools you can use to integrate your local development environment with GKE. Developers are known for tricking out their laptops with lots of compute resources; using Minikube, you can take advantage of GPUs, for example.

There are also local development tools to help you containerize Java apps: Jib and Skaffold. Jib containerizes your Java apps without requiring you to install Docker, run a Docker daemon, or even write a Dockerfile, and is available as a plugin for Maven or Gradle. You can then use Skaffold to deploy those containerized Java apps to a Kubernetes cluster whenever it detects a change; Skaffold can even inject a new version of a file into a running container. Read about this in depth at Livin’ la vida local: Easier Kubernetes development from your laptop.

Another popular tool among GKE developers is Cloud Code, which provides plugins for the popular Visual Studio Code and IntelliJ integrated development environments (IDEs) to simplify developing for GKE. For example, we recently updated Cloud Code with much more robust support for Kubernetes YAML and Custom Resource Definitions (CRDs). Read more at Cloud Code makes YAML easy for hundreds of popular Kubernetes CRDs. Have a quick and dirty development task to do?
Check out Cloud Shell Editor, which launches a full-featured, but self-contained, container development environment in your browser. Read more at New Cloud Shell Editor: Get your first cloud-native app running in minutes.

Get in the (pipe)line

Eventually, you’ll be ready to push the apps you developed on your laptop to production. Along the way, you’ll probably want to make sure that the code has been properly tested and that it passes requisite security and compliance checks. Google Cloud offers a variety of tools to help you push that code through that pipeline.

Setting up an automated deployment pipeline to GKE doesn’t have to be hard. In Create deployment pipelines for your GKE workloads in a few clicks, learn how to use Cloud Build to create a pipeline from scratch, including selecting your source, build configuration, and Kubernetes YAML files. But before you do, make sure that the image that you’re deploying is secure. Binary Authorization provides a policy enforcement chokepoint to ensure only signed and authorized images are deployed in your environment; you can read more about it in Deploy only what you trust: introducing Binary Authorization for GKE. Even better, Artifact Registry has built-in vulnerability scanning: once enabled, all container images built using Cloud Build are automatically scanned for OS package vulnerabilities as they’re pushed to Artifact Registry. Read more at Turbocharge your software supply chain with Artifact Registry.

Monitor, Debug, Repeat: Remote development for GKE apps

Now that your app is in production on a GKE cluster, your work is done, right? Wrong. For developers, getting an app to production is still just the beginning of the software lifecycle. Chances are, you have ideas about how to improve your app, and you’ll definitely want to monitor it for signs of trouble. GKE is tightly integrated with several monitoring, debugging, and performance management tools that can help ensure the health of your GKE app, so you can make it even better.

When there’s a problem in your production environment, one of the first places you’ll want to look is your logs. You can do that with Cloud Logging and Cloud Monitoring, both enabled by default when you create a GKE cluster. To learn more about how to use Cloud Logging for GKE logs, including use cases and best practices, check out Using logging for your apps running on Kubernetes Engine. Once you’ve found the culprit, find out how you can use Cloud Logging and Cloud Monitoring to debug your applications.
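As a small example of what that looks like programmatically, this sketch uses the google-cloud-logging Python client to pull recent high-severity entries from GKE containers. The cluster name and filter values are illustrative; adjust them to your environment:

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()

# Standard Cloud Logging filter syntax; values here are illustrative.
log_filter = (
    'resource.type="k8s_container" '
    'resource.labels.cluster_name="my-cluster" '
    'severity>=ERROR'
)

# Newest first; peek at the 20 most recent matches.
entries = client.list_entries(
    filter_=log_filter, order_by=cloud_logging.DESCENDING
)
for i, entry in enumerate(entries):
    print(entry.timestamp, entry.severity, entry.payload)
    if i >= 19:
        break
```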
We’re developers too

As long-standing leaders of the open source community, including the Cloud Native Computing Foundation (CNCF) and the Open Container Initiative (OCI), we’re always thinking about how industry developments impact your day-to-day as a GKE developer. For example, Docker’s recent announcement of new rate limits on image pulls prompted us to write this post on how to manage these restrictions in a GKE environment.

In addition to making GKE the most scalable and robust container management platform, we’re deeply committed to making it the easiest to use and develop on. New to Kubernetes and GKE? Learn more with this free, hands-on training. And if you’re participating in KubeCon this week, be sure to stop by our (virtual) booth to meet an expert.
Source: Google Cloud Platform

Four ways to generate value from your APIs

It’s been more than 20 years since Jerry Maguire hit movie theaters in 1996, and yet one line from the movie still resonates like no other. The constant banter between the agent, Jerry Maguire (Tom Cruise), and budding professional football player Rod Tidwell (Cuba Gooding Jr.) is fun, if not bombastic, with one central theme that Tidwell not only enthusiastically expresses but also insists Maguire express as well:

“Show me the money.”

Tidwell’s passion for football was far outweighed by his passion to become rich; indeed, one of the movie’s themes is that only once he got past playing for the money could he achieve the level of play required to drive real value to his team, his team’s owners, and ultimately himself.

What does this have to do with API management? Digital product owners tend to follow a similar path: the knee-jerk reaction is that the easiest way to drive value from APIs is to charge for them. While this may be the easiest thing to do, more often than not, API value is best extracted by other, indirect means. By creating a tripartite value exchange, a proposition that satisfies end users, partner developers, and the company publishing the APIs, a great amount of untapped value can be mined. And just like the football player, the agent, and the team, putting some heart into the game can make all three of them winners.

How to start deriving value from APIs

Behold: here are the four best practices for deriving value from APIs.

1. Extend channel reach

Suppose your application is great but targeted toward a specific set of users. What if there’s a set of users, perhaps even an entire channel, that cannot use it? Perhaps your application doesn’t integrate well with other corporate systems, or perhaps it isn’t available in certain markets, doesn’t accept certain currencies, or can’t support certain business models (such as pre-paid or post-paid). Creating an API product, that is, an API designed for developer consumption and productivity, not just integration between systems, is the single best way to make your application flexible enough that its functionality can be adopted into channels your current go-to-market approach isn’t addressing.

An excellent example of this is the Walgreens Photo Prints API. As photos have moved from digital cameras to mobile phones, a cadre of third-party photo applications has cropped up…yes, pun intended. These applications took great pictures and featured wonderful effects but offered no easy way to print the photos. By leveraging the API and the connection it facilitated to Walgreens photo printing facilities in stores nationwide, these apps can now use Walgreens stores as a venue for photo printing. This has enabled customers to quickly get prints of their favorite photos, helped developers build richer apps, and let Walgreens photo services go well beyond the store, embedding a presence in a multitude of apps and handsets they wouldn’t have addressed without a productized API. Walgreens has turned its developer ecosystem into channel partners and now offers them much more than just photo printing services. In this model, the API product is offered for free, as there is an obvious value proposition for all three parties: API publisher, developer, and user. (Learn more about how Walgreens uses Apigee.)

2. Consider brand awareness and promotion

Your application is lost in a sea of hundreds of other similar apps in an app store. What now?
One way of driving awareness is to extend your brand reach and footprint via an API, then reward users or developers for sign-ups and usage, in order to proliferate the application to new surface areas, experiences, and form factors. Streaming services, for example, generally have an incentive to make their streaming players easy to integrate across a wide variety of devices, form factors, and digital experiences. This can in turn create an incentive among device-makers and app-makers to integrate the service, creating a potentially exponential increase in the value proposition for the end consumer and the developers integrating the streaming player API. And when this variety of viewing options for the user meets quality content from the service, the result can be a self-reinforcing cycle of more subscribers and increased reach across more consumer touchpoints. Developers, device-makers, and the service publishing the API create ways to make money, and end users get a steadily improving and flexible service.

Similarly, Brazilian retailer Magalu (formerly Magazine Luiza) leveraged APIs to achieve, as CTO Andre Fatala put it in a 2019 blog post, a “newfound ease and speed of spinning up new services and customer experiences and adjusting existing ones,” which let “everyone … work in small teams of five or six people that take care of segments of an application, whether it’s online checkout, physical store checkout, or order management.” The approach means Magalu “work[s] much more like a software company than a retail company now,” he said.

With this new agility, the company has expanded its e-commerce strategy to third-party sellers and created a digital marketplace that lets merchants easily join the ecosystem via Magalu’s API platform. Whereas the company’s old legacy sales and distribution systems supported only 50,000 SKUs, the marketplace supports thousands of sellers and millions of SKUs, significantly expanding the brand’s reach.

3. Enable customization to create new value propositions

In a bid to create an ecosystem around its banking products, ABN Amro partnered with telecommunications company KPN, smart homes expert 50five, and Energizer, and began leveraging its first-party payments app Tikkie, the Olisto IoT device-triggering platform, and the Google Nest API, to create an entirely new value proposition around smart home solutions. Thanks to this collaboration, when the Nest Protect smart smoke alarm runs low on batteries, it can order replacements automatically, performing the payment through Tikkie and triggering the batteries to be delivered directly to the owner’s home. This capability significantly reduces the risk of a smoke alarm not working due to battery failure, and with Google currently rolling out new Nest initiatives to replace the program under which ABN Amro’s solution was created, we’re excited to see what novel and convenient user value propositions our APIs enable in the future.

These API partnerships let ABN Amro position its offering as a flexible platform, able to generate network effects to aggregate demand. As new value propositions are created in these many ecosystems, ABN Amro will already be integrated and available to meet the needs of the ecosystem.
The company’s flexible, customizable platform will offer the path of least resistance for future, similar API partnerships. In this model, the API product is offered for free, as there is an obvious value proposition for all three parties: the API publisher (the bank) is a more desirable place to keep money, and do business with, because of its flexibility; the developer drives value and differentiation to their product by alleviating a significant consumer pain point; and the end user benefits from having a functioning smoke detector.

4. Enable access to rare and valuable competencies

There are situations in which the best way to generate value from an API is to charge for it. If the API product’s substitution risk is low, that is, if your value proposition is rare and valuable and is not being competed against by indirect competitors, there is an opportunity to generate direct value from the API by charging for access. A great example of this is telecommunications infrastructure APIs, such as telematics APIs for in-vehicle connectivity. Though other telecommunications providers are likely competing, the threat of non-telecom providers offering in-vehicle applications and connectivity, i.e., the product substitution risk, is very low. As a result, telecoms can charge for those API products, as they are rare and valuable.

Reducing API value generation to simple revenue generation or cost savings frequently misses the key sources of untapped value available when targeting developers with an API program. This ability to inspire developers, not unlike Gooding’s Rod Tidwell character rediscovering the drive that inspires people to watch football, is the bedrock of the internet economy, the “demand aggregation” model that enables ecosystem effects and significant value generation. When considering an API productization approach, make sure to have all of these in mind as you map your path to success. To learn more, check out our ebook on API productization.
Source: Google Cloud Platform

The 10 most popular sessions from Google Cloud Next ‘20: OnAir

Google Cloud Next ‘20: OnAir looked a little different this year. Instead of a three-day conference, we launched a nine-week digital event series that brought together our global cloud community to discuss and collaborate on the most significant cloud technology challenges facing companies today. We may have gone virtual, but some things stayed the same: all of our sessions (over 200, to be exact) are now available on YouTube to watch and learn from. That’s a lot of content to sift through, so here’s a quick breakdown of our top 10 sessions from Next OnAir.

1. Supercharge Productivity With No-Code Apps Using AppSheet

The road to building new applications in the digital age comes with a tough choice for business and technology leaders: buy or build. Packaged software is often too rigid to meet unique requirements, and building custom apps takes up too much time and resources. But is there a third option? In this session, Santiago Uribe Montoya, Google Senior Product Manager, and Richard Glass, Director of Information Technology at KLB Construction, discuss how AppSheet makes it possible to automate processes while leveraging existing Google Workspace data to build mobile and desktop apps, without coding.

2. How PwC Migrated 275,000+ Users to Google Workspace

Ever wondered what it’s like to migrate to Google Workspace (formerly G Suite)? What about migrating over 275,000 users at a 150-year-old professional services firm operating in over 158 countries? PwC Global Change Director Adrienne Schutte, along with Google Technical Account Manager Regina Houston, shares the challenges, key lessons, management strategies, and long-term impact of PwC’s journey with Google Workspace.

3. The Future of Meetings in Google Workspace: Vision and Roadmap

The new normal is here, and so is the new work normal. In this session, Smita Hashim, Director of Product Management for Meeting Solutions and Time Management, and Greg Funk, Group Product Manager, share Google Workspace’s vision for the future of meetings as teams navigate an increasingly video-first world. You’ll also get a sneak peek into how Google Workspace is transforming the lifecycle of a meeting and reimagining teamwork so that people can stay connected no matter where they are working from.

4. Do it live! Fitbit’s Zero-Downtime Migration to GCP

Moving a monolith without downtime is impossible, right? Think again. In this session, Fitbit Principal Software Engineer Sean Michael-Lewis explains how Fitbit migrated its production operations from managed hosting to Google Cloud Platform without impacting its real users. You’ll learn what made Fitbit’s migration challenge unique, how they created a user-centric migration plan, the technology and processes they used, and the key takeaways that have provided a foundation for their new multi-region architecture.

5. What’s New in BigQuery, Google Cloud’s Modern Data Warehouse

Data is at the heart of many business transformations today. Organizations want to make real-time decisions and future predictions that keep them competitive, but traditional data warehousing wasn’t designed to scale fast or handle emerging data processing patterns. In this session, Sudhir Hasbe, Google Cloud Director of Product for Data Analytics, and Tino Tereshko, Google Product Manager, talk about how Google BigQuery addresses the needs of data-driven enterprises and share demos of the latest feature innovations.
6. Communication in Google Workspace: The Future of Gmail, Chat, Meet, and More

How is communication changing as work goes remote and becomes more flexible? See how new improvements to Gmail, Chat, and Meet are making it easier for modern workers to communicate and collaborate wherever they are working from, on the web or from an Android or iOS device. In this session, Tom Holman, Google Senior Product Manager, and Dave Loxton, Google Product Manager, share the latest updates, what’s up next, and why Google is more excited than ever about the future of these products in Google Workspace.

7. Building Data Lakes on Google Cloud

Traditional approaches to building data lakes often land organizations with data swamps. In this session, Google Product Manager Nitin Motgi discusses how Google Cloud makes it easy for enterprises to create and maintain data lakes, allowing customers to aggregate their data and analyze it using cloud-native and open source tools. He also shares the most common use cases for how companies use data lakes on Google Cloud.

8. Data Catalog for Data Discovery and Metadata Management

Wouldn’t it be great to be able to easily search through your enterprise data assets with the same search technology that powers Gmail and Drive? Many enterprises struggle with data discovery and metadata management across disparate systems and silos. Shekhar Bapat, Google Product Manager, discusses how Data Catalog helps accelerate time to insight by providing discoverability, context, and governance for all of your data assets. He is joined by Shruti Thaker, Head of Alt Vendor Data & Alpha Capture at BlackRock, who shares how Data Catalog helped BlackRock create an effective metadata solution for its data assets.

9. Analytics in a Multi-Cloud World with BigQuery Omni

While data is a critical component of decision making across organizations, for many, this data is scattered across multiple public clouds. So how do you help analysts and data scientists handle data from all those tools, systems, and silos? Meet BigQuery Omni, a flexible, fully managed, multi-cloud analytics solution that lets you analyze data across public clouds without ever leaving the familiar BigQuery user interface. In this session, Google Product Manager Emily Rapp shows you how to break down data silos right in your environment and run analytics in a multi-cloud world.

10. Master Security and Compliance in the Public Cloud

Unlocking the promise of the public cloud often brings security and compliance challenges, especially if you’re a leading market-infrastructure provider under the highest level of supervisory scrutiny in Europe. In this session, Christian Tüffers, Senior Cloud Architect at Deutsche Boerse Group, and Grace Mollison, Head Cloud Solution Architect at Google, discuss ways customers can work with Google Cloud to create security blueprints that help them deploy workloads meeting regulatory and compliance requirements.

Do you want to watch more sessions? You can browse the full session roster from Google Cloud Next ‘20: OnAir here.
Source: Google Cloud Platform

How BigQuery helped Theta Labs and NASA bring science and hope to streaming

Editor’s note: We’re hearing today from Theta Labs, a leading decentralized video streaming platform that is powered by its users and built on a new blockchain. With its peer-to-peer bandwidth-sharing distributed ledger technology, Theta Labs has been able to revolutionize the livestream experience. By adopting Google Cloud, Theta Labs has been able to scale to meet a growing active user base on its blockchain platform, which in turn has helped it expand its strategic partnership with NASA, including hosting the latest SpaceX rocket launch.

When we established Theta Labs back in 2016, the goal was to set up a streaming video service with an emphasis on rendering popular PC video games like League of Legends, CS:GO and Dota2 into immersive 360° virtual reality experiences. And yet, thanks to our unique approaches to streaming, video rendering, and patented blockchain video technology, we’ve grown into something so much bigger that we’ve even caught the attention of NASA. All of this was possible thanks to Google Cloud and its database and analytics products, such as BigQuery, Dataflow, Pub/Sub, and Firestore.

Reaching the heights of video streaming

Back when we first launched Sliver.tv (now Theta.tv), we decided to differentiate ourselves from the competition by creating unique live streaming video experiences, especially for streamers and viewers in regions with little or no access to high-speed internet. Our blockchain-based peer-to-peer video delivery technology lets users share their bandwidth with others, letting our streamers reach audiences they never could before.

It was this ability to reach more unique and remote viewers, and to give larger audiences the opportunity to discover new things, that caught NASA’s attention. NASA saw the potential in our service to spread interest in science and technology to an audience of mostly younger viewers. They gave us the privilege of becoming one of only four or five video services with direct access to NASA’s source video feed, and we recently collaborated to premiere NASA’s August Women’s Equality Day broadcast.

The biggest highlight of this partnership so far was the opportunity to livestream the SpaceX launch. In a year where we all needed a bit more hope, being able to bring the live launch of a spacecraft to a wider audience was an amazing experience, inspiring so many to reach for the stars.

Video of the stars starts with the cloud

Facilitating an event as large as a space launch, with so many viewers, takes a powerful infrastructure. To do all of this with our unique peer-to-peer blockchain system that rewards viewers and streamers for sharing bandwidth, we needed Google Cloud’s reliable, scalable, and stable infrastructure. With the strength of Google Cloud and their help in creating auto-scaling DevOps solutions, we were able to reach more viewers than ever without hitting the VM caps that previously caused issues with latency and customer experience. Before that, we’d faced challenges like infrastructure scaling limitations, high costs, and too much of our time wasted on managing and maintaining solutions.

Google Cloud offers us better scalability, so we’re no longer capped by the number of active streamers we can have on our platform.
Google Cloud gave us:

- Performance and flexibility of implementation
- Breadth of capabilities and support
- The ability to ingest streaming data for real-time insights
- A strong relationship and communication with the Google account team
- Expansive feature options
- A good price point compared to the offered features and services

Our partnership with Google Cloud has also let us reach viewers in regions that normally would have trouble accessing streaming video. Edge computing allows most of the computation work to be done near the source, improving response times and bandwidth usage, a perfect synergy leveraging Google’s and Theta Network’s core strengths. And with Google Cloud’s more than 1,600 nodes, we are able to get closer to our users than ever before.

Running analytics on our skyrocketing data

Beyond video streaming, Google Cloud’s enterprise data warehouse BigQuery gave us the capacity to do the typically difficult, if not impossible, task of sorting real-time data from the blockchain system. We have built a real-time pipeline for viewership data using Dataflow, Pub/Sub, and BigQuery: a Dataflow job continuously pulls data from a Pub/Sub topic and ingests it into BigQuery. We’ve seen Pub/Sub quickly ingest roughly 12,000 to 14,000 blocks of data containing 60,000 to 200,000 transactions daily into BigQuery for real-time analysis. We also used Pub/Sub and Dataflow to create the listener/subscriber for the topic our ETL pipeline publishes to, then ingest that into BigQuery tables.

By running fast queries in BigQuery, we were able to uncover findings such as (see the query sketch at the end of this post):

- How many people watched and shared a certain video stream in the past hour
- How many donations were made to a streamer in total
- Which livestream has the highest donation-to-viewer ratio
- What the most impactful moment during a livestream was

Prior to BigQuery, finding this information required writing customized scripts to analyze the raw blockchain data, and the analysis used to take hours or even days of engineering time. Now we can gain such insights in as little as a few seconds, effectively in real time, and gather information to let streamers, advertisers, and partners know when more viewers are online and engaged. This means that NASA and other content creators can better find and reach their audiences.

Results that let us grow and scale to the moon and beyond

Google Cloud helps us better forecast how many concurrent users we need to support during livestream events, and predict multi-variable reputation scores for our network of thousands of edge and guardian nodes to identify and address bad actors and under-performing nodes. Today, our BigQuery environment has 45GB of data, which contains almost 7.5 million blocks and 57 million transactions, and counting. We migrated to Google Cloud in less than six months, and saw the return on investment almost immediately. We’re able to bring top-notch connectivity, scalability, and security capabilities to our branded content partners like NASA, our enterprise validator partners including Google, and the community members that run Theta edge and guardian nodes, and we’re reducing costs over time.

All of this is just the beginning of the ways we’re looking to spread more entertainment, science, and hope during these dark times. And thanks to Google Cloud’s strength and scalability, we’ll be able to keep growing, reaching even more audiences and partners.

Learn more about Theta Labs here.
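As an illustration of the first kind of finding above, here is a sketch using the BigQuery Python client. The dataset, table, and column names are hypothetical, since Theta’s actual schema isn’t public:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical table and columns: one row per viewing event.
query = """
    SELECT stream_id, COUNT(DISTINCT viewer_id) AS viewers
    FROM `my-project.theta.view_events`
    WHERE event_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
    GROUP BY stream_id
    ORDER BY viewers DESC
"""

# Run the query and print per-stream viewer counts for the past hour.
for row in client.query(query).result():
    print(row.stream_id, row.viewers)
```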
Source: Google Cloud Platform