Introducing Anthos: An entirely new platform for managing applications in today's multi-cloud world

Let’s face it, even in the best of cases, enterprise IT can be rigid, complex and expensive. When we talk to customers with extensive on-prem investments, they tell us they want to take advantage of the cloud’s scalability, innovative services and geographic scope, but they’re worried about getting locked into the wrong provider. Why is it, they ask, that they still can’t write once, run anywhere?

Today, we’re excited to introduce Anthos, Google Cloud’s new open platform that lets you run an app anywhere—simply, flexibly and securely. Embracing open standards, Anthos lets you run your applications, unmodified, on existing on-prem hardware investments or in the public cloud, and is based on the Cloud Services Platform that we announced last year.

Now, we’re making Anthos’ hybrid functionality generally available both on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE), and in your data center with GKE On-Prem. Anthos will also let you manage workloads running on third-party clouds like AWS and Azure, giving you the freedom to deploy, run and manage your applications on the cloud of your choice, without requiring administrators and developers to learn different environments and APIs.

Throughout it all, Anthos is a 100% software-based solution. You can quickly get up and running on your existing hardware—with no forced stack refresh. Anthos leverages open APIs, giving you the freedom to modernize any place, any time and at your own pace. Because Anthos is based on GKE, our managed Kubernetes service, you automatically get the latest feature updates and security patches.

Introducing Anthos Migrate: Cloud migration and modernization made easy

We’re also excited to announce Anthos Migrate in beta, which auto-migrates VMs from on-premises or other clouds directly into containers in GKE with minimal effort.
This unique migration technology lets you migrate and modernize your infrastructure in one streamlined motion, without upfront modifications to the original VMs or applications. Through this transformation, your IT team is freed from infrastructure tasks like VM maintenance and OS patching, so it can focus on managing and developing applications. Migrating also lets you take advantage of other integrations within Anthos.

Early Anthos customer success stories

Global enterprise customers in a number of industries are already using Anthos as a flexible, portable, software-based solution on which to build hybrid and multi-cloud environments.

For HSBC, one of the largest banking and financial services organizations in the world, a managed cloud environment that reduces the complexity and costs of big data analytics is essential to its hybrid cloud strategy.

“At HSBC, we needed a consistent platform to deploy both on-premises and in the cloud,” says Darryl West, Group CIO, HSBC. “Google Cloud’s software-based approach for managing hybrid environments provided us an innovative, differentiated solution that was able to be deployed quickly for our customers.”

Siemens, the largest industrial manufacturing company in Europe, is excited for the insight GKE On-Prem will bring to their complex, hybrid environment.

“Anthos is a great fit for us. It gives us a unified management view of our hybrid deployment and a consistent platform to run our workloads across environments,” says Martin Lehofer, Head of Research, Siemens.

Building a multi-cloud ecosystem

Many of our customers have existing software and infrastructure investments, yet still want the freedom to invest in their cloud future.
We’re working closely with our ecosystem of partners to support these customers, launching with more than 30 hardware, software and system integration partners ready to help customers leverage Anthos right out of the gate.

“We know that hybrid and multi-cloud approaches represent the future for many of our customers,” says Kip Compton, SVP, Cloud Platforms and Solutions at Cisco. “Our customers want to develop and deploy their applications anywhere—on-prem, in the public cloud, or in multiple public clouds—seamlessly and securely. We’re excited to make that possible by integrating Cisco’s industry-leading data center, networking, and security technologies with Anthos and growing our partnership with Google Cloud.”

Using Anthos, Cisco will deliver the freedom of hybrid to enterprise customers, helping them get up and running quickly in the cloud based on integrations between Anthos and Cisco’s data center, networking, and security technologies, including Cisco HyperFlex, Cisco ACI, Cisco Stealthwatch Cloud, and Cisco SD-WAN. This combination offers businesses all the benefits of a fully managed service like GKE combined with Cisco’s infrastructure capabilities.

In addition, partners such as VMware, Dell EMC, HPE, Intel, and Lenovo have committed to delivering Anthos on their own hyperconverged infrastructure for their customers. By validating Anthos on their solution stacks, our mutual customers can choose hardware based on their storage, memory, and performance needs.

System integrators are also on tap to help you modernize and extend your applications using Anthos. We’re excited that partners including Accenture, Arctiq, Atos, Cognizant, Deloitte, HCL Technologies, NTT Communications, Tata Consultancy Services, Wipro, and WWT are building services and solutions to help you incorporate Anthos into your environment.

Finally, we are working closely with enterprise software providers to integrate their offerings with Anthos’ unique capabilities.
To date, more than 20 ISVs have already committed to integrating their software with Anthos.

Streamlining multi-cloud management

Anthos also includes capabilities to help you automate policy and security at scale across your deployments. Anthos Config Management lets you create multi-cluster policies out of the box that set and enforce role-based access controls and resource quotas, and create namespaces, all from a single source of truth. It also works great with the open-source Istio service mesh, giving you a scalable foundation for policy enforcement, letting services establish trust, and encrypting traffic without code changes.

Your favorite Kubernetes apps, ready and waiting in GCP Marketplace

The GCP Marketplace offers Kubernetes applications for DevOps, security, databases and more. Active installations of Kubernetes apps grew by 65 percent this past quarter. These Kubernetes apps are generally available, with commercial and open-source options, and include prebuilt deployment templates, simple licensing, and one consolidated Google Cloud bill. You can deploy many of them to Anthos via GKE in the cloud, on-premises, and in multi-cloud scenarios.

Meanwhile, if you build applications on GCP, we now offer a new Private Catalog, a hosted software catalog service for your IT solutions, to help you maintain compliance and governance, simplify internal solution discovery, and ensure that your developers only use approved and compatible apps. Currently in beta, Private Catalog lets you manage applications you created on GCP.

Meeting developers where they are with updates to Cloud Build

Custom workers, a new feature in Cloud Build—our continuous integration and delivery (CI/CD) platform—help developers code, build, test, and deploy workloads on-prem and move them to the cloud when they’re ready. With custom workers, you can use on-prem source code, artifacts, and other build dependencies to create CI/CD pipelines in Anthos.
This includes support for on-prem tools like GitHub Enterprise, Bitbucket, GitLab, and Artifactory.

To complement Anthos’ ability to help you modernize at your own pace, we have also made advancements in hybrid API management.

Your cloud, anywhere

Anthos’ open-source approach makes it a safe choice for your cloud strategy. With partners like Cisco, Dell EMC, HPE, VMware, and many others, it’s broadly supported. And because Anthos is fully managed, even on-prem, you get the benefits of open source without the hassle of operating it yourself.

If you can’t reduce the complexity of your IT, you can’t move faster. From container management to security to traffic management to software development, Anthos makes things simpler: simpler to operate, simpler to secure, simpler to modernize, and simpler to write.

To learn more about the architecture behind Anthos, see Application Modernization and the Decoupling of Infrastructure, Services and Teams, a white paper by distinguished Googlers Eric Brewer and Jennifer Lin. Sign up here if you are interested in learning more about Anthos Migrate and Anthos’ multi-cloud capabilities.
Quelle: Google Cloud Platform

How to stay informed about Azure service issues

Azure Service Health helps you stay informed and take action when Azure service issues such as outages and planned maintenance affect you. It provides a personalized dashboard that helps you understand which issues may be impacting resources in your Azure subscriptions.
Quelle: Azure

Announcing Cloud Run, the newest member of our serverless compute stack

Whether it’s to increase developer velocity or to lower the operational overhead of managing infrastructure, using serverless compute can help developers focus on writing code that delivers business value. Today, we are excited to share some of the investments we are making at Google Cloud around serverless computing.

First, we are announcing a new serverless compute platform for containerized apps with portability built in:

- Cloud Run, a fully managed serverless execution environment.
- Cloud Run on Google Kubernetes Engine (GKE), bringing the serverless developer experience and workload portability to your GKE cluster.
- Knative, the open API and runtime environment bringing the serverless developer experience and workload portability to your existing Kubernetes cluster anywhere.

We’re also making new investments in our Cloud Functions and App Engine platforms:

- New second generation runtimes.
- A new open-sourced Functions Framework.
- Additional core capabilities, including connectivity to private GCP resources.

Announcing Cloud Run: Serverless agility for containerized apps

Traditional serverless offerings come with challenges such as constrained runtime support and vendor lock-in. Developers are often faced with a hard decision: choose between the ease and velocity that comes with serverless, or the flexibility and portability that comes with containers. At Google Cloud, we think you should have the best of both worlds.

Today, we are announcing the beta availability of a new serverless compute offering called Cloud Run that lets you run stateless HTTP-driven containers without worrying about the infrastructure. Cloud Run is a fully serverless offering: it takes care of all infrastructure management, including provisioning, configuring, scaling, and managing servers.
It automatically scales up or down within seconds, even down to zero depending on traffic, ensuring you pay only for the resources you actually use.

Veolia, a global leader in optimized water, waste, and energy management solutions, is already benefiting from Cloud Run:

“Cloud Run removes the barriers of managed platforms by giving us the freedom to run our custom workloads at lower cost on a fast, scalable, and fully managed infrastructure. Our development team benefits from a great developer experience without limits and without having to worry about anything.” —Hervé Dumas, Group Chief Technology Officer, Veolia

Cloud Run is also available on GKE, meaning you can run serverless workloads on your existing GKE clusters. You can deploy the same stateless HTTP services to your own GKE cluster and simultaneously abstract away complex Kubernetes concepts. Using Cloud Run on GKE also gives you access to custom machine types, Compute Engine networks, and the ability to run side-by-side with other workloads deployed in the same cluster. It provides both the simplicity of deployment of Cloud Run and the flexibility of GKE. Customers such as Airbus Aerial are already using Cloud Run on GKE to process and stream aerial images.

“With Cloud Run on GKE, we are able to run lots of compute operations for processing and streaming cloud-optimized aerial images into web maps without worrying about library dependencies, auto-scaling or latency issues.” —Madhav Desetty, Chief Software Architect, Airbus Aerial

We are continuing to strengthen our serverless portfolio through deep partnerships with industry leaders such as Datadog, NodeSource, GitLab, and StackBlitz.
These partnerships provide integration support for Cloud Run across application monitoring, coding, and deployment stages.

Enabling portability with Knative

We recognize that you may want to run some workloads on-premises or across multiple clouds. Cloud Run is based on Knative, an open API and runtime environment that lets you run your serverless workloads anywhere you choose—fully managed on Google Cloud Platform, on your GKE cluster, or on your own self-managed Kubernetes cluster. Thanks to Knative, it’s easy to start with Cloud Run and move to Cloud Run on GKE later on. Or you can use Knative in your own Kubernetes cluster and migrate to Cloud Run in the future. By using Knative as the underlying platform, you can move your workloads across platforms, substantially reducing switching costs.

Since it launched eight months ago, Knative has already reached version 0.5, with over 50 contributing companies, 400 contributors, and more than 3,000 pull requests. Click here to learn more about Knative and how you can get involved.

New enhancements to Cloud Functions

For developers looking to quickly and easily connect cloud services, we’ve got you covered. Google Cloud Functions is an event-driven serverless compute platform that lets you write code that responds to events, without worrying about the underlying infrastructure. Cloud Functions makes it simple to connect to cloud services such as BigQuery, Pub/Sub, Firebase, and many more.

Today, we are also announcing a number of new and frequently requested features to help you adopt functions easily and seamlessly within your current environment:

- Support for new language runtimes: Node.js 8, Python 3.7, and Go 1.11 in general availability; Node.js 10 in beta; and Java 8 and Go 1.12 in alpha.
- The new open-source Functions Framework, available for Node.js 10, helps you take the first step towards making your functions portable. You can now write a function, run it locally, and build a container image to run it in any container-based environment.
- Serverless VPC Access, available in beta starting today, creates a VPC connector that lets your function talk to your existing GCP resources that are protected by network boundaries, without exposing those resources to the internet. This feature allows your function to use Cloud Memorystore as well as hundreds of third-party services deployed from the GCP Marketplace.
- Per-function identity provides security access at the most granular, per-function level and is now generally available.
- Scaling controls, now available in beta, help prevent your auto-scaling functions from overwhelming backends that cannot scale up as quickly.

Functions provide agility and simplicity to make your developers more productive. But not all applications need to be broken down into granular functions. Sometimes you want to deploy large applications while still leveraging the benefits of serverless.

New second generation runtimes in App Engine

Google pioneered serverless computing more than 11 years ago with App Engine, a serverless application platform for deploying highly scalable web and mobile apps. Since its inception, App Engine has evolved to meet developers where they are, whether by adding capabilities or support for new runtimes.

Today, we are announcing support for new second generation runtimes: Node.js 10, Go 1.11, and PHP 7.2 in general availability, and Ruby 2.5 and Java 11 in alpha. These runtimes provide an idiomatic developer experience, offer faster deployments, remove previous API restrictions, and come with support for native modules.
The above-mentioned Serverless VPC Access also lets you connect to your existing GCP resources from your App Engine apps in a more secure manner, without exposing them to the internet.

Build full-stack serverless apps

Perhaps the biggest benefit of developing applications with Google’s approach to serverless is the ease with which you can tap into a full stack of additional services. You can build end-to-end applications by leveraging services across databases, storage, messaging, data analytics, machine learning, smart assistants, and more, without worrying about the underlying infrastructure.

Paired with Google Cloud’s flexible and open serverless compute offerings, these services make it easy to build comprehensive, full-stack solutions that don’t compromise on scale or performance. To learn more about Google Cloud’s serverless offerings, click here. We are excited to see all the great serverless applications you build on Google Cloud!
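As a rough illustration of the contract a platform like Cloud Run expects (a stateless container serving HTTP on the port given in the PORT environment variable), here is a minimal sketch in Python using only the standard library. The greeting logic and server wiring are our own illustration, not Google-provided code:

```python
import os
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """A stateless WSGI app: nothing is kept between requests,
    which is what lets a serverless platform scale it down to zero."""
    name = environ.get("PATH_INFO", "/").strip("/") or "world"
    body = f"Hello, {name}!".encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via PORT.
    port = int(os.environ.get("PORT", "8080"))
    make_server("", port, app).serve_forever()
```

Packaged into a container image with any base image of your choice, the same code would run unchanged on Cloud Run, on Cloud Run on GKE, or on any Knative installation, which is the portability point made above.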
Quelle: Google Cloud Platform

Bringing the best of open source to Google Cloud customers

Google’s belief in an open cloud stems from our deep commitment to open source. We believe that open source is the future of public cloud: it’s the foundation of IT infrastructure worldwide and has been a part of Google’s foundation since day one. This is reflected in our contributions to projects like Kubernetes, TensorFlow, Go, and many more.

Today, we’re taking our commitment to open source to the next level by announcing strategic partnerships with leading open source-centric companies in the areas of data management and analytics, including:

- Confluent
- DataStax
- Elastic
- InfluxData
- MongoDB
- Neo4j
- Redis Labs

We’ve always seen our friends in the open-source community as equal collaborators, and not simply a resource to be mined. With that in mind, we’ll be offering managed services operated by these partners that are tightly integrated into Google Cloud Platform (GCP), providing a seamless user experience across management, billing and support. This makes it easier for our enterprise customers to build on open-source technologies, and it delivers on our commitment to continually support and grow these open-source communities.

Making open source even more accessible with a cloud-native experience

The open-source database market is big, and growing fast. According to SearchDataManagement.com, “more than 70% of new applications developed by corporate users will run on an open source database management system, and half of the existing relational database installations built on commercial DBMS technologies will be converted to open source platforms or [are] in the process of being converted.”

This mirrors what we hear from our customers—that you want to be able to use open-source technology easily and in a cloud-native way. The partnerships we are announcing today make this possible by offering an elevated experience similar to Google’s native services.
It also means that you aren’t locked in or out when you are using these technologies—we think that’s important for our customers and our partners. Here are some of the benefits these partnerships will offer:

- Fully managed services running in the cloud, with best efforts made to optimize performance and latency between the service and application.
- A single user interface to manage apps, including the ability to provision and manage the service from the Google Cloud Console.
- Unified billing, so you get one invoice from Google Cloud that includes the partner’s service.
- Google Cloud support for the majority of these partners, so you can manage and log support tickets in a single window rather than dealing with different providers.

To further our mission of making GCP the best destination for open source-based services, we will work with our partners to build integrations with native GCP services like Stackdriver for monitoring and IAM, validate these services for security, and optimize performance for users.

Partnering with leaders in open source

The partners we are announcing today include several of the top-ranked databases in their respective categories. We’re working alongside these creators and supporting the growth of these companies’ technologies to inspire strong customer experiences and adoption. These new partners include:

- Confluent: Founded by the team that built Apache Kafka, Confluent builds an event streaming platform that lets companies easily access data as real-time streams. Learn more.
- DataStax: DataStax powers enterprises with its always-on, distributed cloud database built on Apache Cassandra and designed for hybrid cloud. Learn more.
- Elastic: As the creators of the Elastic Stack, Elastic builds self-managed and SaaS offerings that make data usable in real time and at scale for search use cases like logging, security, and analytics. Learn more.
- InfluxData: InfluxData’s time series platform can instrument, observe, learn, and automate any system, application, and business process across a variety of use cases. InfluxDB (developed by InfluxData) is an open-source time series database optimized for fast, high-availability storage and retrieval of time series data in fields such as operations monitoring, application metrics, IoT sensor data, and real-time analytics. Learn more.
- MongoDB: MongoDB is a modern, general-purpose database platform that brings software and data to developers and the applications they build, with a flexible model and control over data location. Learn more.
- Neo4j: Neo4j is a native graph database platform specifically optimized to map, store, and traverse networks of highly connected data to reveal invisible contexts and hidden relationships. By analyzing data points and the connections between them, Neo4j powers real-time applications. Learn more.
- Redis Labs: Redis Labs is the home of Redis, the world’s most popular in-memory database, and the commercial provider of Redis Enterprise. It offers performance, reliability, and flexibility for personalization, machine learning, IoT, search, e-commerce, social, and metering solutions worldwide. Learn more.

As we look to an open source-powered cloud future, we’re pleased to bring these partner technologies to you. Partnering with the companies that invest in developing open-source technologies means you get benefits like expertise in operating these services at scale, additional enterprise features, and shorter cycles in bringing the latest innovation to the cloud.

We’re looking forward to seeing what you build with these open source technologies. Learn more here about open source on GCP.
Quelle: Google Cloud Platform

Google Cloud announces new regions in Seoul and Salt Lake City

In three years, Google Cloud has opened 15 new regions and 45 zones across 13 countries, and we continue to expand our global footprint to support our growing customers around the world. Today we’re announcing two new additions to our global infrastructure: new Google Cloud regions in Seoul, South Korea and Salt Lake City, Utah, USA—bringing the total number of global regions to 23 in 2020.

Customers can expect to use the Seoul region in early 2020, followed by the Salt Lake City region shortly thereafter. Each new cloud region is designed for high availability, with three zones from the start, and will include our portfolio of key Google Cloud Platform (GCP) products.

Google Cloud region coming to South Korea

South Korea is a leader in telecommunication and information technology, and world-famous in the gaming industry. In South Korea, we’ve seen tremendous customer adoption from global companies like Samsung, Netmarble, TMON, and LG CNS. For example, Netmarble, South Korea’s largest gaming company, uses Google Cloud to support new game development, manage infrastructure, and infuse business intelligence throughout operations using GKE, BigQuery, and Cloud ML Engine. LG CNS uses Google Cloud to save millions of dollars each year by using AI to visually inspect its manufacturing lines and drive product quality.

Seoul will be Google Cloud’s eighth region in Asia Pacific, and will help better serve both local customers looking to expand globally and multinational customers doing business in South Korea.

Expanding our U.S. footprint with the Salt Lake City cloud region

The addition of Salt Lake City will bring the total number of Google Cloud regions within the continental United States to six, and underscores our tremendous growth in the U.S. Known for its healthcare, financial services, and IT industries, Salt Lake City is a hub for data center infrastructure.
This new region will enable customers in the Silicon Slopes area to easily run low-latency, hybrid cloud workloads.

“Google Cloud’s expanding infrastructure in Salt Lake City is a welcome development as our growing business continues to scale to meet the needs of over 250 million customers,” said Dan Tournian, PayPal’s Vice President of Employee Technology & Experiences and Data Centers. “This new region will enable enhanced availability and performance for our customers, when every millisecond counts.”

“Team Utah is delighted to welcome a Google Cloud region to Salt Lake,” said Theresa Foxley, President & CEO of EDCUtah, a private, non-profit organization that works with state and local governments and private industry to attract and grow competitive, high-value companies and spur the expansion of local Utah businesses. “This new region will improve cloud computing infrastructure for businesses operating in Utah, giving them faster access to Google Cloud products and services and bringing technical innovations even closer to where they do business. We look forward to welcoming the new region to Salt Lake City in 2020.”

Organizations operating in the western US will soon be able to distribute their workloads across three western regions—Los Angeles, Oregon, and Salt Lake City—providing even better availability and connectivity in the west.

What’s next

Google Cloud regions bring the cloud to organizations around the world, helping drive growth, differentiation, and innovation. In the coming weeks, the Osaka, Japan region will open to customers, and the Jakarta, Indonesia region is expected to launch in the first half of 2020.

We look forward to welcoming you to these upcoming GCP regions, and we’re excited to see what you build with our platform. Stay tuned for more region announcements and launches this year. Visit our locations page for more information on cloud region availability, or contact sales to get started on GCP today.
Quelle: Google Cloud Platform

Choose your own environment with Apigee hybrid API management

Whether they connect existing on-premises applications to new cloud workloads, provide new customer experiences, or power an entire developer ecosystem, APIs are everywhere in today’s enterprise. And with more than two-thirds of enterprises adopting a multi-cloud strategy, APIs are increasingly distributed across private data centers and public clouds—sometimes even multiple public clouds.

Today, we’re pleased to announce that we’re extending our hybrid offerings with Apigee hybrid (beta), a new deployment option for the Apigee API management platform that lets you host your runtime anywhere—in your data center or the public cloud of your choice. With Apigee hybrid, you get a single, full-featured API management solution across all your environments, along with control over your APIs and the data they expose, ensuring a unified strategy across all the APIs in your enterprise.

Apigee hybrid provides the following capabilities:

- A customer-managed runtime plane for all API traffic, and an Apigee-hosted management plane for API lifecycle management.
- API design, security, publishing, analytics, and developer portals.
- Containerized deployment of the runtime in the environment of your choice.
- Asynchronous communication between the runtime and management plane.

Helping customers scale their API programs

HP Inc., a global leader in innovative personal computing devices, printers, 3D printing, and related services and solutions, has been an Apigee customer since 2011. HP’s ever-growing portfolio of APIs is hosted across distributed environments, and Apigee hybrid gives the company flexibility, control, and robust API lifecycle management capabilities tailor-fit to the unique needs of a large enterprise.

“Apigee hybrid is an exciting new dimension complementary to Apigee’s rich API-management offering in the cloud.
HP’s digital evolution is accelerating and our goals necessitate a comprehensive API-management platform which works across disparate enterprise requirements and locales,” says Evan Scheessele, software platforms and API management lead at HP. “Apigee hybrid elegantly completes the story bridging between cloud and classical operations domains, enabling end-to-end management of all our businesses’ APIs through a single platform. Transparency and consistency is fundamental to the work of realizing a corporate API portfolio. Apigee’s single control plane for API management now offers reach to all HP’s APIs, regardless of whether they are hosted on our premises, in a private cloud or a public cloud. As a result, we are able to pursue consolidated management, shared standardized policies and verifiable security across our diverse API product teams.”

To learn more about when and how to leverage hybrid API management, join our upcoming webcast with Product Manager Nandan Sridhar on April 25 at 10am PDT. You can read more about Apigee hybrid on our products page.
Quelle: Google Cloud Platform

How Unilever uses Google Cloud to optimize marketing campaigns

Home to more than 400 brands including Dove, Lipton, and Ben & Jerry’s, Unilever is one of the biggest consumer goods companies in the world, operating across more than 190 markets globally. At the company’s core is a goal to build a more sustainable business, both environmentally and socially. Unilever has the grand ambition of creating 1 billion one-to-one relationships with its consumers through meaningful and relevant dialogue, and Google Cloud capabilities have been critical to supporting this goal.

One important way Unilever is achieving this is through its People Data Centres (PDCs). Unilever PDCs exist in 27 markets and focus on three areas: social and business analytics, consumer engagement centers, and consumer relationship management. Using a broad range of consumer data alongside Google Cloud AI tools such as translation, visual analytics, and natural language processing (NLP), Unilever has been able to generate insights faster than ever before and gain a deeper understanding of customer needs. It has also been able to use those tools and insights to build better, more effective advertising campaigns.

Better marketing campaigns with Google Cloud AI

Using Google Cloud tools, Unilever has been able to add scale, reach, and personalization—a strong example is its recent ad campaign for Close-up toothpaste in Asia. Using search analytics, Unilever discovered that the second most searched-for term among its website users was “learning how to kiss.” Using this insight, the team created a three-day campaign for the brand around Valentine’s Day, deployed in six key Asian markets.

To make the campaign a success, they needed to resonate with a target audience that spanned culturally diverse areas across Asia—from Thailand and Indonesia to India.
To do it, they analyzed trending social media data to optimize their marketing assets in real time, and created culturally relevant content that spoke respectfully to each region and to each consumer individually.

Using the Cloud Vision API, Unilever analyzed user-generated content around its campaign hashtags and Valentine’s Day. These insights were then used to create and deploy six-second bumper ads daily during the campaign on Instagram, Facebook, and YouTube. The team used the Natural Language API to monitor online comments about the campaign and ads, which allowed Unilever to fine-tune the delivery of the ads that resonated most with its audience across those social channels.

The numbers spoke for themselves: the campaign reached nearly 500 million people and generated positive uplift in brand engagement and consideration.

“Part of what makes Google Cloud a great partner is its brainpower and resources. The data-centric culture of test, learn and optimize has been a hallmark of our relationship, and it makes us confident that we’ll be seeing fantastic new products and applications from them in the future,” says Alex Owens, VP of Global People Data Centres at Unilever. “At Unilever, we believe in purpose-driven brands. Google Cloud’s products are rich in quality and purpose, and its people are committed to customers like us.”

The Close-up toothpaste campaign was just one of many, and Unilever plans to explore how Google Cloud AI solutions can support additional campaigns in the future. Learn more about those tools here.
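To sketch the kind of feedback loop described above: a sentiment-analysis service such as the Natural Language API returns a score in [-1.0, 1.0] per comment, and a team could aggregate those scores per ad variant to decide which creative to promote. The function below is illustrative only — the variant names are invented, and the scores are supplied directly in place of real API calls:

```python
def rank_ad_variants(comment_scores):
    """Given {variant: [sentiment scores in -1.0..1.0]}, return variant
    names ordered from most to least positive average sentiment.
    Scores of this shape are what a sentiment-analysis service such as
    the Natural Language API would return; here they are hard-coded."""
    averages = {
        variant: sum(scores) / len(scores)
        for variant, scores in comment_scores.items()
        if scores  # skip variants with no comments yet
    }
    return sorted(averages, key=averages.get, reverse=True)

# Hypothetical example: bumper_a's comments average 0.7, bumper_b's -0.15,
# so bumper_a would be the ad to keep promoting.
ranked = rank_ad_variants({"bumper_a": [0.8, 0.6], "bumper_b": [0.1, -0.4]})
```

In a real pipeline the score lists would come from the API's per-comment sentiment results, refreshed daily during the campaign.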
Quelle: Google Cloud Platform

Bitnami Apache Airflow Multi-Tier now available in Azure Marketplace

A few months ago, we released a blog post that provided guidance on deploying Apache Airflow on Azure. The template in that post was a good quick-start solution for anyone looking to run Apache Airflow on Azure in sequential-executor mode for testing and proof-of-concept work. However, it was not designed for enterprise production deployments, and running it in Celery-executor mode required expert knowledge of Azure App Service and container deployments. That is why we partnered with Bitnami: to simplify production-grade deployments of Airflow on Azure for customers.

We are excited to announce that the Bitnami Apache Airflow Multi-Tier solution and the Apache Airflow Container are now available to customers in the Azure Marketplace. The Bitnami Apache Airflow Multi-Tier template provides a one-click solution for customers looking to deploy Apache Airflow for production use cases. To see how easy it is to launch and start using them, check out the short video tutorial.

We are proud to say that the main committers to the Apache Airflow project have also tested this application to ensure that it performs to the standards they expect.

Apache Airflow PMC Member and Core Committer Kaxil Naik said, “I am excited to see that Bitnami provided an Airflow Multi-Tier in the Azure Marketplace. Bitnami has removed the complexity of deploying the application for data scientists and data engineers, so they can focus on building the actual workflows or DAGs instead. Now, data scientists can create a cluster for themselves within about 20 minutes. They no longer need to wait for DevOps or a data engineer to provision one for them.”

What is Apache Airflow?

Apache Airflow is a popular open source workflow management tool used in orchestrating ETL pipelines, machine learning workflows, and many other creative use cases. It provides a scalable, distributed architecture that makes it simple to author, track and monitor workflows.

Airflow users create Directed Acyclic Graph (DAG) files to define the tasks to be executed, the order in which they run, and their relationships and dependencies. DAG files are synchronized across nodes, and users then use the UI or automation to schedule, execute, and monitor their workflows.
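Conceptually, a DAG is just a set of tasks plus dependency edges, from which the scheduler derives a valid execution order. The sketch below illustrates that idea with Python's standard library only; the task names are made up, and a real Airflow DAG file instead uses Airflow's own `DAG` and operator classes to declare tasks and dependencies.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# A toy ETL workflow: each task maps to the set of tasks it depends on.
# (Hypothetical task names; a real Airflow DAG file declares tasks with
# operator classes and wires dependencies with the >> operator.)
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "notify": {"load"},
}

# The scheduler's job, conceptually: run tasks in an order that
# respects every dependency edge.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'notify']
```

Because the graph is acyclic, such an ordering always exists; a cycle (e.g. two tasks depending on each other) would make the workflow unschedulable, which is why Airflow requires DAGs rather than arbitrary graphs.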

Introduction to Bitnami’s Apache Airflow Multi-tier architecture

Bitnami Apache Airflow has a multi-tier distributed architecture that uses Celery Executor, which is recommended by Apache Airflow for production environments.

It comprises several synchronized nodes:

Web server (UI)
Scheduler
Workers

It includes two managed Azure services:

Azure Database for PostgreSQL
Azure Cache for Redis

All nodes have a shared volume to synchronize DAG files.

DAG files are stored in a directory on each node. This directory is an external volume mounted in the same location on all nodes (workers, scheduler, and web server). Since it is a shared volume, the files are automatically synchronized between servers. Add, modify, or delete DAG files in this shared volume and the entire Airflow system will be updated.
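Because every node mounts the same directory, deploying a new workflow amounts to copying a file into that path. A minimal sketch, assuming a hypothetical mount point (check your deployment's documentation for the actual DAG folder):

```python
import shutil
from pathlib import Path


def deploy_dag(dag_file: str, dag_folder: str) -> Path:
    """Copy a DAG file into the shared volume; all Airflow nodes
    (web server, scheduler, workers) see the same mounted directory,
    so they pick the file up automatically."""
    target = Path(dag_folder) / Path(dag_file).name
    shutil.copy2(dag_file, target)
    return target


# Example (hypothetical mount point for the shared volume):
# deploy_dag("my_pipeline.py", "/opt/bitnami/airflow/dags")
```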

You can also load DAGs from a GitHub repository. With Git, you don’t need to access any of the Airflow nodes; you simply push changes to the repository instead.

To automatically synchronize DAG files with Airflow, please refer to Bitnami’s documentation.

Bitnami’s secret sauce – Packaging for production use

Bitnami specializes in packaging multi-tier applications to work right out of the box, leveraging managed Azure services such as Azure Database for PostgreSQL.

When packaging the Apache Airflow Multi-Tier solution, Bitnami added a few optimizations to ensure that it would work for production needs.

Pre-packaged to leverage the most popular deployment strategies: for example, PostgreSQL as the relational metadata store and Celery as the executor.
Role-based access control is enabled by default to secure access to the UI.
The cache and the metadata store are Azure-native PaaS services, bringing benefits such as data redundancy and retention/recovery options, and allowing Airflow to scale out to large jobs.
All communication between Airflow nodes and the PostgreSQL database service is secured using SSL.
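For illustration, SSL enforcement shows up in Airflow's metadata connection URI, configured in airflow.cfg via the `sql_alchemy_conn` setting. The host and credentials below are placeholders, not values from the Bitnami template:

```
[core]
# Hypothetical server name and user; sslmode=require forces TLS on the
# connection to the Azure Database for PostgreSQL service.
sql_alchemy_conn = postgresql+psycopg2://airflow_user:your-password@example-server.postgres.database.azure.com:5432/airflow?sslmode=require
```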

To learn more, join Azure, Apache Airflow, and Bitnami for a webinar on Wednesday, May 1st at 11:00 am PST. Register now.

Get Started with Apache Airflow Multi-Tier Certified by Bitnami today!
Quelle: Azure