All treats, no tricks with product recommendation reference patterns

In all things technology, change is the only constant. This year alone has brought more uncertainty than ever before, and the IT shadows have felt full of perils. With the onset of the pandemic, the way consumers shop has shifted faster than anyone could have predicted. The move from brick-and-mortar stores to online shopping was already underway, but it has accelerated significantly this year. Shoppers have quickly transitioned to online purchasing, resulting in increased traffic and varying fulfillment needs. Shopper expectations have evolved as well: 66% of online purchasers choose a retailer based on convenience, while only 47% choose a retailer based on price/value, according to Catalyst and Kantar research.

So the pressure is on for retailers to become digital and make sure shoppers are happy. But there's no reason to be spooked. Done right, predictive analytics lets you serve customers better with an understanding of their purchasing behavior and patterns. Deep, data-driven insights are important to ensuring customer demand and preferences are accurately met. To make it easier to treat (not trick) your customers to better recommendations, we recently introduced Smart Analytics reference patterns, which are technical reference guides with sample code for common analytics use cases with Google Cloud, including predicting customer lifetime value, propensity to purchase, product recommendation systems, and more. We heard from many customers that you needed an easy way to put your analytics tools into practice, and that these are some common use cases.

Understanding product recommendation systems

Product recommendation systems are an important tool for understanding customer behavior. They're designed to generate and provide suggestions for items or content a specific user would like to purchase or engage with. A recommendation system creates an advanced set of complex connections between products and users, and compares and ranks these connections in order to recommend products or services as customers browse your website, for example. A well-developed recommendation system will help you improve your shoppers' experience on a website and result in better customer acquisition and retention. These systems can significantly boost sales, revenue, click-through rates, conversions, and other important metrics, because personalizing a user's experience creates a positive effect that in turn translates into customer satisfaction, loyalty, and even brand affinity. Instead of building from scratch and reinventing the wheel every time, you can take advantage of these reference patterns to quickly start serving customers. It's important to emphasize that recommender systems are not new, and you can build your own in-house or with any cloud provider. Google Cloud's unique ability to handle massive amounts of structured and unstructured data, combined with our advanced capabilities in machine learning and artificial intelligence, provides a powerful set of products and solutions for retailers to leverage across their business.

Using reference patterns for real-world cases

In this reference pattern, you will learn step by step how to build a recommendation system by using BigQuery ML (a.k.a. BigQu-eerie ML).
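To give a feel for what the pattern involves, one common approach to recommendations in BigQuery ML is a matrix factorization model trained on past purchase events. The sketch below is illustrative only: the dataset, table, and column names are hypothetical placeholders, and the reference pattern itself walks through the real, end-to-end steps.

```sql
-- Train a matrix factorization model on implicit feedback (purchase counts).
-- Dataset, table, and column names are placeholders, not the pattern's actual code.
CREATE OR REPLACE MODEL `mydataset.purchase_recommender`
OPTIONS (
  model_type = 'matrix_factorization',
  feedback_type = 'implicit',
  user_col = 'user_id',
  item_col = 'product_id',
  rating_col = 'purchase_count'
) AS
SELECT user_id, product_id, purchase_count
FROM `mydataset.purchase_history`;

-- Produce ranked product recommendations for every user.
SELECT * FROM ML.RECOMMEND(MODEL `mydataset.purchase_recommender`);
```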

Modernize your Java apps with Spring Boot and Spring Cloud GCP

It's an exciting time to be a Java developer: there are new Java language features being released every six months, new JVM languages like Kotlin, and the shift from traditional monolithic applications to microservices architectures with modern frameworks like Spring Boot. And with Spring Cloud GCP, we're making it easy for enterprises to modernize existing applications and build cloud-native applications on Google Cloud.

First released two years ago, Spring Cloud GCP allows Spring Boot applications to easily use over a dozen Google Cloud services with idiomatic Spring Boot APIs. This means you don't need to learn a Google Cloud-specific client library, but can still realize the benefits of the managed services:

If you have an existing Spring Boot application, you can easily migrate to Google Cloud services with little to no code changes.

If you're writing a new Spring Boot application, you can leverage Google Cloud services with the framework APIs you already know.

Major League Baseball recently started their journey to the cloud with Google Cloud. In addition to modernizing their infrastructure with GKE and Anthos, they are also modernizing with a microservices architecture. Spring Boot is already the standard Java framework within the organization, and Spring Cloud GCP allowed MLB to adopt Google Cloud quickly with existing Spring Boot knowledge.

"We use the Spring Cloud GCP to help manage our service account credentials and access to Google Cloud services." – Joseph Davey, Principal Software Engineer at MLB

Similarly, bol.com, an online retailer, was able to develop their Spring Boot applications on GCP more easily with Spring Cloud GCP.

"[bol.com] heavily builds on top of Spring Boot, but we only have a limited capacity to build our own modules on top of Spring Boot to integrate our Spring Boot applications with GCP. Spring Cloud GCP has taken that burden from us and makes it a lot easier to provide the integration to Google Cloud Platform." – Maurice Zeijen, Software Engineer at bol.com

Developer productivity, with little to no custom code

With Spring Cloud GCP, you can develop a new app, or migrate an existing app, to adopt a fully managed database, create event-driven applications, add distributed tracing and centralized logging, and retrieve secrets—all with little to no custom code or custom infrastructure to maintain. Let's look at some of the integrations that Spring Cloud GCP brings to the table.

Data

For a regular RDBMS, like PostgreSQL, MySQL, or MS SQL, you can use Cloud SQL and continue to use Hibernate with Spring Data, connecting to Cloud SQL simply by updating the JDBC configuration. But what about Google Cloud databases like Firestore, Datastore, and the globally distributed RDBMS Cloud Spanner? Spring Cloud GCP implements all the data abstractions needed so you can continue to use Spring Data, and its data repositories, without having to rewrite your business logic. For example, you can start using Datastore, a fully managed NoSQL database, just as you would any other database that Spring Data supports. You annotate a POJO class with Spring Cloud GCP annotations, similar to how you would annotate Hibernate/JPA classes. Then, rather than implementing your own data access objects, you extend a Spring Data Repository interface to get full CRUD operations, as well as custom query methods. Spring Data and Spring Cloud GCP automatically implement the CRUD operations and generate the query for you.
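As a rough illustration, here is a minimal sketch of a Datastore-backed entity and repository. The class, kind, and field names are made up for this example, and the package names are from the 1.x line of Spring Cloud GCP, so they may differ in later releases; the GitHub samples remain the authoritative code.

```java
import java.util.List;
import org.springframework.cloud.gcp.data.datastore.core.mapping.Entity;
import org.springframework.cloud.gcp.data.datastore.repository.DatastoreRepository;
import org.springframework.data.annotation.Id;

// A plain POJO mapped to a Datastore kind, much like a Hibernate/JPA entity.
@Entity(name = "books")
public class Book {
  @Id
  private Long id;
  private String title;
  private String author;
  private int year;
}

// Spring Data and Spring Cloud GCP generate the CRUD operations and this
// derived query at runtime; no DAO code is needed.
interface BookRepository extends DatastoreRepository<Book, Long> {
  List<Book> findByAuthor(String author);
}
```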
Best of all, you can use built-in Spring Data features like auditing and capturing data change events. You can find full samples for Spring Data for Datastore, Firestore, and Spanner on GitHub.

Messaging

For asynchronous message processing and event-driven architectures, rather than manually provisioning and maintaining complicated distributed messaging systems, you can simply use Pub/Sub. By using higher-level abstractions like Spring Integration or Spring Cloud Stream, you can switch from an on-prem messaging system to Pub/Sub with just a few configuration changes. For example, with Spring Integration you can define a generic business interface that publishes a message, and then configure it to send that message to Pub/Sub (a minimal sketch appears at the end of this post). You can consume messages in the same way, using Spring Cloud Stream and standard Java 8 interfaces to receive messages from Pub/Sub simply by configuring the application. You can find full samples with Spring Integration and Spring Cloud Stream on GitHub.

Observability

If a user request is processed by multiple microservices and you would like to visualize the whole call stack across those microservices, you can add distributed tracing to your services. On Google Cloud, you can store all the traces in Cloud Trace, so you don't need to manage your own tracing servers and storage. Simply add the Spring Cloud GCP Trace starter to your dependencies, and all the necessary distributed tracing context (trace ID, span ID, etc.) is captured, propagated, and reported to Cloud Trace. This is it—no custom code required. All the instrumentation and trace capabilities use Spring Cloud Sleuth. Spring Cloud GCP supports all of Spring Cloud Sleuth's features, so distributed tracing is automatically integrated with Spring MVC, WebFlux, RestTemplate, Spring Integration, and more. Cloud Trace generates a distributed trace graph, and its "Show Logs" checkbox enables trace/log correlation, associating log messages with each trace so you can see the logs tied to a request and isolate issues. You can use the Spring Cloud GCP Logging starter and its predefined logging configuration to automatically produce log entries with the trace correlation data. You can find full samples with Logging and Trace on GitHub.

Secrets

Your microservice may also need access to secrets, such as database passwords or other credentials. Traditionally, credentials might be stored in a secret store like HashiCorp Vault. While you can continue to use Vault on Google Cloud, Google Cloud also provides the Secret Manager service for this purpose. Simply add the Spring Cloud GCP Secret Manager starter, and you can refer to secret values like standard Spring properties: in the application.properties file, a special property syntax resolves them from Secret Manager. You can find a full sample with Secret Manager on GitHub.

More in the works, in open source

Spring Cloud GCP closely follows the Spring Boot and Spring Cloud release trains. Currently, Spring Cloud GCP 1.2.5 works with Spring Boot 2.3 and the Spring Cloud Hoxton release train.
Spring Cloud GCP 2.0 is on its way, and it will support Spring Boot 2.4 and the Spring Cloud Ilford release train. In addition to the core Spring Boot and Spring Cloud integrations, the team has been busy developing new components to meet developers' needs:

Cloud Monitoring support with Micrometer

Spring Cloud Function's GCP Adapter for Cloud Functions Java 11

A Cloud Spanner R2DBC driver and Cloud SQL R2DBC connectors to enable scalable and fully reactive services

Experimental GraalVM support for our client libraries, so you can compile your Java code into native binaries to significantly reduce your startup times and memory footprint

Developer success is important to us. We'd love to hear your feedback, feature requests, and issues on GitHub, so we can understand your needs and prioritize our development work.

Try it out!

Want to see everything in action? Check out the Developer Hands-on Keynote from Google Cloud Next '20: On Air, where Daniel Zou shows how to leverage Spring Boot and Spring Cloud GCP when modernizing your application with Anthos, Service Mesh, and more. You can also easily try Spring Cloud GCP with many samples, or take the guided Spring Boot on GCP course on Qwiklabs or Coursera. Last but not least, you can find detailed features and configurations in the reference documentation.

Related Article: Announcing Spring Cloud GCP 1.1: deepening ties to Pivotal's Spring Framework. Here at Google we have been working hard with Pivotal's Spring team to integrate the Spring Framework and Google Cloud Platform (GCP).
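To make the Messaging section above a little more concrete, here is a minimal sketch of a Spring Integration gateway that publishes to a Pub/Sub topic. The channel, topic, and interface names are illustrative, the packages are from the 1.x line of Spring Cloud GCP, and the official GitHub samples remain the authoritative version.

```java
import org.springframework.cloud.gcp.pubsub.core.PubSubTemplate;
import org.springframework.cloud.gcp.pubsub.integration.outbound.PubSubMessageHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.MessagingGateway;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.MessageHandler;

// A generic business interface: callers just invoke a plain Java method.
@MessagingGateway(defaultRequestChannel = "ordersOutputChannel")
interface OrderGateway {
  void publishOrder(String orderJson);
}

// Wiring: anything sent to the channel is handed off to the "orders" Pub/Sub topic.
@Configuration
class PubSubConfig {
  @Bean
  @ServiceActivator(inputChannel = "ordersOutputChannel")
  public MessageHandler orderMessageSender(PubSubTemplate pubSubTemplate) {
    return new PubSubMessageHandler(pubSubTemplate, "orders");
  }
}
```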
Source: Google Cloud Platform

Cloud Storage object lifecycle management gets new controls

Managing your cloud storage costs and reducing the risk of overspending is critical in today's changing business environments. Today, we're excited to announce the immediate availability of two new Object Lifecycle Management (OLM) rules designed to help protect your data and lower the total cost of ownership (TCO) within Google Cloud Storage. You can now transition objects between storage classes, or delete them entirely, based on when versioned objects became noncurrent (out of date), or based on a custom timestamp you set on your objects. The end result: more fine-grained controls to reduce TCO and improve storage efficiency.

Delete objects based on archive time

Many customers who leverage OLM protect their data against accidental deletion with Object Versioning. However, without the ability to automatically delete versioned objects based on their age, the storage capacity and monthly charges associated with old versions of objects can grow quickly. With the noncurrent time condition, you can filter based on archive time and use it to apply any of the lifecycle actions that are already supported, including delete and change storage class. In other words, you can now set a lifecycle condition to delete an object that is no longer useful to you, reducing your overall TCO. For example, one rule can delete all noncurrent object versions that became noncurrent more than 30 days ago, while another can downgrade all noncurrent object versions in Coldline that became noncurrent before January 31, 1980 to Archive.

Set custom timestamps

The second new Cloud Storage feature is the ability to set a custom timestamp in the metadata field and assign a lifecycle management condition to it in OLM. Before this launch, the only timestamp that could be used for OLM was the one given to an object when it was written to the Cloud Storage bucket. However, this object creation timestamp may not actually be the date that you care the most about. For example, you may have migrated data to Cloud Storage from another environment and want to preserve the original create dates from before the transfer. In order to set lifecycle rules based on dates that make more sense to you and your business case, you can now set a specific date and time and apply lifecycle rules to objects based on it. All existing actions, including delete and change storage class, are supported. If you're running applications such as backup and disaster recovery, content serving, or a data lake, you can benefit from this feature by preserving the original creation date of an object when ingesting data into Cloud Storage. This delivers fine-grained OLM controls, resulting in cost savings and efficiency improvements, because you can set your own timestamps directly on the assets themselves. For example, one rule can delete all objects in a bucket that are more than two years past their custom timestamp, while another can downgrade all objects in Coldline with a custom timestamp older than May 27, 2019 to Archive. (A sketch of such rules appears at the end of this post.)

The ability to use age or custom dates with Cloud Storage object lifecycle management is now generally available. To get started or for more information, visit the Cloud Storage Lifecycle documentation page or navigate to the Google Cloud Console.

Related Article: Put your archive data on ice with new storage offering. The new storage class called Archive, our coldest Cloud Storage offering yet, is now available for data backup and storage.
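For reference, OLM rules like those described above are expressed as JSON conditions in a bucket's lifecycle configuration. The sketch below shows one rule of each new kind; the day count, date, and storage classes simply mirror the examples in this post, so adjust them to your own policy.

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "Delete"},
        "condition": {"daysSinceNoncurrentTime": 30}
      },
      {
        "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
        "condition": {
          "customTimeBefore": "2019-05-27",
          "matchesStorageClass": ["COLDLINE"]
        }
      }
    ]
  }
}
```

A configuration like this can be applied to a bucket with gsutil lifecycle set, or through the Cloud Console.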
Source: Google Cloud Platform

Cloud Acceleration Program: More reasons for SAP customers to migrate to Google Cloud

The arrival of COVID-19 caused massive disruption for companies around the globe and made digital transformation a more urgent priority. That's why it's so important that enterprises running their businesses on SAP have the agility, uptime, and advanced analytics that Google Cloud can offer. But given the drain on financial and human resources that the pandemic has caused, many organizations are worried about the risks of migrating to the cloud and have considered hitting the brakes on their cloud migrations, just when they should be pressing the gas pedal.

Last year we launched the Cloud Acceleration Program (CAP), which has significantly helped SAP customers speed their transitions to the cloud. This first-of-its-kind program empowers customers with solutions from both Google Cloud and our partners to simplify their cloud migrations. Google Cloud is also providing CAP participants with upfront financial incentives to defray infrastructure costs for SAP cloud migrations and help customers ensure that duplicate costs are not incurred during migration. Here's what customers are saying about the program:

"We had plans to migrate to the cloud, but COVID brought into sharp focus the need to accelerate our SAP migration to the cloud. With help from Google Cloud and their Cloud Acceleration Program, we were able to get the skills and the funding to accelerate this effort dramatically. With our new strategic relationship with Google Cloud, we feel significantly better positioned for the future to take advantage of the elastic, scalable computing capabilities and vast amounts of innovation that [are] constantly being developed." —Maneesh Gidwani, CIO of FIFCO, a global food, beverage, retail and hospitality organization

"The ability to leverage Managecore through the Cloud Acceleration Program dramatically reduced the risk and costs of our SAP migration to Google Cloud. The CAP program enabled Pegasystems to offset the upfront migration expense and significantly expedite our go-live process. With the help of Managecore we were able to focus on running our ERP business operations in the Cloud, rather than the technical elements of the project." —David Vidoni, Vice President of IT, Pegasystems, a CRM and BPM software developer company

Cloud Acceleration Program partners step up for SAP customers

Google Cloud's strong ecosystem of partners is stepping up to the plate more than ever to help customers de-risk their SAP migrations to the cloud. By completing their migrations faster and with minimal cost, customers are now shifting their conversations from concerns about infrastructure and deployment to higher-value topics such as optimizing costs and driving business value with analytics and machine learning tools.

"As one of the early partner participants in the Cloud Acceleration Program, we have been able to apply these significant resources to help multiple enterprise customers in their SAP cloud migration engagements. CAP allows HCL to efficiently get tools and resources to the customer to ease their migration risk concerns and costs.
Our customers are now engaging to drive strategic conversations on how they can leverage the SAP platform on Google Cloud to drive new insights, improve business KPIs, and create new business models with capabilities such as Google Cloud analytics and machine learning tools." —Sanjay Singh, Senior VP & Global Head, HCL Google Ecosystem Unit

"At NIMBL, we've seen both a great deal of interest in Google Cloud by our SAP customers as well as the significant results being realized by those who have deployed. A common concern for many other customers still on this journey, however, continues to be the overall disruption that a cloud migration may cause. Our migration expertise, combined with the industry-best tools and resources that the Cloud Acceleration Program (CAP) offers, helps provide customers with a clear and confident path to the cloud. As a CAP partner, Google Cloud continues to set us up for success with the resources and support we need to deliver these critical customer deployments." —Sergio Cipolla, Managing Partner, NIMBL Techedge Group

Google Cloud is a great place to run SAP

As pressures to transform increase for SAP enterprises, customers are looking to modernize on a smarter cloud. Google Cloud continues to be a great place to run SAP. Like no other program for these unprecedented times, our Cloud Acceleration Program gets customers one step closer by reducing the complexities of migration and the technical and financial risks. Contact us to learn more about Google Cloud for SAP.
Source: Google Cloud Platform

Comparing containerization methods: Buildpacks, Jib, and Dockerfile

As developers we work on source code, but production systems don't run source; they need a runnable thing. Many years ago, most enterprises were using Java EE (aka J2EE), and the runnable "thing" we would deploy to production was a ".jar", ".war", or ".ear" file. Those files consisted of the compiled Java classes and would run inside of a "container" running on the JVM. As long as your class files were compatible with the JVM and container, the app would just work.

That all worked great until people started building non-JVM stuff: Ruby, Python, NodeJS, Go, etc. Now we needed another way to package up apps so they could be run on production systems. To do this we needed some kind of virtualization layer that would allow anything to be run. Heroku was one of the first to tackle this, using a Linux virtualization system called "lxc", short for Linux Containers. Running a "container" on lxc was half of the puzzle, because a "container" still needed to be created from source code, so Heroku invented what they called "Buildpacks" to create a standard way to convert source into a container.

A bit later a Heroku competitor named dotCloud was trying to tackle similar problems and went a different route, which ultimately led to Docker, a standard way to create and run containers across platforms including Windows, Mac, Linux, Kubernetes, and Google Cloud Run. Ultimately the container specification behind Docker became a standard under the Open Container Initiative (OCI), and the virtualization layer switched from lxc to runc (also an OCI project). The traditional way to build a Docker container is built into the docker tool and uses a sequence of special instructions, usually in a file named Dockerfile, to compile the source code and assemble the "layers" of a container image.

Yeah, this is confusing, because we have all sorts of different "containers" and ways to run stuff in those containers, and there are also many ways to create the things that run in containers. The bit of history is important because it helps us categorize all of this into three parts:

Container Builders – Turn source code into a Container Image
Container Images – Archive files containing a "runnable" application
Containers – Run Container Images

With Java EE those three categories map to technologies like:

Container Builders == Ant or Maven
Container Images == .jar, .war, or .ear
Containers == JBoss, WebSphere, WebLogic

With Docker / OCI those three categories map to technologies like:

Container Builders == Dockerfile, Buildpacks, or Jib
Container Images == .tar files, usually not dealt with directly but through a "container registry"
Containers == Docker, Kubernetes, Cloud Run

Java Sample Application

Let's explore the Container Builder options further on a little Java server application. If you want to follow along, clone my comparing-docker-methods project:

git clone https://github.com/jamesward/comparing-docker-methods.git
cd comparing-docker-methods

In that project you'll see a basic Java web server in src/main/java/com/google/WebApp.java that just responds with "hello, world" on a GET request to /.
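Here is a minimal sketch of such a server, using only the JDK's built-in HTTP server; the actual WebApp.java in the repository may be written differently, but it captures the same idea.

```java
package com.google;

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A tiny "hello, world" HTTP server listening on port 8080.
public class WebApp {
  public static void main(String[] args) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
    server.createContext("/", exchange -> {
      byte[] body = "hello, world".getBytes(StandardCharsets.UTF_8);
      exchange.sendResponseHeaders(200, body.length);
      try (OutputStream out = exchange.getResponseBody()) {
        out.write(body);
      }
    });
    server.start();
  }
}
```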
This project uses Maven with a minimal pom.xml build config file for compiling and running the Java server. If you want to run this locally, make sure you have Java 8 installed, and from the project root directory, run:

./mvnw compile exec:java

You can test the server by visiting http://localhost:8080.

Container Builder: Buildpacks

We have an application that we can run locally, so let's get back to those Container Builders. Earlier you learned that Heroku invented Buildpacks to create standard, polyglot ways to go from source to a Container Image. When Docker / OCI containers started gaining popularity, Heroku and Pivotal worked together to make their Buildpacks work with Docker / OCI containers. That work is now a sandbox Cloud Native Computing Foundation project: https://buildpacks.io/

To use Buildpacks you will need to install Docker and the pack tool. Now from the command line, tell Buildpacks to take your source and turn it into a Container Image:

pack build --builder=gcr.io/buildpacks/builder:v1 comparing-docker-methods:buildpacks

Magic! You didn't have to do anything, and the Buildpacks knew how to turn that Java application into a Container Image. It even works on Go, NodeJS, Python, and .NET apps out of the box. So what just happened? Buildpacks inspect your source and try to identify it as something they know how to build. In the case of our sample application, they noticed the pom.xml file and decided they know how to build Maven-based applications. The --builder flag told pack where to get the Buildpacks from. In this case, gcr.io/buildpacks/builder:v1 is the Container Image coordinates of Google Cloud's Buildpacks. Alternatively, you could use the Heroku or Paketo Buildpacks. The parameter comparing-docker-methods:buildpacks is the Container Image coordinates for where to store the output; in this case it is stored in the local docker daemon. You can now run that Container Image locally with docker:

docker run -it -ePORT=8080 -p8080:8080 comparing-docker-methods:buildpacks

Of course, you can also run that Container Image anywhere that runs Docker / OCI containers, like Kubernetes and Cloud Run.

Buildpacks are nice because in many cases they just work and you don't have to do anything special to turn your source into something runnable. But the resulting Container Images created from Buildpacks can be a bit bulky. Let's use a tool called dive to examine what is in the created container image:

dive comparing-docker-methods:buildpacks

Here you can see the Container Image has 11 layers and a total image size of 319MB. With dive you can explore each layer and see what was changed. In this Container Image, the first 6 layers are the base operating system, layer 7 is the JVM, and layer 8 is our compiled application. Layering enables great caching: if only layer 8 changes, then layers 1 through 7 do not need to be re-downloaded. One downside of Buildpacks is how (at least for now) all of the dependencies and compiled application code are stored in a single layer. It would be better to have separate layers for the dependencies and the compiled application.

To recap, Buildpacks are the easy option that "just works" right out of the box, but the Container Images are a bit large and not optimally layered.

Container Builder: Jib

The open source Jib project is a Java library for creating Container Images, with Maven and Gradle plugins.
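For Maven, the integration is a single plugin declaration; a sketch of what goes into the pom.xml's build plugins section (the version number shown is only an example; use a current release):

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>2.5.2</version>
</plugin>
```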
To use it on a Maven project (like the one above), you just add that build plugin to the pom.xml file. Now a Container Image can be created and stored in the local docker daemon by running:

./mvnw compile jib:dockerBuild -Dimage=comparing-docker-methods:jib

Using dive, we can see that the Container Image for this application is now only 127MB, thanks to slimmer operating system and JVM layers. Also, on a Spring Boot application we can see how Jib layers the dependencies, resources, and compiled application for better caching: in that example, an 18MB layer contains the runtime dependencies and the final layer contains the compiled application. Unlike with Buildpacks, the original source code is not included in the Container Image. Jib also has a great feature where you can use it without docker being installed, as long as you store the Container Image in an external Container Registry (like DockerHub or the Google Cloud Container Registry). Jib is a great option with Maven and Gradle builds for Container Images that use the JVM.

Container Builder: Dockerfile

The traditional way to create Container Images is built into the docker tool and uses a sequence of instructions defined in a file usually named Dockerfile. In the Dockerfile for the sample Java application, the first four instructions start with the AdoptOpenJDK 8 Container Image and build the source to a Jar file. The final Container Image is created from the AdoptOpenJDK 8 JRE Container Image and includes the created Jar file. You can run docker to create the Container Image using the Dockerfile instructions:

docker build -t comparing-docker-methods:dockerfile .

Using dive, we can see a pretty slim Container Image at 209MB. With a Dockerfile we have full control over the layering and base images. For example, we could use the Distroless Java base image to trim down the Container Image even further. This method of creating Container Images provides a lot of flexibility, but we do have to write and maintain the instructions.

With this flexibility we can do some cool stuff. For example, we can use GraalVM to create a "native image" of our application. This is an ahead-of-time compiled binary which can reduce startup time, reduce memory usage, and remove the need for a JVM in the Container Image. We can go even further and create a statically linked native image which includes everything needed to run, so that not even an operating system is needed in the Container Image. The project includes a Dockerfile for this as well: there is a bit of setup needed to support static native images, after which the Jar is compiled like before with Maven, and then the native-image tool creates the binary from the Jar. The FROM scratch instruction means the final container image starts out empty, and the statically linked binary created by native-image is then copied into that empty container.

Like before, you can use docker to build the Container Image:

docker build -t comparing-docker-methods:graalvm .

Using dive, we can see the final Container Image is only 11MB! And it starts up super fast because we don't need the JVM, OS, etc. Of course, GraalVM is not always a great option, as there are some challenges like dealing with reflection and debugging. You can read more about this in my blog, GraalVM Native Image Tips & Tricks. This example does capture the flexibility of the Dockerfile method and the ability to do anything you need.
It is a great escape hatch when you need one.

Which Method Should You Choose?

The easiest, polyglot method: Buildpacks
Great layering for JVM apps: Jib
The escape hatch for when those methods don't fit: Dockerfile

Check out my comparing-docker-methods project to explore these methods as well as the mentioned Spring Boot + Jib example.

Related Article: Announcing Google Cloud buildpacks—container images made easy. Google Cloud buildpacks make it much easier and faster to build applications on top of containers.
Source: Google Cloud Platform

How one telecom reimagined IT to better serve customers with more personalized content

Telecommunications providers are under a lot of pressure. Customers increasingly expect one provider to meet all their telephony, digital entertainment, and broadband needs. And as the choice of providers increases and switching costs decrease, it's harder than ever to create and maintain loyalty. At UPC Polska, we know this challenge all too well. As a leading telecommunications provider in Poland, we serve 1.5 million customers with 3 million services each day, via an IT infrastructure built over the past 20 years. While we still run several business-critical applications on premises, it became increasingly clear to us that we could not develop, test, and deploy new features fast enough in our existing environment. As a result, we came to a stark realization: we had to transform our IT infrastructure to accelerate our feature release process, or risk losing customers. After considering several options, we selected Google Cloud's Anthos because it offered a uniform management experience across our hybrid environment and easy application modernization. We wanted to implement Anthos as soon as possible, but also knew we needed an experienced global systems integrator to help us do so securely and effectively. As a result, we turned to Accenture, who helped us complete the project in just six weeks.

Blending cultural and technology transformation

Our customer service application allows us to build highly personalized relationships with over a million customers. Since rapid feature releases are critical to our commercial success, that application was one of the first targets for modernization with Anthos. Accenture came in, worked hard to understand our architecture, and provided the cloud-first strategy and assurance we needed to confidently migrate the app to the new hybrid environment. With the support of Google Cloud and Accenture, our team embraced the shift in management and development models from Waterfall to Agile. Although this was a difficult transition due to significant technological and infrastructure shifts and changes in processes, responsibilities, and ways of working, it ultimately increased speed to market on new features.

To help ensure success for our DevOps team with this new approach, we deployed Anthos in our on-premises data centers. With Anthos, we can uniformly set policy at scale and continuously release features without worrying about security, vulnerability management, or downtime across different environments. Our developers can then focus on writing better code, while operators use Anthos to effectively manage and run those applications anywhere. Accenture further drove the cloud-first DevOps culture shift we needed to make this all work, through training and support that quickly got our staff up to speed.

The biggest advantage of working with Google Cloud and Accenture to deploy Anthos has been increased engagement among our staff. Teams are working passionately to achieve as much as possible because they can now focus on their core responsibilities rather than infrastructure management. Anthos helps us control which workloads, features, and data go into the cloud, and which are better suited for our on-premises infrastructure.
Anyone working on this project today at UPC Polska would tell you that Anthos gives us the best of both worlds—the agility and speed of the cloud along with the power and comfort of still being able to use our traditional on-premises infrastructure. With the incredible collaboration between our team, Accenture, and Google Cloud, we have the development, testing, and production clusters we need integrated into our Agile development process. Now, both developers and operators enjoy increased scalability, stronger system resiliency, and more knowledge about containers.

Making efforts count

Everything we have done with Accenture and Google Cloud is driven by our commitment to creating, delivering, and improving the quality of services we offer to our 1.5 million customers. Personalization at that scale can be challenging, even with all the right technologies and DevOps strategies in place. Luckily, we have an impressive team and plenty of support through Google Cloud and Accenture. With our IT infrastructure and culture working together as part of a more Agile model powered by Anthos, the sky's the limit for our personalization efforts, which frees us to dream up more ways to serve our customers. For example, we're exploring projects like Software Defined Access Networks, cloud-based CRM, more personalized customer experiences, smart home technology, integrations between mobile and fixed networks, and an ever-growing portfolio of content and services. As we enter this new and fast-paced time in UPC Polska's history, we look forward to working with Accenture and Google Cloud to better serve our customers.

Read the full case study to learn more about how UPC partnered with Google Cloud and Accenture on this project.
Source: Google Cloud Platform

What you can learn in our Q4 2020 Google Cloud Security Talks

2020 has brought with it some tremendous innovations in the area of cloud security. As cloud deployments and technologies have become an even more central part of organizations' security programs, we hope you'll join us for the latest installment of our Google Cloud Security Talks, a live online event on November 18th, where we'll help you navigate the latest thinking in cloud security. We'll share expert insights into our security ecosystem and cover the following topics:

Sunil Potti and Rob Sadowski will open the digital event with our latest Google Cloud security announcements.

This will be followed by a panel discussion with Dave Hannigan and Jeanette Manfra from Google Cloud's Office of the CISO on how cloud migration is a unique opportunity to dismantle the legacy security debt of the past two decades.

Kelly Waldher and Karthik Lakshminarayan will talk about the new Google Workspace and how it can enable users to access data safely and securely while preserving individual trust and privacy.

We will present our vision of network security in the cloud with Shailesh Shukla and Peter Blum, covering the recent innovations that are making network security in the cloud powerful but invisible, protecting infrastructure and users from cyber attacks.

Sam Lugani and Ibrahim Damlaj will do a deeper dive on Confidential Computing, and more specifically Confidential GKE Nodes and how they can add another layer of protection for containerized workloads.

You will also learn, with Kathryn Shih and Timothy Peacock, how Security Command Center can help you identify misconfigurations in your virtual machines, containers, network, storage, and identity and access management policies, as well as vulnerabilities in your web applications.

Anton Chuvakin and Seth Vargo will talk about the differences between key management and secret management to help you choose the best security controls for your use cases.

Finally, we will host the Google Cloud Security Showcase, a special segment where we'll focus on a few security problems and show how we've recently helped customers solve them using the tools and products that Google Cloud provides.

We look forward to sharing our latest security insights and solutions with you. Sign up now to reserve your virtual seat.
Source: Google Cloud Platform

Stateful serverless on Google Cloud with Cloudstate and Akka Serverless

In recent years, stateless middle tiers have been touted as a simple way to achieve horizontal scalability. But the rise of microservices has pushed the limits of the stateless architectural pattern, causing developers to look for alternatives.

Stateless middle tiers have been a preferred architectural pattern because they helped with horizontal scaling by alleviating the need for server affinity (aka sticky sessions). Server affinity made it easy to hold data in the middle tier for low-latency access and easy cache invalidation. The stateless model pushed all "state" out of the middle tier into backing data stores. In reality, the stateless pattern just moved complexity and bottlenecks to that backing data tier. The growth of microservice architectures exacerbated the problem by putting more pressure on the middle tier, since technically, microservices should only talk to other services and not share data tiers. All manner of bailing wire and duct tape has been employed to overcome the challenges introduced by these patterns. New patterns are now emerging which fundamentally change how we compose a system from many services running on many machines.

To take an example, imagine you have a fraud detection system. Traditionally, the transactions would be stored in a gigantic database, and the only way to perform some analysis on the data would be to periodically query the database, pull the necessary records into an application, and perform the analysis. But these systems do not partition or scale easily, and they lack the ability to do real-time analysis. So architectures shifted to more of an event-driven approach, where transactions were put onto a bus from which a scalable fleet of event-consuming nodes could pull them. This approach makes partitioning easier, but it still relies on gigantic databases that receive a lot of queries. Thus, event-driven architectures often ran into challenges with multiple systems consuming the same events at different rates.

Another (we think better) approach is to build an event-driven system that co-locates partitioned data in the application tier, while backing the event log in a durable external store. To take our fraud detection example, this means a consumer can receive transactions for a given customer, keep those transactions in memory for as long as needed, and perform real-time analysis without having to perform an external query. Each consumer instance receives a subset of commands (i.e., add a transaction) and maintains its own "query" / projection of the accumulated state. By separating commands and queries we can easily achieve end-to-end horizontal scaling, fault tolerance, and microservice decoupling. And with the data being partitioned in the application tier, we can easily scale that tier up and down based on the number of events or size of data, achieving serverless operations.

Making it work with Cloudstate

This architecture is not entirely uncommon, going by the names Event Sourcing, Command Query Responsibility Segregation (CQRS), and Conflict-Free Replicated Data Types. (Note: for a great overview of this, see the presentation "Cloudstate – Towards Stateful Serverless" by Jonas Bonér.) But until now, it's been pretty cumbersome to build systems with these architectures due to primitive programming and operational models.
The new Cloudstate open-source project attempts to change that by building more approachable programming and operational models. Cloudstate's programming model is built on top of protocol buffers (protobufs), which enable evolvable data schemas and generated service interaction stubs. When it comes to data schemas, protobufs allow you to add fields to event / message objects without breaking systems that are still using older versions of those objects. Likewise, with the gRPC project, protobufs can be automatically wrapped with client and server "stubs" so that no code needs to be written for handling protobuf-based network communication. For example, in the fraud detection system, the protobuf defines a Transaction message that contains the details about a transaction, and its user_id field enables automatic sharding of data based on the user (a sketch appears at the end of this post).

Cloudstate adds support for event sourcing on top of this foundation, so developers can focus on just the commands and accumulated state that a given component needs. For our fraud detection example, we can simply define a class / entity to hold the distributed state and handle each new transaction. You can use any language; we use Kotlin, a concise JVM language that interoperates with Java. With the exception of a tiny bit of bootstrapping code, that's all you need to build an event-sourced system with Cloudstate!

The operational model is just as delightful, since it is built on Kubernetes and Knative. First you need to containerize the service. For JVM-based builds (Maven, Gradle, etc.) you can do this with Jib; in our example we use Gradle, and a single Jib build task creates a container image for the service and stores it on the Google Container Registry. To run the Cloudstate service on your own Kubernetes / Google Kubernetes Engine (GKE) cluster, you can use the Cloudstate operator and a short deployment descriptor. There you have it—a scalable, distributed event-sourced service! And if you'd rather not manage your own Kubernetes cluster, you can also run your Cloudstate service in the Akka Serverless managed environment, provided by Lightbend, the company behind Cloudstate. Deploying the Cloudstate service to the Akka Serverless managed environment is a single CLI command. It's that easy! There is also a video that walks through the full fraud detection sample, and you can find the source for the sample on GitHub: github.com/jamesward/cloudstate-sample-fraud

Akka Serverless under the hood

As an added bonus, Akka Serverless itself is built on Google Cloud. To deliver this stateful serverless cloud service on Google Cloud, Cloudstate needs a distributed, durable store for messages. With open-source Cloudstate you can use PostgreSQL or Apache Cassandra; the managed Akka Serverless service is built on Google Cloud Spanner due to its global scale and high throughput. Lightbend also chose to build their workload execution on GKE to take advantage of its autoscaling and security features.

Together, Lightbend and Google Cloud have many shared customers who have built modern, resilient, and scalable systems with Lightbend's open source technologies and Google Cloud services. So we are excited that Cloudstate brings together Lightbend and Google Cloud, and we look forward to seeing what you will build with it! To get started, check out the open source Cloudstate project and Lightbend's Akka Serverless managed cloud service.
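For reference, the Transaction protobuf described in the fraud detection example might look something like the sketch below. Only the user_id field is called out in the post; the remaining fields are hypothetical stand-ins for "the details about a transaction", and the real sample's .proto may differ.

```protobuf
syntax = "proto3";

package fraud;

// Transactions carry a user_id so state can be sharded by user:
// all transactions for the same user are routed to the same entity instance.
message Transaction {
  string user_id = 1;
  string transaction_id = 2;
  double amount = 3;
  string merchant = 4;
  int64 timestamp_millis = 5;
}
```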
Source: Google Cloud Platform

Preparing Google Cloud deployments for Docker Hub pull request limits

Docker Hub is a popular registry for hosting public container images. Earlier this summer, Docker announced it will begin rate-limiting the number of pull requests to the service by "Free Plan" users. For pull requests by anonymous users this limit is now 100 pull requests per 6 hours; authenticated users have a limit of 200 pull requests per 6 hours. When the new rate limits take effect on November 1st, they might disrupt your automated build and deployment processes on Cloud Build, or how you deploy artifacts to Google Kubernetes Engine (GKE), Cloud Run, or App Engine Flex from Docker Hub.

This situation is made more challenging because, in many cases, you may not be aware that a Google Cloud service you are using is pulling images from Docker Hub. For example, if your Dockerfile has a statement like "FROM debian:latest" or your Kubernetes Deployment manifest has a statement like "image: postgres:latest", it is pulling the image directly from Docker Hub. To help you identify these cases, Google Cloud has prepared a guide with instructions on how to scan your codebase and workloads for container image dependencies from third-party container registries, like Docker Hub.

We are committed to helping you run highly reliable workloads and automation processes. In the rest of this blog post, we'll discuss how these new Docker Hub pull rate limits may affect your deployments running on various Google Cloud services, and strategies for mitigating any potential impact. Be sure to check back often, as we will update this post regularly.

Impact on Kubernetes and GKE

One of the groups that may see the most impact from these Docker Hub changes is users of managed container services. As it does for other managed Kubernetes platforms, Docker Hub treats GKE as an anonymous user by default. This means that unless you are specifying Docker Hub credentials in your configuration, your cluster is subject to the new throttling of 100 image pulls per six hours, per IP. And many Kubernetes deployments on GKE use public images; in fact, any container name that doesn't have a container registry prefix such as gcr.io is pulled from Docker Hub. Examples include nginx and redis.

Container Registry hosts a cache of the most requested Docker Hub images from Google Cloud, and GKE is configured to use this cache by default. This means that the majority of image pulls by GKE workloads should not be affected by Docker Hub's new rate limits. Furthermore, to remove any chance that your images would not be in the cache in the future, we recommend that you migrate your dependencies into Container Registry, so that you can pull all your images from a registry under your control.

In the interim, to verify whether or not you are affected, you can generate a list of the Docker Hub images your cluster consumes (one way to do this is sketched at the end of this post). You may also want to know whether the images you use are in the cache; the cache changes frequently, but you can check for currently cached images with a simple command. It is impractical to predict cache hit rates, especially in times when usage will likely change dramatically. However, we are increasing cache retention times to ensure that most images that are in the cache stay in the cache.

GKE nodes also have their own local disk cache, so when reviewing your usage of Docker Hub, you only need to count the number of unique image pulls (of images not in our cache) made from GKE nodes. For private clusters, consider the total number of such image pulls across your cluster (as all image pulls will be routed via a single NAT gateway).
For public clusters you have a bit of extra breathing room, as you only need to consider the number of unique image pulls on a per-node basis. For public nodes, you would need to churn through more than 100 unique public uncached images every 6 hours to be impacted, which is fairly uncommon. If you determine that your cluster may be impacted, you can authenticate to Docker Hub by adding imagePullSecrets with your Docker Hub credentials to every Pod that references a container image on Docker Hub. While GKE is one of the Google Cloud services that may see an impact from the Docker Hub rate limits, any service that relies on container images may be affected, including Cloud Build, Cloud Run, and App Engine.

Finding the right path forward

Upgrade to a paid Docker Hub account

Arguably, the simplest—but most expensive—solution to Docker Hub's new rate limits is to upgrade to a paid Docker Hub account. If you choose to do that and you use Cloud Build, Cloud Run on Anthos, or GKE, you can configure the runtime to pull with your credentials. Below are instructions for how to configure each of these services:

Cloud Build: Interacting with Docker Hub images
Cloud Run on Anthos: Deploying private container images from other container registries
Google Kubernetes Engine: Pull an Image from a Private Registry

Switch to Container Registry

Another way to avoid this issue is to move any container artifacts you use from Docker Hub to Container Registry. Container Registry stores images as Google Cloud Storage objects, allowing you to incorporate container image management as part of your overall Google Cloud environment. More to the point, opting for a private image repository for your organization puts you in control of your software delivery destiny. To help you migrate, the above-mentioned guide also provides instructions on how to copy your container image dependencies from Docker Hub and other third-party container image registries to Container Registry. Please note that these instructions are not exhaustive—you will have to adjust them based on the structure of your codebase. Additionally, you can use Managed Base Images, which are automatically patched by Google for security vulnerabilities using the most recent patches available from the upstream project (for example, GitHub). These images are available in the GCP Marketplace.

Here to help you weather the change

The new rate limits on Docker Hub pull requests will have a swift and significant impact on how organizations build and deploy container-based applications. In partnership with the Open Container Initiative (OCI), a community devoted to open industry standards around container formats and runtimes, we are committed to ensuring that you weather this change as painlessly as possible.
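As a starting point for the cluster inventory mentioned earlier, here is a sketch of one way to list the container images your running Pods reference (adapt as needed; this version ignores initContainers and anything not currently scheduled):

```sh
# List every container image referenced by running Pods, de-duplicated.
# Images with no registry host in their name (for example nginx or redis)
# are pulled from Docker Hub.
kubectl get pods --all-namespaces \
  -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s ' ' '\n' | sort -u
```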
Source: Google Cloud Platform

IKEA: Creating a more affordable, accessible and sustainable future with help from the cloud

Editor's note: Today we hear from Barbara Martin Coppola, Chief Digital Officer at IKEA Retail (Ingka Group). Barbara chats about how cloud technologies helped IKEA respond to COVID-19, and what new connected customer experiences lie ahead.

A better home creates a better life

We are here to create a better everyday life for many people with big dreams, big needs and thin wallets. Today's life at home is more important than ever, not only to accommodate people's basic needs, but also to make space for home offices, remote education and multi-purpose entertainment and exercise environments. People are looking for products and services that offer value for money, that are convenient and easily available. Consumers are increasingly connecting with brands and companies that are making a positive impact and contributing to the environment. Life at home has never been as important as it is today, and IKEA is determined to create a more affordable, accessible and sustainable future for all.

It goes without saying that the pandemic has affected societies and communities at large. During these times, people are looking for different ways to shop and have their items delivered. Online shopping has reached new heights, with experienced online shoppers buying more than ever before and new shoppers entering the online space for the very first time. During lockdowns, many of our IKEA stores catered to customers online only, leading to increased levels of growth in e-commerce and an acceleration of our digital transformation. Things that would normally take years or months were accomplished within weeks and days.

An adaptation strategy was important for our business whilst undergoing this period of change. We transformed our current technology infrastructure, converted our closed stores into fulfilment centers and enabled contactless Click & Collect services, whilst increasing the capacity to manage large web traffic volumes and online orders. By using Google Cloud, among other key serverless technologies, we were able to instantly scale our business globally, on the web and in our stores.

With the use of technology, we focused on taking care of co-workers as our first priority. We modified ways of working and engineered a solution where IKEA staff could borrow equipment online for a home office set-up. We empowered employees with data and digital tools, automating routine tasks, building advanced algorithms to solve complex problems, placing more modern technology in stores and designing additional self-serve tools. Through cloud technology we trained our data models to assist our co-workers, creating more efficient picking routes, which in turn enriched our customer experience.

During this time, we have also committed to accelerating our investments towards a sustainable business. We will invest EUR 600 million into companies, solutions and our own operations to enable the transition to a net-zero carbon economy. As part of that journey, our goal is to use digital tools to help enable circularity across our value chain. We believe that doing good business is good business—both for us and for our planet.

Fulfilling customer needs for the future

With a growth mindset, we'll continue to listen, learn and adapt our business to meet our customers where they are. We want to create an experience unlike any other, with the uniqueness of IKEA at the core.
We are currently working on better fulfilling customer needs using recommendations through AI, chatbots for simpler and better customer service, and 3D visualization design tools to picture furniture in photorealistic rooms. We want to show that IKEA can truly touch every customer around the globe with home furnishing products that provide an unforgettable everyday life at home experience. To learn more, please tune into my fireside discussion with Eva Fors, Managing Director, Nordics, Google Cloud, about innovation, accelerating omni-channel capabilities, diversity and inclusion, and more.

Ingka Group recently acquired Geomagical Labs, a spatial computing company developing mixed reality (real + virtual) experiences to support consumer needs. Using an ordinary smartphone, consumers will be able to quickly capture their spaces and virtually play with IKEA products in their rooms before purchasing them. Discover more here.

The IKEA logo and the IKEA wordmark are registered trademarks of Inter IKEA Systems B.V.
Source: Google Cloud Platform