How SAP users are achieving retail transformation with Google Cloud

The retail industry is in the midst of a transformation. Online commerce has emerged as a force to be reckoned with, commanding close to $6 trillion in market opportunity by 2022. With so much at stake, nearly half of all retailers are looking to the cloud to improve the omnichannel customer experience and retail store performance. And retailers utilizing SAP solutions are no exception: 75% of retailers surveyed in the Americas’ SAP Users Group (ASUG) whitepaper expressed plans to increase digital investments by at least 10% over the next two years in order to accelerate digital transformation. Of those surveyed, 1 in 4 intend to increase investments significantly, by 50% or more.

Retailers know what they need to offer to evolve today: a customer-focused, data-driven, seamless experience. But that journey is filled with technological roadblocks that are leaving even the largest retailers in limbo. For retailers innovating with SAP technologies, these roadblocks can make it difficult to migrate, deploy, and run new software that is expensive and challenging to scale on legacy, on-premises infrastructure. Central to making the transformation journey a success is leveraging the public cloud and choosing the right cloud service provider (CSP); not all clouds are created equal. Here at Google Cloud, we’ve helped SAP customers and retailers achieve transformation success by:

Giving customers a simplified cloud journey with access to our Cloud Acceleration Program (CAP) and our robust partner community.
Helping to accelerate innovation with industry-leading advanced analytics and AI/ML tools.
Providing a scalable and elastic infrastructure to rightsize your applications and instances.
Minimizing downtime through automated infrastructure maintenance with our Live Migration offering.

Let’s take a look at how three retailers using SAP on Google Cloud were able to face their technology challenges head-on and bring their visions for digital transformation to life.

Omnichannel: MediaMarktSaturn’s road to customer-centricity

Customers in the digital age expect personalized, seamless omnichannel experiences, from browsing online or via mobile to in-store shopping and checkout. Most retailers are eager to deliver on this expectation, especially with rising technologies like AI, ML, and predictive analytics promising seamless omnichannel experiences. But contrast retail’s future tech landscape with today’s reality: 75% of SAP retail solution customers who participated in our recent ASUG study qualify as digital newcomers that are still in the early stages of transformation. To successfully offer personalized, customer-centric omnichannel experiences, retailers must generate customer insights in real time. However, this requires massive compute resources that are beyond the capabilities of most current on-premises infrastructures leveraging SAP.

MediaMarktSaturn Retail Group, one of the world’s leading consumer electronics retailers, recently encountered data pipeline challenges that prevented the company from modernizing its omnichannel and retail strategies. MediaMarktSaturn was looking to unify its large data sets and infrastructure across its SAP solutions to generate accurate and relevant insights for both its business and its customers.
However, MediaMarktSaturn’s legacy hardware infrastructure was not only incapable of handling the data volumes required to realize its omnichannel goal, but also unable to scale up and back down again to accommodate varying levels of traffic without disruption. To overcome these technical and infrastructural hurdles, MediaMarktSaturn chose Google Cloud to help modernize its SAP workloads and migrate them into the cloud. Together with Google Cloud, MediaMarktSaturn decided to leverage Google Kubernetes Engine (GKE), BigQuery, and Bigtable to store, mine, cleanse, and analyze data to generate real-time, personalized insights that would better serve customers across all channels. The effort has so far yielded a 30% increase in conversion rates, thanks to optimized search technology and high-performance data handling. Looking to the future, and equipped with the tools to modernize its retail strategy, MediaMarktSaturn has started to build analytics tools that explore price elasticity and price prediction based on multiple variables.

Store operations: How Loblaw is delighting customers with seamless experiences

Building on the omnichannel experience, retailers are also rapidly modernizing store operations, outpacing the agility of their on-premises SAP infrastructure. With optimized express checkout, on-shelf and intelligent inventory management, and dynamic assortment planning on the retail tech horizon, it’s becoming increasingly critical that retail businesses have the foundation to build, test, and deploy the emerging technologies they need to compete. Retailers that delay infrastructure modernization in favor of layering new swaths of code on top of legacy systems risk creating a highly complex, tightly coupled, and unscalable monolith that’s prone to downtime and data inaccuracies.

Loblaw, Canada’s food and pharmacy leader and the nation’s largest retailer, recently encountered data pipeline issues similar to those at MediaMarktSaturn while leveraging SAP Hybris in traditional on-prem environments. Its goal was to enable personalized product recommendations on its ecommerce platforms, but the technology was missing the mark: both the quality of suggestions and response latency had room for improvement. Loblaw also wanted to enable marketers to run promotions at any time, without requiring conversations with IT to prepare ecommerce systems.

Loblaw decided to leverage the public cloud because achieving its vision on-premises would have required expanding its data centers and creating dedicated IT maintenance and operations teams. Rather than investing even more resources to support dated, inflexible technology, Loblaw picked Google Cloud:

“We thought, ‘Why don’t we offload all that effort to someone who’s doing it at scale, making the appropriate investments, and staying ahead in technology so that we can really focus our efforts on driving value to the customer,’ ” says Hesham Fahmy, Vice President of Technology at Loblaw.

The first phase of Loblaw’s migration to the cloud involved its online grocery store, QuickShop, which leverages transaction data from SAP Hybris. Google Cloud offers a certified infrastructure for SAP Hybris, removing the administrative burden of creating an architectural foundation for modernization. Loblaw also uses BigQuery to run real-time analysis of customer data across the buying lifecycle to serve customers with more relevant offers.
As a result of the partnership between Google Cloud and SAP, Loblaw has experienced a four-fold improvement in QuickShop’s performance, a three-fold increase in site capacity, and a 50% time savings for its site reliability engineers, allowing the company to focus on further innovations in customer experience.

Logistics, fulfillment, and delivery: MultiPharma’s path to serving customer needs with automated warehouses

They may not get as much attention, but back-end operations are critical to retail success. Real-time, accurate, automated warehouse management is one of those workloads. From robotics and RFID tagging to on-demand inventory management, warehousing systems require a vast amount of data from across a retailer’s ecosystem, both online and in-store. Much like developing omnichannel and store operations innovations, modernizing a company’s warehousing can strain legacy, on-prem infrastructure, causing inaccuracies, downtime, and unfulfilled orders.

For pharmaceutical retailer MultiPharma, a key value proposition is prompt delivery of medication orders to pharmacists, even during periods of high demand. This required heavy investments in warehouse distribution, robotics, and automation, technologies that need scalable, elastic, and extensible infrastructure. MultiPharma originally satisfied this need with a legacy back-end SAP system and its own private cloud. But issues with cost and flexibility prompted the company to leverage SAP HANA and move to the public cloud. While the company considered several cloud service providers, MultiPharma selected Google Cloud for its superior VM solutions, flexible sizing, and pricing structures.

MultiPharma phased the migration of its SAP workloads into Google Cloud, starting with a development environment where teams could conduct agile testing before finalizing the production environment. Within this first phase, MultiPharma is already reaping benefits, including greater flexibility and freed-up resources that allow it to concentrate on further business innovations, such as optimizing ecommerce and customer-facing applications.

As the retail industry continues to transform, retailers that embrace cloud technologies are increasingly positioned to take advantage of emerging opportunities. But for increased investments in digital transformation to pay off, retailers leveraging SAP need to ensure their infrastructure and data pipelines are ready for upcoming innovations. Although many enterprises may be tempted to solve this challenge temporarily by layering software on legacy, on-prem architecture, doing so almost guarantees an inflexible, unscalable, inelastic, and costly monolith incapable of continuous modernization. Like MediaMarktSaturn, Loblaw, and MultiPharma, forward-thinking retailers should consider leveraging the cloud’s many offerings and managed services, not only to remove the burden of infrastructure and data development and maintenance, but also to get the best performance from their SAP and technology investments.

To learn more about Google Cloud’s work with retailers utilizing SAP technologies and get key takeaways, read “Google Cloud Strategy Guide: 5 Learnings for Your SAP Retail Workloads.” You can also learn more about our SAP and retail industry solutions.
Source: Google Cloud Platform

Building more secure data pipelines with Cloud Data Fusion

For those of you working in data analytics, ETL and ELT pipelines are an important piece of your data foundation. Cloud Data Fusion is our fully managed data integration service for quickly building and managing data pipelines. Cloud Data Fusion is built on the open source project CDAP, and this open core lets you build portable data pipelines.

A CDAP server might satisfy your need to run a few simple data pipelines. But when it comes to securing a larger number of business-critical data pipelines, you’ll often need to put a lot more effort into logging and monitoring those pipelines. You will also need to manage authentication and authorization to protect that data when you have servers running workloads for multiple teams and environments. These additional services can require a lot of maintenance effort from your operations team and take time away from development. The goal is running pipelines, not logging, monitoring, or the identity and access management (IAM) service.

We designed Cloud Data Fusion to take care of most of this work for you. And since it’s part of Google Cloud, you can take advantage of built-in security benefits when using Cloud Data Fusion rather than self-managed CDAP servers:

Cloud-native security control with Cloud IAM: identity management and authentication efforts are taken care of by Cloud Identity.
Full observability with Stackdriver Logging and Monitoring: logs include pipeline logs and audit logs.
Reduced exposure to the public internet with private networking.

Let’s take a look at these features in detail.

Access control with Cloud IAM

The number one reason to use Cloud Data Fusion over self-managed CDAP servers is that it integrates seamlessly with Cloud IAM, which lets you control access to your Cloud Data Fusion resources. With Cloud IAM, Cloud Data Fusion is able to easily integrate with other Google Cloud services. You can also use Cloud Identity for user and group management and authentication (such as multi-factor authentication), instead of implementing or deploying your own.

There are two predefined roles in Cloud Data Fusion: admin and viewer. Following the IAM principle of least privilege, the admin role should only be assigned to users who need to manage (create and delete) instances, and the viewer role to users who only need to access instances, not manage them. Both roles can access the Cloud Data Fusion web UI to create pipelines and plugins.

Whenever possible, assign roles and permissions to groups of users rather than directly to individual users. This helps you control access to Cloud Data Fusion resources in a more organized manner, especially when you assign the same permissions repeatedly across multiple projects. Read more about the two Cloud Data Fusion roles and their corresponding permissions.
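As a concrete illustration, here is a minimal sketch of granting the predefined viewer role to a group rather than to individual users, using the Resource Manager API through the google-api-python-client library. The project ID and group address are hypothetical, and the sketch assumes Application Default Credentials are configured.

```python
# Minimal sketch: grant the predefined Cloud Data Fusion viewer role to a
# group, following the least-privilege guidance above. Assumes
# google-api-python-client is installed and Application Default
# Credentials are available; project and group names are hypothetical.
from googleapiclient import discovery

PROJECT_ID = "my-retail-project"            # hypothetical project
GROUP = "group:data-analysts@example.com"   # hypothetical group

crm = discovery.build("cloudresourcemanager", "v1")

# Read-modify-write the project's IAM policy.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
policy.setdefault("bindings", []).append(
    {"role": "roles/datafusion.viewer", "members": [GROUP]}
)
crm.projects().setIamPolicy(
    resource=PROJECT_ID, body={"policy": policy}
).execute()
```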
Private IP instance

A private IP instance of Cloud Data Fusion connects with your Virtual Private Cloud (VPC) privately. Traffic over this network does not traverse the public internet, which reduces the potential attack surface. You can learn more about setting up private IP for Cloud Data Fusion.

VPC Service Controls

We’re also announcing beta support for VPC Service Controls (VPC-SC) in Cloud Data Fusion. You can now prevent data exfiltration by adding a Cloud Data Fusion instance to your service perimeter. When configured with VPC-SC, any pipeline that reads data from within the perimeter will fail if it tries to write that data outside the service perimeter.

Stackdriver Logging

Stackdriver Logging and Monitoring are disabled by default in Cloud Data Fusion, but we recommend you enable these tools for observability. With the extra information provided by the logs and metrics, you can not only investigate and respond to incidents faster, but also understand how to manage your particular infrastructure and workloads more effectively in the long run. Several kinds of logs can help you run your Cloud Data Fusion pipelines better.

Pipeline logs

These are generated by your pipelines in Cloud Data Fusion and are useful for understanding and troubleshooting them. You can find these logs in the Cloud Data Fusion UI as well as in the Stackdriver logs of the Dataproc clusters that execute the pipelines.

Admin activity audit logs

These logs record operations that modify the configuration or metadata of your resources. Admin activity audit logs are enabled by default and cannot be disabled.

Data access audit logs

Data access audit logs contain API calls that read the configuration or metadata of your resources, as well as user-driven API calls that create, modify, or read user-provided resource data.

Admin activity audit logs and data access audit logs are useful for tracking who accessed or made changes to your Cloud Data Fusion resources. If there is any malicious activity, a security admin will be able to find and track down the bad actor in the audit logs.
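For example, here is a minimal sketch of querying recent admin activity audit log entries for Cloud Data Fusion with the google-cloud-logging client library; the project ID is hypothetical, and the filter strings assume the standard Cloud Audit Logs naming.

```python
# Minimal sketch: list recent admin activity audit log entries for
# Cloud Data Fusion, e.g. to see who changed an instance. Assumes the
# google-cloud-logging client library and Application Default
# Credentials; the project ID is hypothetical.
from google.cloud import logging

client = logging.Client(project="my-retail-project")
audit_filter = (
    'logName:"cloudaudit.googleapis.com%2Factivity" '
    'AND protoPayload.serviceName="datafusion.googleapis.com"'
)
for entry in client.list_entries(filter_=audit_filter, page_size=10):
    # Each audit entry's payload carries the method that was invoked.
    print(entry.timestamp, entry.payload.get("methodName"))
```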
These Google Cloud features can give you extra control of and visibility into your Cloud Data Fusion pipelines. Cloud IAM helps you control who can access your Cloud Data Fusion resources; a private IP instance minimizes exposure to the public internet; and Stackdriver Logging and Monitoring provide information about your workloads, changes in permissions, and access to your resources. Together, they create a more secure solution for your data pipelines on Google Cloud.

Learn more about Cloud Data Fusion.
Source: Google Cloud Platform

OpenShift 4.3: New Improved Topology View

The topology view in the Red Hat OpenShift Console’s Developer Perspective provides a visual representation of the application structure. It helps developers clearly distinguish one resource type from another and understand the overall communication dynamics within the application.
Launched with the 4.2 release of OpenShift, the topology view has already earned a spotlight in the cloud-native application development arena. Constant feedback cycles and regular follow-ups on ongoing trends in the developer community have helped shape a great experience in the upcoming release. This blog focuses on a few features added to the topology view for OpenShift 4.3.
1. Toggle between the list view and the graph view
In response to the user community, the topology view now comes with a toggle button to quickly switch between the list view and the graph view for a given project. While the graph view comes in very handy for tasks that require understanding the role played by individual components in the application architecture, the list view can be helpful for more data-focused and investigative tasks. This toggle enables seamless navigation between the two views, whatever the use case.

2. Menu for contextual actions
The topology view has a list of components available as part of the graph. These come in various kinds, such as resource types, connectors, groupings, and individual items like event sources, each of which supports a different set of contextual actions. Users can access this menu for any listed item by right-clicking it, which opens a dropdown list of all the available actions. Clicking anywhere outside the menu dismisses it.

3. Creating a binding between resources
The topology view allows for creating a connection between any given pair of resources by simply dragging a handle from an origin node and dropping it over a target node. It reduces the cognitive load on the developer by smartly assessing whether an Operator-managed backing service is available for creating the intended binding. In the absence of an Operator-managed backing service, an annotation-based connection is created, as sketched below.
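For illustration, here is a minimal sketch, using the Kubernetes Python client, of creating such an annotation-based connection outside the UI. The workload and namespace names are hypothetical; the annotation key shown is the one the Developer Perspective uses to draw visual connectors.

```python
# Minimal sketch: create an annotation-based connection between two
# workloads, mirroring the drag-to-connect gesture. The workload and
# namespace names are hypothetical; assumes a logged-in kubeconfig
# (e.g. via `oc login`).
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Annotate the origin workload so the topology view draws an arrow
# from "frontend" to "backend".
apps.patch_namespaced_deployment(
    name="frontend",
    namespace="my-project",
    body={"metadata": {"annotations": {
        "app.openshift.io/connects-to": "backend"
    }}},
)
```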

4. Real-time visualization of pod transition
The topology view in 4.3 provides convenient, upfront access to increasing or decreasing your pod count via the side panel. Similarly, users can start a rollout or recreate the pods for a given node from the contextual menu (accessed through a right-click or from the actions button on the side panel). Upon performing any of these actions, users see a real-time visualization of the transitions that the pods go through. A sketch of the underlying scaling operation follows.
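Under the hood, scaling from the side panel amounts to updating the workload’s replica count. Here is a minimal sketch with the Kubernetes Python client; the workload and namespace names are hypothetical.

```python
# Minimal sketch: the equivalent of the side panel's pod-count arrows,
# i.e. patching the deployment's replica count. Names are hypothetical;
# assumes a logged-in kubeconfig.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Scale the "frontend" deployment to 3 replicas; the topology view
# then animates the pods as they start up.
apps.patch_namespaced_deployment_scale(
    name="frontend",
    namespace="my-project",
    body={"spec": {"replicas": 3}},
)
```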

5. Deleting an application
The topology view now supports deleting an application from the graph view. By invoking the contextual menu on a given application grouping, either with a right-click or through the side panel, users can access the delete action. Upon confirming the action, the application group, comprised of the components carrying the associated label (as defined by the Kubernetes-recommended labels), is deleted; a sketch of the equivalent label-based cleanup follows.
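Here is a minimal sketch (Kubernetes Python client, hypothetical names) of deleting the deployments that carry the recommended part-of label for an application group:

```python
# Minimal sketch: delete every deployment labeled as part of an
# application group, mirroring the console's label-based delete.
# Names are hypothetical; assumes a logged-in kubeconfig. A full
# cleanup would also cover services, routes, and other resources.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

selector = "app.kubernetes.io/part-of=my-app"  # hypothetical app group
deployments = apps.list_namespaced_deployment(
    "my-project", label_selector=selector
)
for d in deployments.items:
    apps.delete_namespaced_deployment(d.metadata.name, "my-project")
```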

6. Visualization of event source sinks
The topology view shows elements from Knative Eventing, namely event sources, giving developers quick visual insight into which event sources will trigger their application.

7. Viewing Knative Services and associated revisions
Users can now view Knative Services and their associated revisions and deployments in the topology view. The revisions of a service that are in the active traffic block are displayed as a group, along with information about how traffic is split among them.

With the continuous evolution of Kubernetes-related technology and the introduction of new practices and integrations, OpenShift is constantly updated to reflect this progression.
Learn More
Interested in learning more about application development with OpenShift? Here are some resources which may be helpful:

Red Hat resources on application development on OpenShift: developers.redhat.com/openshift

Provide Feedback

Join our OpenShift Developer Experience Google Group, participate in discussions, or attend our Office Hours feedback session
Drop us an email with your comments about the OpenShift Console user experience.

Source: OpenShift

Our Favourite Picks from the KubeCon Europe 2020 Schedule

Last Wednesday, the CNCF released the KubeCon Europe 2020 schedule. There are so many talks at KubeCon that even deciding what to see can be daunting! Here are some talks by the team at Docker, along with some others we think will be particularly interesting. Looking forward to seeing you in Amsterdam!

Simplify Your Cloud-Native Application Packaging and Deployments – Chris Crone

Chris is an engineer in our Paris office and is also co-executive director of the CNAB project. CNAB (Cloud Native Application Bundle) is a specification for bundling cloud-native applications, which can consist of multiple containers, into a single object that can be pushed to a registry. Open source projects using CNAB, like Docker App or Porter, allow you to package apps that would normally require multiple tools like Terraform, Helm, and shell scripts to deploy into a single, tooling-agnostic packaging format. These packages can then be shared using existing container registries and used with other CNAB-compliant tools. This can really simplify cloud-native development.

Sharing is Caring! Push your Cloud Application to an OCI Registry – Silvin Lubecki & Djordje Lukic

Did you know that you can store anything in a container registry? Have you ever wondered what black magic is behind multi-architecture images? The OCI Image specification is a standard that is purposely generic enough to enable use cases other than “just” container images.

This talk will give an overview of how images in registries work, and how you can push CNAB applications and other custom resources into a registry. It will also cover our battle scars with the different interpretations of the OCI spec by the mainstream registries. 

How to Work in Cloud Native Security: Demystifying the Security Role – Justin Cormack, Docker

Working in security can be intimidating and the shortage of people in the space makes hiring difficult. But especially in cloud-native environments, security is something everyone must own. If you’ve ever asked yourself, “what does it take to work in security in a cloud-native environment? How can you move into security from a dev or an ops position? Where should you start and what should you learn about?” then this talk is for you. I decided to submit this talk as my journey into working in security was fairly accidental, and I realised that this is true for many people. I meet a lot of people interested in getting into security, through the CNCF SIG Security and elsewhere, and hope I can give help and encouragement.

More interesting talks

I wrote about the work the community is doing in the CNCF on Notary v2 last week. If you found this interesting and want to learn more, we have an introductory session with me and Omar Paul from Amazon, which will give a beginner’s view, and a working session for more in-depth work with Steve Lasker from Microsoft and me.

If you want even more on container signing, Justin Cappos and Lukas Puehringer from New York University have a session on securing container delivery with TUF and another on supply chain security with in-toto.

The containerd community continues to grow and innovate. Phil Estes from IBM and Derek McGowan from Docker are covering the Introduction to containerd, while Akihiro Suda and Wei Fu are doing the containerd deep dive. Also on the containerd theme, the great teachers Bret Fisher and Jerome Petazzoni are giving a tutorial: Kubernetes Runtimes: Translating your Docker skills to containerd.

Dominique Top and Ivan Pedrazas run the London Docker meetup and are both lovely people who have built up a great community. Learn from them with 5 Things you Could do to Improve your Local Community.

Lastly, my friend Lee Calcotte always gives great talks, and this one about how to understand the details of traffic control appeals to my geek side: Discreetly Studying the Effects of Individual Traffic Control Functions.
Source: https://blog.docker.com/feed/