Protect your organization from account takeovers with reCAPTCHA Enterprise

As more enterprises require customers to create accounts to access services or make purchases, attackers have increased their focus on account takeovers. These attackers are highly motivated and can be extremely evasive; for example, bad actors often act during normal traffic times to blend in with genuine customer activity. reCAPTCHA Enterprise can help protect your websites from fraudulent activity like this. Last week, we talked about how reCAPTCHA Enterprise can help keep your end users safe against a variety of attacks, including fraudulent transactions, scraping, synthetic accounts, and account takeovers. Today, we’re going to take a deeper look at how reCAPTCHA Enterprise can help you combat account takeovers and hijacking.

Account takeover and hijacking basics

Account takeover and hijacking occur when a bad actor uses stolen or leaked credentials to log in to a legitimate user’s account and then commit fraud, such as transferring money or carrying out gift card and purchase fraud. How do these bad actors obtain stolen credentials? There are a number of ways, but the easiest is simply to purchase them from the dark web or other sources. This can be done extremely inexpensively, and in the last several years, billions of account records have been leaked in breaches; that number will only continue to grow. When a malicious actor has a large set of these stolen or purchased credentials, it’s not financially feasible for them to manually attempt to log in to each account. So they rely on automated credential stuffing attacks to validate the credentials before they manually perform fraud on the compromised accounts. 
This process of validating stolen credentials typically requires three parts:

- a list of potential credentials and accounts
- a distributed botnet (large swaths of infected “zombie” machines)
- some type of automation software or toolkit to orchestrate the attacking botnet

Because these lists contain huge numbers of potential username and password combinations, attackers usually use a botnet to test which logins are valid. Botnets generally attack through proxy servers or ephemeral addresses that can be hard to blacklist or block, which also allows attackers to quickly change where their attacks originate. Determined attackers will pivot and attempt to evade detection as quickly as possible if they realize they’ve been noticed.

Account takeover and hijacking attacks have been on the rise over the last several years, and they are very costly to the organizations that are targeted. According to a study by Javelin Strategy & Research, billions of dollars are spent each year cleaning up and containing stolen accounts to combat fraudulent activity.

How reCAPTCHA can help

Due to the growing sophistication of attacks, it has become increasingly difficult for security teams to walk the line between letting valid customers in and keeping out fraudulent attackers and bots. reCAPTCHA Enterprise is here to help. reCAPTCHA Enterprise is a frictionless fraud detection service that leverages our experience from more than a decade of defending the internet with reCAPTCHA, along with data from our network of four million sites. A simple JavaScript snippet enables reCAPTCHA Enterprise to verify that requests on your webpages are coming from real humans. This is done through behavioral analysis that uses site-specific training and models. reCAPTCHA Enterprise will detect malicious requests and give you actionable insights to help protect your enterprise. 
reCAPTCHA Enterprise gives you the granularity and flexibility to help protect your webpages in the way that makes the most sense for your business. Our enterprise API provides a risk score for each interaction with your site, with 1.0 being a likely legitimate interaction and 0.0 being a likely abusive one, and you decide which action to take based on that score. There’s no one-size-fits-all approach to managing your risk; you can have different levels of protection for different web pages. For example, a suspected fraudulent request on a login page could trigger a two-factor authentication challenge, while you could simply block the request on a less valuable webpage. Using reCAPTCHA Enterprise, you can also tune your site-specific model by sending reCAPTCHA IDs back to Google labeled as false positives or false negatives. SDKs are available for both iOS and Android to provide the same controls for your mobile applications.

Bot-led account takeover and hijacking attacks are on the rise, costing organizations large amounts of money and consuming the time of valuable internal resources on security, legal, and fraud teams. reCAPTCHA Enterprise can help detect these botnets and give you the insights you need to block malicious requests while allowing real users into your website and their accounts. To learn more about how you can help protect your enterprise from account takeovers and hijacking, visit our documentation. To get started with reCAPTCHA today, contact sales.
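To make the score-based policy concrete, here is a minimal Python sketch of how a backend might map risk scores to per-page actions after retrieving them from a reCAPTCHA Enterprise assessment. The thresholds and action names below are illustrative assumptions, not values defined by the reCAPTCHA API:

```python
def action_for_score(score: float, page: str) -> str:
    """Map a reCAPTCHA Enterprise risk score (0.0 = likely abusive,
    1.0 = likely legitimate) to a hypothetical per-page policy."""
    # Illustrative thresholds -- tune per page based on your own risk tolerance.
    if page == "login":
        if score >= 0.7:
            return "allow"
        if score >= 0.3:
            return "require_2fa"  # step-up challenge on a high-value page
        return "block"
    # Lower-value pages: just allow or block.
    return "allow" if score >= 0.5 else "block"
```

On a real site you would create an assessment server-side for each protected action and feed its score into a policy like this, tuning the thresholds per page as you observe traffic.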
Source: Google Cloud Platform

How to run SAP on Google Cloud if high availability is high priority

Over the past few months, businesses across every industry have faced unexpected challenges in keeping their enterprise IT systems safe, secure, and available to users. Many have experienced sudden spikes or drops in demand for their products and services, and even more have shifted almost overnight to a home-based workforce. Even enterprises that experienced the stress of these changes and came through with flying colors may be wondering whether their current approach to protecting the availability of these applications is as robust as it needs to be.

This question can be especially urgent for companies that run their SAP enterprise applications in on-premises environments. These organizations are often already struggling with running business-critical SAP instances on-premises because they can be complex and costly to maintain. But they see the on-prem option—backed up with major investments in high-availability (HA) systems and infrastructure—as the best way to ensure the security and availability of these essential applications. They know just how much their users depend on these systems and how disruptive unplanned outages can be. However, IT organizations charged with running on-premises SAP landscapes must, in many cases, also manage a growing number of other business-critical applications—all while under pressure to do more with less.

For many organizations, this is an unsustainable approach. In fact, according to a 2018 survey looking at trends in HA solutions, companies at the time were already struggling to hold the line with on-premises application availability:

- 95% of the companies surveyed reported at least occasional failures in the HA services that support their applications.
- 98% reported regular or occasional application performance issues.
- When HA application issues occurred, companies surveyed spent 3–5 hours, on average, identifying and fixing the problem.

Things aren’t getting easier for these companies. 
Today’s IT landscape is dominated by risk, uncertainty, and the prospect of belt-tightening down the road. At the same time, it’s especially important now to keep your SAP applications—the software at the heart of your business—secure, productive, and available at all times.

At Google Cloud, we’ve put a lot of thought into solving the challenges around high availability for SAP environments. We recognized this as a potential make-or-break issue for customers, and we prioritized giving them a solution: a reliable, scalable, and cost-effective SAP environment, built on a cloud platform designed to deliver high availability and performance.

3 levels that define the SAP availability landscape

Understanding how to give SAP customers the best possible high-availability solution starts with recognizing that “availability” means different things to different customers, depending on their business needs, budgets, SAP application use cases, and other factors. That’s why we look at the SAP high availability (HA) landscape in terms of three levels, each with its own costs, benefits, and trade-offs to consider within an overall availability strategy.

Level 1: Infrastructure

For some customers, simply moving an SAP system from on-premises hardware to Google Cloud infrastructure can deliver big improvements in uptime. Google Cloud has two built-in capabilities that are especially important to achieving this goal and that together can reduce or even eliminate downtime due to hardware failures:

Live Migration. When a customer’s VM instances are running on a host system that needs scheduled maintenance, Live Migration moves the VM instance from one host to another without triggering a restart or disrupting the application. This is a built-in feature that every Google Cloud user gets at no additional cost. It works seamlessly and automatically, no matter how large or complex a user’s workloads happen to be. 
Google Cloud conducts hardware maintenance and applies security patches and updates globally, without asking a single customer to restart their VMs, all thanks to Live Migration.

Host auto restart. When an unplanned shutdown affects a user’s VM instances, this feature swings into action, automatically restarting the VM instance on a different host. When necessary, it calls a user-defined startup script to ensure that the application running on top of the VM restarts at the same time. The goal is to ensure the fastest possible recovery from an unplanned shutdown, while keeping the process as simple and reliable as possible for users.

Level 2: Database

Every SAP environment depends on a central database system to store and manage business-critical data. Any SAP high-availability solution must consider how to maintain the availability and integrity of this database layer. In addition, SAP systems support a variety of database systems—many of which employ different mechanisms to achieve high availability. By supporting and documenting the use of HA architectures for SAP HANA, IBM Db2, MaxDB, SAP ASE, and Microsoft SQL Server, Google Cloud gives customers the freedom to decide how to balance the costs and benefits of HA database systems for their SAP environments.

Level 3: Application server

SAP’s NetWeaver architecture helps users avoid app-server bottlenecks that can threaten HA uptime requirements. Google Cloud takes that advantage and runs with it by giving customers the high-availability compute and networking capabilities they need to protect against the loss of data through synchronization, and to get the most reliability and performance from NetWeaver.

5 ways Google Cloud supports high-availability SAP systems

There are many other ways Google Cloud can help maximize SAP application uptime, even in the most challenging circumstances. 
Consider a few examples, and keep in mind how tough it can be for enterprises, even large ones, to implement similar capabilities at an affordable cost:

1. Geographic distribution and redundancy. Google Cloud’s global footprint currently includes 22 regions, divided into 67 zones and over 130 points of presence. By distributing key Google Cloud services across multiple zones in a region, most SAP users can achieve their availability goals without sacrificing performance or affordability. For example:

- Compute Engine instance groups can be distributed and managed across the available zones in a region.
- Compute Engine regional persistent disks are synchronously replicated across zones in a region.

2. Powerful and versatile load-balancing capabilities. For many enterprises, load balancing and distribution is another key to maintaining the availability of their SAP applications. Google Cloud meets this need with a range of load-balancing options, including global load balancing that can direct traffic to a healthy region closest to users. Google Cloud Load Balancing reacts instantaneously to changes in users, traffic, network, backend health, and other related conditions. And, as a software-defined service, it avoids the scalability and management issues many enterprises encounter with physical load-balancing infrastructure.

3. Tools that keep developers focused and productive. Google Cloud’s serverless platform includes managed compute and database products that offer built-in redundancy and load balancing. It allows a company’s SAP development teams to deploy code without worrying about the underlying infrastructure. Google Cloud also supports CI/CD through native tools and integrations with popular open source technologies, giving modern DevOps organizations the tools they need to deliver software faster and more securely.

4. Flexible, full-stack monitoring. 
Google Cloud Monitoring gives enterprises deep visibility into the performance, uptime, and overall health of their SAP environments. It collects metrics, events, and metadata from Google Cloud, Amazon Web Services, hosted uptime probes, application instrumentation, and even application components such as Cassandra, Nginx, Apache Web Server, Elasticsearch, and many others. Cloud Monitoring uses this data to power flexible dashboards and rich visualization tools, which help SAP teams identify and fix emerging issues before they affect the business.

5. Making the most of an SAP system’s inherent HA capabilities. Every SAP instance already includes some very powerful HA technologies, and one of our most important jobs is to ensure that Google Cloud fully supports these built-in capabilities. Let’s look at two examples of how we do this:

At the database level, synchronous SAP HANA System Replication (HSR) is one of the most important application-native technologies for ensuring HA for any SAP HANA system. It works by replicating data continuously from a primary system to a secondary system, where it can be preloaded into memory to allow for a rapid failover if there’s a disaster. Google Cloud supports and complements HSR by allowing the use of synchronous replication for SAP instances that reside in any zone within the same region. That means users can place their primary and secondary instances in different zones, keeping them geographically separated and protected against the failure of an entire zone.

At the application level, the SAP architecture allows the use of multiple NetWeaver app server instances to maintain high-availability performance. Yet there’s still a single point of failure to contend with: the SAP NetWeaver global file system, which must be available to all SAP NetWeaver instances in an HA system. Google Cloud offers two ways to address this issue. The first uses a high-availability shared storage solution, such as NetApp Cloud Volumes. 
The second uses Google Cloud’s support for regional persistent disks, replicated across zones, to replicate the SAP global file system between the nodes in an HA cluster. Both of these approaches ensure that a file system failure won’t put a business’s high-availability SAP environments at risk.

Explore your HA options

We’ve only scratched the surface when it comes to understanding the many ways Google Cloud supports and extends HA for SAP instances. For an even deeper dive, our white paper, “SAP on Google Cloud: High Availability,” goes into more technical detail on how you can set up a high-availability architecture for SAP landscapes using Google Cloud services.
Source: Google Cloud Platform

How SAP on Google Cloud is helping Multipharma manage change

In this period of uncertainty, businesses everywhere are facing increased pressures and rapid change. Multipharma is no exception. As Belgium’s largest pharmaceutical retailer, Multipharma is feeling the impact of change as much as anyone, now operating at the core of a global pandemic. While deeply focused on its core pharmacy business, with warehouse and production centers for medicine preparation and repackaging, Multipharma is also a retailer. To thrive as a retailer while managing changing customer expectations and intense competition, Multipharma determined they needed to replace their legacy merchandising system with a modern, industry-specific solution for their retail landscape. Multipharma chose SAP S/4HANA Retail, electing to deploy it in the cloud for the elasticity and scalability it offers. In this blog post, we’ll explore how Multipharma have positioned themselves for change in the complex retail pharmaceutical market and why they chose Google Cloud as their hyperscaler partner for this critical transformation.

The Need for Change

With over 270 retail outlets, 1,700 employees, a state-of-the-art warehouse, and three product packaging and preparation centers, Multipharma is the largest pharmaceutical retailer in Belgium. For Multipharma, providing excellent service to their diverse customer base is a top priority, and over their 60-year history, they’ve expanded their pharmaceutical offering to include a full set of services such as nicotine management, weight loss coaching, and other wellbeing, health, and beauty services. To compete against upstarts and industry disruptors with mail-order and online capabilities, Multipharma determined they needed to rethink how they deliver complete care for their customers. They needed to be prepared with a comprehensive retail platform that could deliver on core retail processes end to end—from master data entry and product listing, through promotion execution and price maintenance, to point-of-sale accounting. 
For this task, Multipharma chose SAP S/4HANA for Retail. Where to run S/4HANA was the next question. Multipharma knew that managing their own SAP hardware was neither cost-effective nor a good choice for the agility, security, and availability they needed. When it became clear that their current private cloud provider was not up to the task, Multipharma kicked off an effort to find a better alternative.

Why Google Cloud for SAP

Multipharma’s executive and IT teams performed an extensive evaluation of the top three public cloud providers for their comprehensive SAP deployment. After careful review, Multipharma selected Google Cloud because it could deliver on all key criteria (see figure below).

“Setting up SAP Test and Deployment environments on Google Cloud is really easy,” says Kevin Moens, Lead Enterprise Architect at Multipharma. “In terms of business impact, that means we can be more flexible and move fast when we need to, delivering new initiatives efficiently.” Moens also calls out Google Cloud’s security features: “Using Google Cloud supports us in meeting our European data storage regulatory requirements. Its encryption of data in transit and at rest was a decisive factor in our choice of cloud provider.”

Ready for business challenges of the future

The first step in Multipharma’s journey is to migrate its legacy retail system to SAP S/4HANA. Together, Google Cloud, SAP, and Multipharma’s IT teams architected a three-layered solution. The business layer will house S/4HANA Retail, SAP CAR (Customer Activity Repository), and BW on HANA for BI. The integration layer will house Process Orchestration, Data Services, and SAP Landscape Transformation Replication Server (SLT). For the management layer, SAP Solution Manager and Landscape Management (LaMa) will be deployed. This complete architecture is expected to go live later in 2020. Running these key SAP applications in Google Cloud will afford Multipharma many strategic advantages:

Reduced costs. 
The pay-per-use model offers Multipharma many ways to significantly reduce costs, such as the ability to turn off non-production systems at night and on weekends.

Increased flexibility. Google Cloud’s fully virtualized infrastructure positions Multipharma to quickly adapt to changing business conditions, such as a global health crisis, as well as pivot to services like in-home medication delivery.

Real-time inventory management. For both physical stores and omnichannel, Multipharma will have complete visibility into its stock positions.

Security. Google Cloud encrypts data in transit and at rest by default—a key consideration for Multipharma.

High availability. Cloud Load Balancing lets Multipharma balance its compute resources over one or multiple regions, while Live Migration offers zero-downtime infrastructure maintenance.

Additionally, Multipharma intends to take advantage of Google Cloud Pub/Sub, a fully managed real-time messaging service that allows you to send and receive messages between independent applications. Along with Google App Engine, they will leverage Pub/Sub for message integration between point-of-sale systems and their new SAP landscape, with custom integration monitoring. “We chose Google Cloud not only because it offers per-second billing, but also because of the investment Google is making in innovation,” says Moens. “We want the flexibility to access additional security and infrastructure services in the future.”

By running SAP S/4HANA for Retail on Google Cloud, Multipharma will be well positioned to leverage their extensive retail footprint and customer proximity to explore new services such as at-home delivery. They also plan to deepen their omnichannel experiences (in-store and through digital customized patient coaching), as well as business and marketing activities around medication stock and inventory. 
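To sketch what this kind of Pub/Sub integration between point-of-sale systems and an SAP landscape might look like, here is a minimal Python example using the google-cloud-pubsub client library. The project ID, topic name, and event fields are hypothetical illustrations, not Multipharma’s actual design:

```python
import json

def build_pos_event(store_id: str, sku: str, qty: int) -> bytes:
    """Serialize a hypothetical point-of-sale event as UTF-8 JSON bytes,
    the payload format Pub/Sub messages carry."""
    return json.dumps(
        {"store_id": store_id, "sku": sku, "quantity": qty}
    ).encode("utf-8")

def publish_pos_event(project_id: str, topic_id: str, event: bytes) -> str:
    """Publish an event to a Pub/Sub topic and return the message ID.
    Requires the google-cloud-pubsub package and application credentials."""
    from google.cloud import pubsub_v1  # third-party client, imported lazily

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    future = publisher.publish(topic_path, event)
    return future.result()  # blocks until the server acknowledges the message
```

A subscriber, for example an App Engine service, would then pull these messages, forward them to the SAP landscape, and log failures for the custom integration monitoring the post mentions.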
Watch and learn more about how Multipharma transformed its retail processes with Google Cloud. Or hear from more customers innovating with SAP on Google Cloud.
Source: Google Cloud Platform

Key considerations for building a migration factory to Google Cloud

Cloud migration is one of the biggest organizational changes that technologists will go through this decade, with a profound impact on a business’s ability to innovate and its overall economics. Migrating to the cloud gives organizations a unique opportunity to not only improve their flexibility, reduce costs, and focus on their core competencies, but ultimately to fully transform how they operate. At a minimum, the business advantages of migrating to the cloud include:

- Purchasing and consuming resources on a pay-as-you-go basis, and increasing or decreasing them as needed for optimal utilization
- Converting capital expenses into operating expenses
- Enabling rapid innovation without the expense and complexities of hardware procurement and infrastructure management
- Enjoying faster time to market

When done right, cloud migration is a gradual migration of data, applications, infrastructure, and other business elements, resulting in business transformation. But cloud migration isn’t a one-and-done proposition. To succeed, it requires careful analysis, planning, and execution of a comprehensive organizational and technical strategy that meets your overall business goals. Working with customers, we on the Google Cloud Professional Services team have developed a migration factory, a methodology for performing large-scale migrations to the cloud. Designed for migrating enterprise applications, the migration factory offers an organizational structure and set of processes that helps you create a scaled team with the right skills and understanding of your organization, and set clearly defined goals that are closely measured through the life of the program. 
For each application that you want to move, the migration factory approach takes an end-to-end view of the project, including:

- Building the business case for migrating the specific application
- Assessing the current application and infrastructure estate
- Developing a migration plan

This set of strategic activities helps inform the migration path for your applications, describes the migration execution, and helps establish post-migration operations principles. All of this makes your migration simpler and more effective.

In our experience, a thoughtful and well-designed migration factory can help with several common challenges:

- Unclear goals: Lack of a cohesive vision and cloud adoption strategy across the organization; these can be short-term, long-term, or both.
- Lack of sponsorship: Not having the right level of investment in skills, talent, time, and effort.
- Poor migration planning: Embarking on a cloud journey without understanding the complexity of your existing application estate.
- Wrong technology choice: A failure to properly review the workloads to be migrated and choose the right cloud product and service model. With the right technology choice, businesses can successfully migrate to the cloud to derive cost savings and competitive advantage along with rapid innovation.
- Unclear delivery and operational model: To be successful, you need the right mix of people, process, and technology across the organization—before, during, and after the migration.

In addition, a migration factory model can help organizations derive a number of benefits:

- Velocity: With a mature operational model and automation, migration factory projects can proceed very quickly.
- Reduced costs: Having mature and efficient processes streamlines the actual migration. Google Cloud customers that follow a migration factory approach can see significantly lower costs. 
- Reduced risk: Identifying the risk tolerance of the workloads being migrated up front helps shorten overall downtime and/or unplanned downtime during the migration—so you can get back to software development sooner.
- Quality: Well-defined and automated processes result in more consistent and error-free migrations.
- A foundation for larger cloud-native initiatives: The learnings, best practices, and standards you gain as part of the migration process position your team to take on more complex and challenging cloud initiatives down the road.

Migrating to the cloud can feel like a big undertaking, but with the proper preparation, it can be immensely rewarding. To learn more about how to design and build a cloud migration factory, we’ve written a whitepaper, “Building a Large Scale Migration Program with Google Cloud,” filled with practical guidance, strategies, and implementation details to help you get started. The Google Cloud Professional Services team or one of our Google Certified partners are also ready to help. Contact your Google Cloud account manager for more information.
Source: Google Cloud Platform

Google Cloud and NVIDIA’s enhanced partnership accelerates computing workloads

Companies from startups to multinationals are striving to radically transform the way they solve their data challenges. As they continue to manage increasing volumes of data, these companies are searching for the best tools to help them achieve their goals—without heavy capital expenditures or complex infrastructure management.

Google Cloud and NVIDIA have been collaborating for years to deliver a powerful platform for machine learning (ML), artificial intelligence (AI), and data analytics to help you solve your complex data challenges. Organizations use NVIDIA GPUs on Google Cloud to accelerate machine learning training and inference, analytics, and other high performance computing (HPC) workloads. From virtual machines to open-source frameworks like TensorFlow, we have the tools to help you tackle your most ambitious projects. For instance, Google Cloud’s Dataproc now lets you use NVIDIA GPUs to speed up ML training and development by up to 44 times while reducing costs by up to 14 times.

To continue to help you meet your goals, we’re excited to announce forthcoming support for the new NVIDIA Ampere architecture and the NVIDIA A100 Tensor Core GPU. The new A100 GPUs on Google Cloud will come with enhanced hardware and software capabilities to enable researchers and innovators to further advance today’s most important AI and HPC applications, from conversational AI and recommender systems to weather simulation research on climate change. We’ll be making the A100 GPUs available via Google Compute Engine, Google Kubernetes Engine, and Cloud AI Platform, allowing customers to scale up and out with control, portability, and ease of use. In addition, Google Cloud’s Deep Learning VM images and Deep Learning Containers will bring pre-built support for NVIDIA’s new generation of libraries to take advantage of A100 GPUs. 
The Google Cloud, NVIDIA, and TensorFlow teams are partnering to provide built-in support for this new software in all TensorFlow Enterprise versions, so TensorFlow users on Google Cloud can use the new hardware without changing any code or upgrading their TensorFlow versions.

Avaya makes customer connections with Google Cloud and NVIDIA

Avaya, a leading global provider of unified communications and collaboration, uses Google Cloud and NVIDIA technology to address customers’ critical business challenges. Avaya Spaces, a born-in-the-cloud video collaboration solution, runs on Google Cloud and is deployed in multiple data centers globally. With COVID-19 changing the way we work, this solution has been especially helpful to organizations as they shift to social distancing and working from home.

“Moving our video processing over to NVIDIA T4s on Google Cloud opens up new innovation opportunities for our platform. Our direction is to infuse real-time AI capabilities in our user experience to create unique value for our end users,” says Paul Relf, Senior Director of Product Management, Cloud Collaboration at Avaya. “We are heavy users of Google Cloud and the value-added capabilities that are available to us. We are also keenly interested in the new AI capabilities coming from NVIDIA and how we can leverage the combined ecosystem to create better outcomes for our Avaya Spaces users.”

There is a wide range of use cases for NVIDIA on Google Cloud solutions, across industries and company sizes. We spoke about some of the AI platform uses—from edge computing to graphics visualization—at NVIDIA’s GTC Digital event. 
You can check out some of the on-demand sessions we think are particularly interesting below:

- Building a Scalable Inferencing Platform in GCP
- Google Cloud AutoML Video and Edge Deployment
- GPipe: Efficient Training of Giant Neural Networks Using Pipeline Parallelism
- JAX: Accelerating Machine-Learning Research with Composable Function Transformations in Python
- Artificial and Human Intelligence in HealthCare

If you’re interested in learning more about the new A100 GPUs on Google Cloud, fill out this form and we’ll be in touch.
Source: Google Cloud Platform

Helping manufacturers during and after COVID-19

The impact of COVID-19 has touched every industry—including manufacturing, which relies heavily on skilled, hands-on workers and complex supply chains. According to a March 2020 survey by the U.S. National Association of Manufacturers, four out of five U.S. manufacturing companies expect to be financially impacted by COVID-19, more than half think they’ll need to change how they operate, and over a third anticipate supply chain disruptions. COVID-19 has also compelled manufacturers to address important issues facing their employees, including remote work and social distancing. It’s a critical time for manufacturers to increase the agility and digitization of their supply chains and operations; here’s how the cloud can help businesses resume under new norms.

Enabling automation

While the full impact of COVID-19 is still unknown, many factories are already experiencing decreases in workforce capacity and resources. Manufacturers face tough questions about how to quickly understand this new landscape and develop new operating procedures and automation initiatives that enable them to adapt quickly and let their employees safely work on site.

We want to help manufacturers address these challenges by offering tools that automate processes, remotely monitor systems, and extend their capabilities beyond the factory floor. Using Vision AI, for example, manufacturers can train machine-learning models to visually inspect goods and processes for quality and compliance, without putting human inspectors at risk. By connecting operational technology (OT) and information technology (IT) via the cloud, operators can monitor and control specific machines or plants remotely, using dashboards and performance views. 
GlobalFoundries, a leader in the semiconductor manufacturing industry, is already using AutoML Vision to build a visual inspection solution that detects random defects in wafer map and scanning electron microscope (SEM) images, which are essential elements of the semiconductor manufacturing process. AutoML Vision reads in the images of wafers and sample defects, and trains customized models to detect those defects. On the initial pass, AutoML Vision successfully classified 80% of the images from a limited amount of training data. This fast path to high accuracy let GlobalFoundries quickly move to production, start realizing benefits, and scale up. The foundry subsequently improved quality and customer satisfaction, and 40% of the manual inspection workload has already been shifted to the visual inspection solution built on AutoML Vision.

Supporting remote work

Google Meet, our premium, secure video meetings solution, can help manufacturers hold daily meetings, virtual training, and the safe, secure onboarding of new hires without the need to be on site, while G Suite tools like Google Docs, Sheets, and Slides let teams collaborate on documents remotely.

KAESER Compressors, one of the world leaders in compressed air products, accelerated its deployment of G Suite when it needed to rapidly convert its teams to remote work as a result of COVID-19. “We were deeply impressed by how quickly our employees adopted G Suite and we really believe in cloud’s benefits,” says Falko Lameter, CIO at KAESER Compressors. “We have access to more memory, better machines, and more advanced technology, such as machine learning.
Google Cloud has shown its commitment to innovation in these areas, and we are looking forward to scaling our collaboration in the future.”

Energy solutions provider Viessmann was able to keep up its production, and G Suite helped it convert its employees to remote work within a 48-hour span. Since then, the company has conducted roughly 60,000 Google Meet conferences per month. Additionally, Google Sheets was established as the primary IT dashboard for monitoring KPIs for Viessmann’s IT infrastructure. As one of the first manufacturing companies to choose Chromebooks, Viessmann gave employees who work from home peace of mind, as their accounts are secured and protected by Google’s modern authentication technologies. Viessmann even developed a ventilator in significantly less time than usual; G Suite helped the engineers involved collaborate and exchange ideas on the project with ease, and was thus essential to speeding up the launch process. “The G Suite communications and collaboration infrastructure is self-maintaining, and it has made each of us more independent,” says Alexander Pöllmann, Head of Intranet and Collaboration Services at Viessmann. “We can work together from everywhere across borders and time zones, on any device. This kind of flexibility makes our workforce more agile, and ultimately, happier and more productive.”

Koenig & Bauer, the world’s oldest printing press manufacturer, migrated to G Suite in early 2020 to increase productivity and collaboration amongst teams. In light of COVID-19, the timely switch to G Suite helped Koenig & Bauer significantly in keeping workplaces connected.
Teams in different locations—even at home—have access to tools like video conferencing, calendar functions, word processing, and spreadsheet calculations, and can now share files, all with a single click and on one single platform.1 “Before we moved to G Suite, we had a very heterogeneous, non-collaborative IT environment and the resource consumption was pretty heavy,” says Jürgen Tuffentsammer, CIO of Koenig & Bauer. “Since the migration, our team collaboration has improved and we can share and execute against innovative ideas so much faster than before.”

Managing volatility in supply and demand

The COVID-19 pandemic has disrupted global supply chains and distribution channels. Now more than ever, accurately forecasting demand and optimizing supply in this continuously evolving environment requires integrating data from multiple sources and analyzing it in real time. It’s a job analytical tools like BigQuery were designed for. By applying smart analytics and AI, manufacturers can better predict demand and adapt their operations to meet it.

AI can also help mitigate problems with last-mile delivery, which accounts for more than half of all shipping costs. The ability to optimize routes using real-time weather and traffic data, as well as to deploy ML models to predict where new pickups are likely to come from, will enable companies to minimize operating expenses while maximizing service.

Last month, Missouri Governor Mike Parson announced the launch of a new tool developed by Google Cloud to help health care providers connect with Missouri manufacturers and suppliers of personal protective equipment (PPE).
The Missouri PPE Marketplace tool is a joint effort between the state and the Missouri Hospital Association, built to help manufacturers that have shifted production to PPE enter the healthcare market and connect with buyers. Additionally, over the past month, the state’s Department of Economic Development (DED) has gathered interest from more than 200 PPE manufacturers and suppliers, inviting them to register in the system. State healthcare agencies and the Missouri Hospital Association are reaching out to healthcare providers across the state to ensure they have access and can connect directly with suppliers through the new tool.

Optimizing IT spend

Moving to the cloud is an important way businesses can optimize their technology spend and find new efficiencies. For manufacturers, this can translate into more customers served, more issues resolved, and more adaptability for the overall business. The cloud offers the potential for drastically reduced data costs and infrastructure savings, plus increased performance, simplicity, and scalability across IT environments.

The cloud can support manufacturers in many ways, from improving safety, to weathering market uncertainties, to preparing for the future. For example, we recently joined the Rolls-Royce EMER²GENT alliance, which aims to help foster global economic recovery by identifying early signs of rebounding economies (we’re providing access to our public data sets and BigQuery as part of the effort). Whether it’s modernizing infrastructure, increasing agility, or digitizing supply chains and operations, we’re focused on providing the solutions that will help manufacturers operate during the pandemic and beyond.

Visit our website to learn more about manufacturing on Google Cloud.

1. Source: Koenig & Bauer Annual Report 2019
Source: Google Cloud Platform

Announcing Google Cloud VMware Engine: Accelerating your cloud journey

VMware technologies form the cornerstone of many customers’ enterprise IT environments, and those same enterprises are eager to run their VMware environments in the cloud to scale quickly and benefit from cloud services. Last summer, we announced support for customers to run VMware workloads on Google Cloud, and we have made significant progress since then. In the fall we acquired CloudSimple to provide customers a fully integrated VMware-based solution, and today we’re proud to announce another significant milestone—Google Cloud VMware Engine, an integrated first-party offering with end-to-end support to migrate and run your VMware environment in Google Cloud. This fully managed service is expected to be generally available this quarter in two US regions, expanding into additional Google Cloud regions globally in the second half of the year.

Introducing Google Cloud VMware Engine

The service delivers a fully managed VMware Cloud Foundation stack—VMware vSphere, vCenter, vSAN, NSX-T, and HCX for cloud migration—in a dedicated environment on Google Cloud’s highly performant and reliable infrastructure to support enterprise production workloads. With this service, you can migrate or extend your on-premises workloads to Google Cloud in minutes by connecting to a dedicated VMware environment directly through the Google Cloud Console. This allows you to seamlessly migrate to the cloud without the cost or complexity of refactoring applications, and to run and manage workloads consistently with your on-premises environment. By running your VMware workloads on Google Cloud, you reduce your operational burden while benefiting from scale and agility, and you maintain continuity with your existing tools, policies, and processes. Importantly, you can quickly meet your business needs by creating a VMware SDDC environment on Google Cloud in a matter of minutes, enabling you to scale business-critical applications on demand.
The service is VMware Cloud Verified, the highest level of validation for VMware-based cloud services, helping enable compatibility and operational continuity across on-premises and cloud environments. “VMware and Google Cloud are working together to help power customers’ multi-cloud strategies, and the new Google Cloud VMware Engine will enable our mutual customers to drive digital transformation and business resiliency using the same VMware Cloud Foundation running in their data centers today,” said Ajay Patel, senior vice president and general manager, cloud provider software business unit at VMware. “Google Cloud VMware Engine enables organizations to quickly deploy their VMware environment in Google Cloud, delivering scale, agility and access to cloud-native services while leveraging the familiarity and investment in VMware tools and training.”

A differentiated VMware experience

Google Cloud VMware Engine is built on Google Cloud’s highly performant, scalable infrastructure with fully redundant and dedicated 100 Gbps networking, providing 99.99% availability to meet the needs of your most demanding enterprise workloads. Cloud networking services such as Interconnect and VPN ease access from your on-premises environments to the cloud, while high-bandwidth connectivity to cloud services optimizes for performance and flexibility and minimizes costs and operational overhead. End-to-end, one-stop support is integrated to provide a seamless experience across this service and the rest of Google Cloud. Google Cloud VMware Engine is designed to minimize your operational burden, so you can focus on your business. We take care of the lifecycle of the VMware software stack and manage all related infrastructure and upgrades. Customers can continue to leverage IT management tools and third-party services consistent with their on-premises environment.
We’re partnering closely with leading storage, backup, and disaster recovery providers such as NetApp, Actifio, Veeam, Zerto, Cohesity, and Dell Technologies to ensure support for third-party solutions, ease the migration journey, and enable business continuity.

An integrated Google Cloud experience

In addition to the ease of migration, you can benefit from full access to innovative Google Cloud services such as BigQuery, Cloud Operations, Cloud Storage, Anthos, and Cloud AI. Billing, identity management, and access control are also fully integrated into Google Cloud to unify the experience with other Google Cloud products and services. As you look to migrate and modernize workloads over time, these cloud-native services allow you to streamline management, surface new data insights, and deliver new and innovative services to your customers.

Unlocking business value

Over the past few months, we’ve engaged with numerous customers through our early access program. Customers have experienced first-hand the rapid and simple migration that Google Cloud VMware Engine enables as they look to extend or migrate workloads into the cloud. Capital markets infrastructure provider Deutsche Börse Group was impressed by the ease and simplicity of migrating VMware workloads to Google Cloud. “As one of the world’s largest market infrastructure providers, implementing innovative and resilient solutions for financial markets is key when it comes to maintaining efficient, stable and most important secure operations,” says Dr. Christoph Böhm, Member of the Executive Board and Chief Information Officer, Deutsche Börse Group. “As a long-term VMware customer we are keen to extend our large landscape towards hyperscaling options, keeping existing control planes and lifecycle management stable.
Google Cloud VMware Engine allows us now to quickly extend our VMware environment to Google Cloud, one of Deutsche Börse’s public cloud partners, increasing our business agility and building even higher levels of resiliency. The steps we have gone through so far together are hugely encouraging, giving us innovative and flexible ways in running hybrid cloud scenarios.”

QAD, a leading ERP software provider, is also excited about the benefits of running VMware on Google Cloud. “With Google Cloud VMware Engine, we are able to quickly extend our VMware-based platform to Google Cloud to meet our goal of being rapid, agile and effective,” says Scott Lawson, Director, IT Architecture at QAD. “As a leading ERP software provider, partnering with Google Cloud and VMware allows us to reduce our operational burden, improve our disaster recovery capabilities to ensure consistent availability for our customers, and benefit from native Google Cloud services to continuously innovate.”

Enabling customer success through our partner ecosystem

We’re proud to partner closely with regional and global system integrators to simplify our mutual customers’ cloud migration journeys and enable their success. Partners such as Deloitte, Atos, and WWT are committed to building cloud services that help customers adopt Google Cloud VMware Engine and accelerate their digital transformation through native Google Cloud services. Partners can play an essential role in accelerating migration and helping you achieve faster time-to-value. “As customers look to simplify their cloud migration journey, we’re committed to build cloud services to help customers benefit from the increased agility and efficiency of running VMware workloads on Google Cloud,” said Bob Black, Dell Technologies Global Lead Alliance Principal, Deloitte Consulting LLP.
“By combining Google Cloud’s technology and Deloitte’s business transformation experience, we can enable our joint customers to accelerate their cloud migration, unify operations, and benefit from innovative Google Cloud services as they look to modernize applications.” Partners also see Google Cloud VMware Engine as a key offering to help customers accelerate their cloud journey. “Running VMware workloads on Google Cloud is a priority for many enterprise customers as they look to benefit from the scale and agility of the cloud while maintaining consistency across hybrid and multi-cloud environments,” said Peter Cutts, SVP, Digital Transformation Officer, Atos Cloud Enterprise Solutions. “We are excited for the opportunity to reinforce our partnership with Google Cloud by combining all the value Atos brings to VMware and Google to provide a differentiated experience while enabling customers to benefit from turnkey offerings including cloud native services such as BigQuery, AI & machine learning.”

“As a Google Cloud Premier Partner, we are excited about the addition of Google Cloud VMware Engine to the ever-growing list of services already driving value to our mutual customers,” said Michael Taylor, Chief Technology Officer, World Wide Technology. “Hybrid cloud strategies continue to be a focal point for our customers and this offering substantially accelerates the timeframe for organizations to move their workloads to the cloud and modernize their infrastructure.”

Getting started

Google Cloud VMware Engine is expected to be generally available to customers this quarter in the Northern Virginia (us-east4) and Los Angeles (us-west2) regions.
We plan for the service to be available globally in eight additional regions—London, Frankfurt, Tokyo, Sydney, Montréal, São Paulo, Singapore, and the Netherlands—in the second half of the calendar year. We are excited about this milestone and committed to delivering an optimal platform to run your VMware workloads alongside Google Cloud services to solve business problems and innovate in new areas. You can find more information, including product features and resources, on our website. We also invite you to join us for our upcoming webinar, where we will provide a more detailed overview of the service, dive into key use cases, and discuss how you can accelerate your cloud migration journey. We look forward to connecting with you.
Source: Google Cloud Platform

Cloud cost optimization: principles for lasting success

Cloud is more than just a cost center. Moving to the cloud allows you to enable innovation at a global scale, expedite feature velocity for faster time to market, and drive competitive advantage by quickly responding to customer needs. So it’s no surprise that many businesses are looking to transform their organization’s digital strategy as soon as possible. But while it makes sense to adopt cloud quickly, it’s also important to take time and review key concepts prior to migrating or deploying your applications into the cloud. Likewise, if you already have existing applications in the cloud, you’ll want to audit your environment to make sure you are following best practices. The goal is to maximize business value while optimizing cost, keeping in mind the most effective and efficient use of cloud resources.

We’ve been working side by side with some of our more complex customers as they usher in the next generation of applications and services on Google Cloud. When it comes to optimizing costs, there are lots of tools and techniques that organizations can use. But tools can only take you so far. In our experience, there are several high-level principles that organizations, no matter the size, can follow to make sure they’re getting the most out of the cloud. In this blog post, we’ll take a look at some of these concepts so you can effectively right-size your deployments. Then we’ll consider the three kinds of cloud cost optimization tools, and provide a framework for how to prioritize cost optimization projects. Finally, if you want more, including prescriptive advice about optimizing compute, networking, storage, and data analytics costs on Google Cloud, we’ve regrouped some of our most popular blogs on the topic into an all-in-one downloadable ebook, “Understanding the principles of cost optimization.”

Cost optimization with people and processes

As with most things in technology, the greatest standards are only as good as how well they are followed.
The limiting factor, more often than not, isn’t the capability of the technology, but the people and processes involved. Executive teams, project leads, finance, and site reliability engineers (SREs) all come into play when it comes to cost optimization. As a first step, these key stakeholders should meet to design a set of standards for the company that outlines desired service-level profitability, reliability, and performance. We highly recommend establishing a tiger team to kickstart this initiative.

Using cloud’s enhanced cost visibility

A key benefit of a cloud environment is the enhanced visibility into your utilization data. Each cloud service is tracked and can be measured independently. This can be a double-edged sword: now you have tens of thousands of SKUs, and if you don’t know who is buying what services and why, it becomes difficult to understand the total cost of ownership (TCO) for the applications or services deployed in the cloud. This is a common problem when customers make the initial shift from an on-premises capital expenditures (CapEx) model to cloud-based operational expenditures (OpEx). In the old days, a central finance team set a static budget and then procured the needed resources. Forecasting was based on a metric such as historic growth to determine the needs for the next month, quarter, year, or even multiple years. No purchase was made until everyone across the company had the opportunity to meet and weigh in on whether or not it was needed. Now, in an OpEx environment, an engineering team can spin up resources as desired to optimally run its services. For many cloud customers, it’s often something of a Wild West—where engineering spins up resources without standardized guardrails such as budgets and alerts, appropriate resource labeling, and a frequent cadence for reviewing costs from an engineering and finance perspective.
While that empowers velocity, it’s not a good starting position from which to design a cost-to-value equation for a service—essentially, the value generated by the service—much less optimize spending. We see customers struggling to identify the cost of development vs. production projects in their environments due to a lack of standardized labeling practices. In other cases, we see engineers over-provisioning instances to avoid performance issues, only to see considerable overhead during non-peak times. This leads to wasted resources in the long run. Creating company-wide standards for what types of resources are available and when to deploy them is paramount to optimizing your cloud costs.

We’ve seen this dynamic many times, and it’s unfortunate that one of the most desirable features of the cloud—elasticity—is sometimes perceived as an issue. When there is an unexpected spike in a bill, some customers might see the increase in cost as worrisome. Unless you attribute the cost to business metrics such as transactions processed or number of users served, you are missing the context to interpret your cloud bill. For many customers, it’s easy to see that costs are rising and to attribute that increase to a specific business owner or group, but they don’t have enough context to give a specific recommendation to the project owner. The team could be spending more money because it is serving more customers—a good thing. Conversely, costs may be rising because someone forgot to shut down an unneeded high-CPU VM running over the weekend—and it’s pushing unnecessary traffic to Australia. One way to fix this problem is to organize and structure your costs in relation to your business needs. Then, you can drill down into the services using Cloud Billing reports to get an at-a-glance view of your costs.
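As a rough illustration of the budget-and-alert guardrails mentioned above, the sketch below checks spend to date against the fractional thresholds a budget alert might define. The function name and threshold values are illustrative assumptions, not a Cloud Billing API:

```python
# Hypothetical sketch: which budget alert thresholds has current spend
# crossed? Threshold fractions (50%, 90%, 100%) mirror a common setup,
# but all names and values here are illustrative.

def triggered_alerts(spend: float, budget: float, thresholds=(0.5, 0.9, 1.0)):
    """Return the budget fractions that spend has already crossed."""
    if budget <= 0:
        raise ValueError("budget must be positive")
    return [t for t in thresholds if spend >= t * budget]

# A project with a $10,000 monthly budget that has spent $9,200 has
# crossed the 50% and 90% thresholds, but not yet 100%.
alerts = triggered_alerts(spend=9_200, budget=10_000)
print(alerts)  # [0.5, 0.9]
```

In practice the same thresholds would be configured on a Cloud Billing budget, which then notifies owners automatically rather than requiring a manual check like this.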
You can also get more granular cost views of your environment by attributing costs back to departments or teams using labels, and by building your own custom dashboards. This approach allows you to label a resource based on a predefined business metric, then track its spend over time. Longer term, the goal isn’t to understand that you spent “$X on Compute Engine last month,” but that “it costs $X to serve customers who bring in $Y revenue.” This is the type of analysis you should strive to create.

Billing Reports in the Google Cloud console let you explore granular cost details

One of the main features of the cloud is that it allows you to expedite feature velocity for faster time to market, and this elasticity is what lets you deploy workloads in a matter of minutes as opposed to waiting months in a traditional on-premises environment. You may not know how fast your business will actually grow, so establishing a cost visibility model up front is essential. And once you go beyond simple cost-per-service metrics, you can start to measure new business metrics, like profitability as a performance metric per project.

Understanding value vs. cost

The goal of building a complex cloud system isn’t merely to cut costs. Take your fitness goals as an analogy. When attempting to become more fit, many people fixate on losing weight. But losing weight isn’t always a great key indicator in and of itself. You can lose weight as an outcome of being sick or dehydrated. When we aim for an indicator like weight loss, what we actually care about is our overall fitness: how we look and feel when being active, the ability to play with our kids, live a long life, dance—that sort of thing. Similarly, in the world of cost optimization, it’s not about just cutting costs. It’s about identifying waste and ensuring you are maximizing the value of every dollar spent.
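The label-driven, "cost per business outcome" analysis described above (it costs $X to serve customers who bring in $Y revenue) can be sketched in a few lines. This is a minimal illustration that assumes billing line items have already been exported with labels attached; the field names and figures are hypothetical, not a real billing-export schema:

```python
# Illustrative sketch: attribute labeled billing line items to a team,
# then express spend per business metric (cost per 1,000 active users).
# Line-item fields, teams, and figures are made up for the example.
from collections import defaultdict

line_items = [
    {"service": "Compute Engine", "cost": 24_000.0, "labels": {"team": "memes"}},
    {"service": "Cloud Storage",  "cost": 8_000.0,  "labels": {"team": "memes"}},
    {"service": "BigQuery",       "cost": 4_000.0,  "labels": {"team": "search"}},
]

# Roll up cost by the "team" label; anything unlabeled is surfaced too.
cost_by_team = defaultdict(float)
for item in line_items:
    cost_by_team[item["labels"].get("team", "unlabeled")] += item["cost"]

# Join spend against a business metric each team tracks.
active_users = {"memes": 400_000, "search": 80_000}
for team, cost in sorted(cost_by_team.items()):
    per_1k = cost / (active_users[team] / 1_000)
    print(f"{team}: ${cost:,.0f} total, ${per_1k:.2f} per 1,000 users")
```

The point is the shape of the analysis, not the numbers: once every resource carries a team label, "memes spent $32,000 to serve 400,000 users" falls out of a simple group-by instead of a forensic investigation.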
Similarly, our most sophisticated customers aren’t fixated on a specific cost-cutting number; they’re asking a variety of questions to get at their overall operational fitness: What are we actually providing for our customers (the unit)? How much does it cost to provide that thing, and only that thing? How can we optimize all correlated spend per unit created? In short, they have created their own unit economics model. They ask these questions up front, and then work to build a system that enables them to answer these key questions as well as audit their behavior. This is not something we typically see in a crawl-state customer, but many of those in the walk state are employing some of these concepts as they design their systems for the future.

Implementing standardized processes from the get-go

Ensuring that you implement these recommendations consistently is something that must be designed and enforced systematically. Automation tools like Terraform and Cloud Deployment Manager can help create guardrails before you deploy a cloud resource; it is much more difficult to implement a standard retroactively. We have seen everything from IT Ops shutting off (or threatening to shut off) untagged resources to established “walls of shame” for people who didn’t adhere to standards. (We’re fans of positive reinforcement, such as a pizza, or a trophy, or even a pizza trophy.)

What’s an example of an optimization process that you might want to standardize early on? Deploying resources, for one. Should every engineer really be able to deploy any amount of any resource? Probably not. We see this as an area where creating a standard up front can make a big difference. Structuring your resources for effective cost management is important, too. It’s best to adopt the simplest structure that satisfies your initial requirements, then adjust your resource hierarchy as your requirements evolve.
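As a sketch of the kind of pre-deployment guardrail described above, the check below rejects a resource definition that is missing the organization's required labels. The required-label set and the resource shape are illustrative assumptions; in practice a check like this would live in a CI pipeline or a policy layer in front of Terraform, not as a standalone script:

```python
# Hypothetical guardrail: block deployment of resources that are missing
# the organization's standardized labels. The label set and resource
# dictionary are illustrative, not a Google Cloud API.

REQUIRED_LABELS = {"team", "env", "cost-center"}

def missing_labels(resource: dict) -> set:
    """Return the required labels absent from a resource definition."""
    return REQUIRED_LABELS - set(resource.get("labels", {}))

resource = {
    "name": "vm-batch-01",
    "labels": {"team": "analytics", "env": "dev"},
}

gaps = missing_labels(resource)
if gaps:
    print(f"Blocking deploy of {resource['name']}: missing {sorted(gaps)}")
```

Enforcing labels at deploy time is far cheaper than the retroactive alternatives the paragraph above mentions, such as shutting off untagged resources after the fact.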
You can use the setup wizard to guide you through recommendations and steps to create your optimal environment. Within this resource hierarchy, you can use projects, folders, and labels to help create logical groupings of resources that support your management and cost attribution requirements.

Example of a resource hierarchy for cloud

In your resource hierarchy, labeling resources is a top priority for organizations interested in managing costs. Labels are essentially your ability to attribute costs back to a specific business, service, unit, leader, and so on. Without labeling resources, it’s incredibly difficult to decipher how much it costs you to do any specific thing. Rather than saying you spent $36,000 on Compute Engine, it’s preferable to be able to say you spent $36,000 to deliver memes to 400,000 users last month. The second statement is much more insightful than the first. We highly recommend creating standardized labels together with the engineering and finance teams, and using labels for as many resources as you can.

Review and repeat for best results

As a general practice, you should meet regularly with the appropriate teams to review usage trends and adjust forecasting as necessary. The Cloud Billing console makes it easy to review and audit your cloud spend on a regular basis, while custom dashboards provide more granular cost views. Without regular reviews and appropriate unit economics, as well as visibility into your spend, it’s hard to move beyond being reactive when you observe a spike in your bill.

If you’re a stable customer, you can review your spending less frequently, as the opportunities to tweak your strategies will depend on items like new Google Cloud features vs. a business change on your product roadmap. But if you’re deploying many new applications and spending millions of dollars per month, a small investment in conducting more frequent cost reviews can lead to big savings in a short amount of time.
In some cases, our more advanced customers meet and adjust forecasts as often as every day. When you’re spending millions of dollars a month, even a small percentage shift in your overall bill can take money away from things like experimenting with new technologies or hiring additional engineers. Truly operating efficiently and maximizing the value of the cloud takes multiple teams with various backgrounds working together to design a system catered to your specific business needs. One best practice is to establish a review cadence based on how fast you are building and spending in the cloud. The Iron Triangle is a commonly used framework that weighs cost vs. speed vs. quality. You can work with your teams to set up an agreed-upon framework that works for your business. From there, you can either tighten your belt or invest more.

The tools of the cost optimization trade

Once you have a firm grasp on how to approach cost optimization in the cloud, it’s time to think about the various tools at your disposal. At a high level, cost management on Google Cloud relies on three broad kinds of tools.

Cost visibility—knowing what you spend in detail, how specific services are billed, and the ability to display how (or why) you spent a specific amount to achieve a business outcome. Here, keep in mind key capabilities such as the ability to create shared accountability, hold frequent cost reviews, analyze trends, and visualize the impact of your actions on a near-real-time basis. Using a standardized strategy for organizing your resources, you can accurately map your costs to your organization’s operational structure to create a showback/chargeback model. You can also use cost controls like budget alerts and quotas to keep your costs in check over time.

Resource usage optimization—reducing waste in your environment by optimizing usage.
The goal is to implement a specific set of standards that draws an appropriate intersection between cost and performance within an environment. This is the lens to look through when reviewing whether there are idle resources, better services on which to deploy an app, or even whether launching a custom VM shape might be more appropriate. Most companies that are successful at avoiding waste optimize resource usage in a decentralized fashion, as individual application owners are usually best equipped to shut down or resize resources given their intimate familiarity with the workloads. In addition, you can use Recommender to help detect issues like under- or over-provisioned VM instances or idle resources. Enabling your team to surface these recommendations automatically is the aim of any great optimization effort.

Pricing efficiency—capabilities such as sustained use discounts, committed use discounts, flat-rate pricing, per-second billing, and other volume discounting features that allow you to optimize rates for a specific service. These capabilities are best leveraged by more centralized teams within your company, such as a Cloud Center of Excellence (CCoE) or FinOps team, which can lower the potential for waste while optimizing coverage across all business units. This is something to review both before a cloud migration and regularly once you go live.

Considering both people and processes will go a long way toward making sure your standards are useful and aligned to what your business needs. Similarly, understanding Google Cloud’s cost visibility, resource usage optimization, and pricing efficiency features will give you the tools you need to optimize costs across all your technologies and teams.

How to prioritize recommendations

With lots of competing initiatives, it can be difficult to prioritize cost optimization recommendations and ensure your organization is making the time to review these efforts consistently.
Having visibility into the amount of engineering effort as well as the potential cost savings can help your team establish its priorities. Some customers focus solely on innovation and speed of migration for years on end, and over time their bad optimization habits compound, leading to substantial waste. Those funds could have gone toward developing new features, purchasing additional infrastructure, or hiring more engineers to improve feature development velocity. It’s important to find a balance between cost and velocity and to understand the ramifications of leaning too far in one direction over another. To help you prioritize one cost optimization recommendation over another, it’s a good idea to tag recommendations with an estimate of two characteristics:

- Effort: Estimated level of work (in weeks) required to coordinate the resources and implement a cost optimization recommendation.
- Savings: Amount of estimated potential savings (as a percentage per service) that you may realize by implementing a cost optimization recommendation.

While it’s not always possible to estimate with pinpoint accuracy how much a cost-savings measure will save you before testing, it’s important to try to make an educated guess for each effort. For instance, knowing that a certain change could potentially save you 60% on your Cloud Storage spend for project X should be enough to inform the prioritization matrix and help establish engineering priorities with your team. Sometimes you can estimate savings more precisely: with purchasing options, in particular, a FinOps team can estimate the potential savings of applying features like committed use discounts to a specific amount of infrastructure. The goal of this exercise is to let the team make informed decisions about where engineering effort goes, so they can focus their energy deliberately.
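As a toy illustration of this tagging exercise, the sketch below ranks a few hypothetical recommendations by estimated monthly savings per week of effort. The recommendation names, spend figures, and savings percentages are made-up placeholders for illustration, not Google Cloud's published prices or discount rates:

```python
# Toy prioritization of cost optimization recommendations.
# Each recommendation is tagged with estimated effort (weeks) and
# estimated savings (% of a service's monthly spend). All numbers
# here are illustrative placeholders.

def monthly_savings(service_spend: float, savings_pct: float) -> float:
    """Estimated dollars saved per month by one recommendation."""
    return service_spend * savings_pct / 100

def priority_score(service_spend: float, savings_pct: float,
                   effort_weeks: float) -> float:
    """Simple ratio: estimated monthly savings per week of effort."""
    return monthly_savings(service_spend, savings_pct) / effort_weeks

recommendations = [
    # (name, monthly service spend ($), est. savings (%), est. effort (weeks))
    ("Lifecycle rules on Cloud Storage, project X", 10_000, 60, 2),
    ("Resize over-provisioned VMs",                 40_000, 15, 4),
    ("1-year committed use discount on compute",    40_000, 30, 1),
]

ranked = sorted(
    recommendations,
    key=lambda r: priority_score(r[1], r[2], r[3]),
    reverse=True,
)

for name, spend, pct, weeks in ranked:
    print(f"{name}: ~${monthly_savings(spend, pct):,.0f}/month "
          f"for {weeks} wk of effort")
```

Real estimates will be rougher than this, but even coarse effort and savings tags make the trade-offs explicit when your team sits down for a prioritization review.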
From principles to practice

Optimizing cloud costs isn’t a checklist, it’s a mindset; you’ll have the best results if you think strategically and establish strong processes to help you stay on track. But there are also lots of service-specific steps you can take to get your bill under control. For more tactical advice, check out these posts on how to save on your Google Cloud compute, storage, networking, data analytics, and serverless applications. Or, for a handy reference, download our “Understanding the principles of cost optimization” ebook, which brings several of these topics together in one place.
Source: Google Cloud Platform

Anthos in depth: Transforming your legacy Java applications

In many organizations, legacy applications can hold back business initiatives—and the business processes that rely on them. While new applications are being developed using cloud-native technologies, most existing applications are still large monolithic apps that run on proprietary application servers, bringing high licensing costs, slow release cycles, and vendor lock-in. And the lion’s share of those apps are written in Java. At Google Cloud, we’ve developed prescriptive guidelines to help you modernize your Java applications, for immediate operational cost savings, reduced dependencies on proprietary software, and increased software delivery speed. A key part of that path is Anthos, an open application platform that can help you modernize your existing apps with containerized microservices alongside VMs, plus integration with tools for policy management and modern application development and delivery. In this way, Anthos lets you innovate faster and deliver new experiences to your customers, while reducing risk and costs.

Organizations looking to modernize a Java application typically have the following two goals in mind:

- Adopt modern development frameworks and architectures – Many modern Java applications are already built on open frameworks like Spring Boot. These frameworks allow enterprises to iterate quickly and release features faster. At the same time, they can help developers adopt microservices-based development methods, where application “monoliths” are split into numerous smaller, more manageable problem domains. This microservices approach allows development teams to work independently on their domain without affecting others, so that they can release features faster and provide a better experience for customers.

- Move to containers and adopt DevOps practices – Organizations are trying to move away from traditional virtualization platforms and adopt modern container-based platforms—even for their legacy Java applications.
Containers make efficient use of compute resources (compared to VMs) and can easily move between environments like multiple clouds and on-prem data centers. Here again, Anthos can help. Anthos lets enterprises build, deploy, and manage containerized applications anywhere in a secure, consistent manner. Anthos Config Management lets you define, automate, and enforce policies across environments, and Anthos Service Mesh lets you securely connect both containerized and non-containerized (i.e., VM-based) applications. Anthos also enables enterprises to secure their hybrid and multi-cloud deployments by providing consistent controls across any environment. From there, adopting containerization and a consistent, policy-based platform like Anthos can help you achieve the DevOps goals of creating more secure and reliable applications, faster. Part of that is implementing modern CI/CD practices, which create orchestrated and automated software delivery pipelines to improve velocity and reliability while reducing risk.

Your paths to Java app modernization

Here at Google Cloud, we see enterprise Java applications fall into one of three categories, and each requires a unique approach to modernization:

- Applications using modern frameworks – These are newer apps written on a modern Java framework like Spring Boot. The next step for these applications is to take advantage of Google Cloud managed services. Spring Cloud GCP lets your developers easily adopt Google Cloud services. Build tools like Jib and Cloud Native Buildpacks provide an easy way to containerize these applications using well-known developer build tools (like Maven and Gradle). Once containerized, you can manage these applications using Anthos.

- Packaged software – These are commercial off-the-shelf applications to which you don’t have the source code. For these types of applications, Migrate for Anthos allows you to move VMs from on-premises data centers to either VMs running in Google Cloud or containers running on Anthos GKE, a cloud-agnostic implementation of GKE that runs in multiple environments. This lets you quickly adopt a modern platform, whether on-prem or in the cloud, and is especially useful if you have a large number of applications, as they can be migrated with minimal effort.

- Existing traditional applications – These are your Java applications developed in-house using older frameworks and often running on commercial proprietary application servers. Google Cloud provides a two-step process for modernizing these applications. First, use Migrate for Anthos to quickly migrate existing applications running in VMs to containers on Anthos. You can also migrate off of commercial app servers to open-source app servers to reduce or eliminate licensing costs. Second, refactor and rewrite migrated applications to use modern frameworks like Spring Boot and Spring Cloud GCP and move to a microservices development model.

Google Cloud partners will help accelerate your modernization journey

Google Cloud and our partners are committed to modernizing and running your key workloads in whatever environment is best suited for your business. “Application modernization with Anthos enables significant improvements in scalability and high availability for business-critical applications, and lowers operational cost,” said Volodymyr Yelchev, VP of Critical Services at SoftServe. “We see the lion’s share of the existing applications in enterprises written in Java. Significant re-architecting and refactoring is required to meet new business objectives. Using Migrate for Anthos from Google Cloud, SoftServe has developed a proven, end-to-end approach for legacy, VM-based Java stack transformation to a modern, containerized application set.
Our approach reduces complexity and improves speed of deployment within a short period of time.” If you’re looking to get started, we have partners that are eager to help you in your modernization journey.

Getting started

The need to improve operational cost savings by adopting cloud-native practices is more important than ever, and Anthos can meet you where you are in your modernization journey. According to the Forrester Total Economic Impact study, customers adopting Anthos can achieve up to a 4.8x return on investment (ROI) within three years.1 If you have existing Java applications that need to be modernized, we are here to help. Please reach out to your account team or fill out this form so we can schedule an application assessment workshop with you.

1. New Technology Projection: The Total Economic Impact™ Of Anthos, September 2019, a commissioned study conducted by Forrester Consulting on behalf of Google. *Based on the customer interview.
Source: Google Cloud Platform

New WAF capabilities in Cloud Armor for on-prem and cloud workloads

No matter where your applications are deployed, it’s important for admins to be able to quickly and easily scale security across the entire infrastructure. Google Cloud Armor is the web application firewall (WAF) and DDoS mitigation service that helps users defend their web apps and services at Google scale, at the edge of Google’s network. Last November, we introduced new WAF capabilities in beta, along with increased telemetry through the Security Command Center. Since then, we’ve seen rapid adoption from customers looking to deploy Google Cloud-native offerings to defend and maintain the availability of their applications. As a result, we recently made the WAF generally available to all customers, including features such as:

- Geo-based access control
- Pre-configured WAF rules for SQL injection (SQLi) and Cross-Site Scripting (XSS) defense
- A custom rules language for custom Layer 7 (L7) filtering policies
- Security Command Center integration

“At ATB Financial, security is a top priority,” says Innes Holman, Head of Technology Strategy and Architecture at ATB Financial. “With Google Cloud Armor, we can safely deploy workloads in the cloud. It protects our applications at scale while helping meet ATB’s security and compliance requirements.”

What’s new

Today, we’re also announcing the general availability of Cloud Armor support for Cloud CDN for origin server protection, as well as support for hybrid deployments, to help protect applications and services whether they’re deployed on Google Cloud, in a hybrid deployment, or in a multi-cloud architecture.

Cloud Armor for Cloud CDN: origin server protection

Web applications and websites often serve both static and dynamic content. While enabling Cloud CDN helps optimize the way static content is served, a client request for dynamic content still needs to reach the application server for processing and response.
A CDN can typically scale to serve cached content in the face of an attack, but origin servers frequently need an upstream WAF to prevent unwelcome requests from overloading limited resources. Enterprises frequently have a security and compliance need to apply WAF rules and L7 filtering policies to reduce risk and ensure the availability of the application server. To fulfill this need, you can now configure Cloud Armor security policies to help protect backend services with Cloud CDN enabled. When a security policy is attached to a CDN-enabled backend service, Cloud Armor will enforce the policy for all requests destined for the origin server, including cache misses and dynamic requests bypassing the cache. To get started, in your Google Cloud Load Balancing (GCLB) configuration, enable a backend service for Cloud CDN and then expand Advanced Configurations to attach a Cloud Armor security policy.

Cloud Armor for hybrid and multi-cloud deployments

Cloud Armor, in addition to Cloud CDN and the Cloud Load Balancers, can now be used to front applications that are not deployed on Google Cloud. Enterprise workloads are increasingly complex and are often deployed with infrastructure on-prem and in the cloud, or spanning multiple infrastructure providers. Whether such hybrid architectures are a permanent fixture of an enterprise’s operations or part of a migration plan, security teams need to apply consistent security controls regardless of where the application is deployed—even internet-facing applications deployed on-premises need to be protected from attacks from the internet.

Users can now leverage the full scale and scope of Google’s edge infrastructure, including Cloud Armor, to help protect workloads that are deployed anywhere, as long as they are accessible over the public internet. To get started, configure a GCLB backend service to point at an Internet Network Endpoint Group (NEG).
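Concretely, that first step might look something like the following gcloud sketch. The resource names and origin FQDN are placeholders, and the exact flag spellings should be verified against the Internet NEG documentation before use:

```shell
# Create a global internet NEG whose endpoint is the external origin
# (an app running on-prem or in another cloud, reachable over the internet).
gcloud compute network-endpoint-groups create example-origin-neg \
    --network-endpoint-type=internet-fqdn-port --global

# Register the external origin as the NEG's endpoint.
gcloud compute network-endpoint-groups update example-origin-neg --global \
    --add-endpoint="fqdn=app.example.com,port=443"

# Create an external backend service and point it at the internet NEG.
gcloud compute backend-services create example-backend \
    --protocol=HTTPS --global

gcloud compute backend-services add-backend example-backend --global \
    --network-endpoint-group=example-origin-neg \
    --global-network-endpoint-group
```

From here, the backend service behaves like any other GCLB backend for the purpose of attaching a security policy.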
Next, attach a Cloud Armor security policy to that backend service and configure one or more rules to filter Layer 7 traffic targeting the protected application.

Next steps

With Google Cloud Armor’s recent releases, Google Cloud customers can now utilize a native, enterprise-grade WAF and DDoS mitigation service that leverages the full scale of Google’s edge network to help defend their applications from DDoS attacks and mitigate risk from targeted application attacks. The support for hybrid deployments and CDN-enabled workloads means you have the option of deploying Google Cloud edge services—including Google Cloud Armor, Cloud CDN, and Cloud Load Balancing—to help protect applications and websites, whether they’re deployed on Google Cloud, on-premises, or with other cloud providers, while maintaining a uniform edge and a consistent set of policies and access controls. To learn more, check out the resources below:

- Cloud Armor documentation and resources
- Cloud Armor security policy overview
- WAF rule tuning guide
- Language specification
- Internet NEG documentation
- CDN origin protection documentation
Source: Google Cloud Platform