The Google Cloud DevOps Awards: Final call for submissions!

DevOps continues to be a major business accelerator for our customers, and we regularly see success when customers apply DevOps Research and Assessment (DORA) principles and findings to their organizations. That is why the first annual DevOps Awards are designed to recognize customers shaping the future of DevOps with DORA. Share your inspirational story, supported by examples of business transformation and operational excellence, today.

With input from over 32,000 professionals worldwide and seven years of research, the Accelerate State of DevOps Report is the largest and longest-running DevOps research of its kind. The categories of the DevOps Awards map closely to the practices and capabilities that drive high performance, as identified by the report. Organizations of any size, industry, and region can apply to one or all ten categories. The categories and their descriptions are below:

- Optimizing for speed without sacrificing stability: This award recognizes one Google Cloud customer that has driven improvements in speed without sacrificing quality.
- Embracing easy-to-use tools to improve remote productivity: The research shows that high-performing engineers are 1.5 times more likely to have easy-to-use tools. To be eligible for this award, share how easy-to-use DevOps tools have helped you improve engineer productivity.
- Mastering effective disaster recovery: This award will go to one customer that demonstrates how a robust, well-tested disaster recovery (DR) plan can protect business operations.
- Leveraging loosely coupled architecture: This award recognizes one customer that successfully transitioned from a tightly coupled architecture to service-oriented and microservice architectures.
- Unleashing the full power of the cloud: This award recognizes a Google Cloud customer leveraging all five capabilities of cloud computing to improve software delivery and organizational performance: on-demand self-service, broad network access, measured service, rapid elasticity, and resource pooling. Read more about the five essential characteristics of cloud computing.
- Most improved documentation quality: This award recognizes one customer that has successfully integrated documentation into their DevOps workflow using Google Cloud Platform tools.
- Reducing burnout during COVID-19: We will recognize one customer that implemented effective processes to improve work/life balance, foster a healthy DevOps culture, and ultimately prevent burnout.
- Utilizing IT operations to drive informed business decisions: This award will go to one customer that employed DevOps best practices to break down silos between development and operations teams.
- Driving inclusion and diversity in DevOps: To highlight the importance of a diverse organization, this award honors one Google Cloud customer that either prioritizes diversity and inclusion initiatives to transform and strengthen their business, or creates unique solutions that help build a more diverse, inclusive, and accessible workplace for their customers, leading to higher levels of engagement, productivity, and innovation.
- Accelerating DevOps with DORA: This award recognizes one customer that has successfully integrated the most DORA practices and capabilities into their workflow using Google Cloud Platform tools.

This is your chance to showcase your innovation globally and become a role model for the industry.
Winners will receive invitations to roundtables and discussions, press materials, website and social badges, special announcements, and even a trophy. We are excited to see all your great submissions. Applications are open until January 31st, so apply for the category that best suits your company and stay tuned for our awards show in February 2022! For more information on the awards, visit our webpage and check out The Google Cloud DevOps Awards Guidebook.
Source: Google Cloud Platform

Data governance in the cloud – part 1 – People and processes

In this blog, we’ll cover data governance as it relates to managing data in the cloud. We’ll discuss the operating model, which is independent of technology, whether on-prem or cloud; the processes that ensure governance; and the technologies available to enforce data governance in the cloud. This is a two-part blog on data governance. In this first part, we’ll discuss the role of data governance, why it’s important, and the processes that need to be implemented to run an effective data governance program. In the second part, we’ll dive into the tools and technologies available to implement data governance processes, e.g. data quality, data discovery, lineage tracking, and security. For an in-depth and comprehensive text on data governance, check Data Governance: People, Processes, and Tools to Operationalize Data Trustworthiness.

What is Data Governance?

Data governance is a function of data management that creates value for the organization by implementing processes to ensure high data quality, and by providing a platform that makes it easier to share data securely across the organization while ensuring compliance with all regulations. The goal of data governance is to maximize the value derived from data, build user trust, and ensure compliance by implementing the required security measures.

Data governance needs to be in place from the time a piece of data is collected or generated until the point at which it is retired. Along the way, across the full lifecycle of the data, data governance focuses on making the data available to all stakeholders in a form they can readily access and use in a manner that generates the desired business outcomes (insights, analysis) and, where relevant, conforms to regulatory standards. These regulatory standards are often an intersection of industry (e.g. healthcare), government (e.g. privacy), and company (e.g. non-partisan) rules and codes of behavior. See more details here.

Why is Data Governance Important?

In the last decade, data generated by users of mobile phones, health and fitness devices, IoT devices, retail beacons, and the like has grown exponentially. At the same time, the cloud has made it easier to collect, store, and analyze this data at a lower cost. As data volumes and cloud adoption continue to grow, organizations face a dual mandate: democratize and embed data in all decision making, while ensuring it is secured and protected from unauthorized use. An effective data governance program is needed to fulfill this dual mandate, making the organization data-driven on one hand and securing data from unauthorized use on the other. Organizations without an effective data governance program will suffer from compliance violations leading to fines; poor data quality, which leads to lower-quality insights that affect business decisions; difficulty finding data, which delays analysis and causes missed business opportunities; and poorly trained AI models, which reduce model accuracy and the benefits of using AI.

An effective data governance strategy encompasses people, processes, and tools and technologies.
It drives data democratization to embed data in all decision making, builds user trust, increases brand value, and reduces the chance of compliance violations, which can lead to substantial fines and loss of business.

Components of Data Governance

People and Roles in Data Governance

A comprehensive data governance program starts with a data governance council composed of leaders representing each business unit in the organization. This council establishes the high-level governing principles for how data will be used to drive business decisions. The council, with the help of key people in each business function, identifies the data domains, e.g. customer, product, patient, and provider. The council then assigns data ownership and stewardship roles for each data domain. These are senior-level roles, and each owner is held accountable, and accordingly rewarded, for driving the data goals set by the data governance council. Data owners and stewards are assigned from the business; for example, the customer data owner may come from marketing or sales, the finance data owner from finance, and the HR data owner from HR.

The role of IT is that of data custodian. IT ensures the data is acquired, protected, stored, and shared according to the policies specified by data owners. As data custodian, IT does not make decisions on data access or data sharing. IT’s role is limited to managing technology to support the implementation of the data management policies set by data owners.

Processes in Data Governance

Each organization will establish processes to implement the goals set by the data governance council. The processes are established by data owners and data stewards for each of their data domains, and they focus on the following high-level goals:

1. Data meets the specified data quality standards, e.g. 98% completeness, no more than 0.1% duplicate values, 99.99% consistency across different tables, and a defined standard for on-time delivery.
2. Data security policies ensure compliance with internal and external requirements:
- Data is encrypted at rest and in transit.
- Data access is limited to authorized users only.
- All sensitive data fields are redacted or encrypted, and dynamically decrypted only for authorized users.
- Data can be joined for analytics in de-identified form, e.g. using deterministic encryption or hashing (see the sketch at the end of this article).
- Audit records are available for authorized access as well as unauthorized attempts.
3. Data sharing with external partners is available securely via APIs.
4. Data handling complies with industry- and geography-specific regulations, e.g. HIPAA, PCI DSS, GDPR, CCPA, LGPD.
5. Data replication is minimized.
6. Data users have centralized data discovery via data catalogs.
7. Data lineage can be traced to identify data quality issues, find sources of data replication, and help with audits.

Technology

Implementing the processes specified in the data governance program requires technology. From securing data and retaining and reporting audit records to automating monitoring and alerts, multiple technologies are integrated to manage the data lifecycle. In Google Cloud, a comprehensive set of tools enables organizations to manage their data securely and drive data democratization. Data Catalog enables users to easily find data from one centralized place across Google Cloud. Data Fusion tracks lineage, so data owners can trace data at every point in the data lifecycle and fix issues that may be corrupting it. Cloud Audit Logs retain the audit records needed for compliance.
Dataplex provides intelligent data management, centralized security and governance, automatic data discovery, metadata harvesting, lifecycle management, and data quality with built-in AI-driven intelligence. We will discuss the use of tools and technologies to implement governance in part 2 of this blog.
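One of the processes above calls for joining data in de-identified form using deterministic encryption or hashing. As a concrete illustration, here is a minimal Python sketch of the hashing approach; the key handling is deliberately simplified for illustration, and in practice the key would live in a KMS-backed secret store and the tokenization would typically run inside the warehouse rather than in application code.

```python
import hashlib
import hmac

# Simplified for illustration: a real pipeline would fetch this key from a
# KMS-backed secret store, never hard-code it.
KEY = b"replace-with-a-kms-managed-secret"

def deterministic_token(value: str) -> str:
    """Return a stable, non-reversible token for a sensitive value.

    The same input always yields the same token, so two de-identified
    tables remain joinable on this column, while the original value
    cannot be recovered without the key.
    """
    normalized = value.strip().lower().encode("utf-8")
    return hmac.new(KEY, normalized, hashlib.sha256).hexdigest()

# Two datasets tokenized with the same key can still be joined:
orders = {deterministic_token("alice@example.com"): "order-123"}
tickets = {deterministic_token("Alice@example.com "): "ticket-789"}
assert orders.keys() == tickets.keys()
```

Because the function is deterministic, analysts can compute joins and aggregates over the tokenized column without ever seeing the underlying identifier.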
Source: Google Cloud Platform

Megatrends drive cloud adoption—and improve security for all

We are often asked if the cloud is more secure than on-premises infrastructure. The quick answer is that, in general, it is. The complete answer is more nuanced and is grounded in a series of cloud security “megatrends” that drive technological innovation and improve the overall security posture of cloud providers and customers.

An on-prem environment can, with a lot of effort, reach the same default level of security as a reputable cloud provider’s infrastructure. Conversely, a weak cloud configuration can give rise to many security issues. But in general, the base security of the cloud, coupled with a suitably protected customer configuration, is stronger than most on-prem environments. Google Cloud’s baseline security architecture adheres to zero-trust principles: the idea that every network, device, person, and service is untrusted until it proves itself. It also relies on defense in depth, with multiple layers of controls and capabilities to protect against the impact of configuration errors and attacks. At Google Cloud, we prioritize security by design and have a team of security engineers who work continuously to deliver secure products and customer controls. We also take advantage of industry megatrends that increase cloud security further, outpacing the security of on-prem infrastructure.

These eight megatrends compound the security advantages of the cloud compared with on-prem environments (or at least those that are not part of a distributed or trusted partner cloud). IT decision-makers should pay close attention to these megatrends because they are not transient issues to be ignored once 2023 rolls around: they guide the development of cloud security and technology, and will continue to do so for the foreseeable future. At a high level, the eight megatrends are:

- Economy of scale: Decreasing the marginal cost of security raises the baseline level of security.
- Shared fate: A flywheel of increasing trust drives more transition to the cloud, which compels even higher security and even more skin in the game from the cloud provider.
- Healthy competition: The race by deep-pocketed cloud providers to create and implement leading security technologies is the tip of the spear of innovation.
- Cloud as the digital immune system: Every security update the cloud gives the customer is informed by some threat, vulnerability, or new attack technique, often identified through someone else’s experience. Enterprise IT leaders use this accelerating feedback loop to get better protection.
- Software-defined infrastructure: Cloud is software-defined, so it can be dynamically configured without customers having to manage hardware placement or cope with administrative toil. From a security standpoint, that means specifying security policies as code and continuously monitoring their effectiveness.
- Increasing deployment velocity: Because of the cloud’s vast scale, providers have had to automate software deployments and updates, usually with automated continuous integration/continuous deployment (CI/CD) systems. That same automation delivers security enhancements, resulting in more frequent security updates.
- Simplicity: Cloud becomes an abstraction-generating machine for identifying, creating, and deploying simpler default modes of operating securely and autonomically.
- Sovereignty meets sustainability: The cloud’s global scale and ability to operate in localized and distributed ways creates three pillars of sovereignty. This global scale can also be leveraged to improve energy efficiency.

Let’s look at these megatrends in more depth.

Economy of scale: Decreasing marginal cost of security

Public clouds are of sufficient scale to implement levels of security and resilience that few organizations have previously constructed. At Google, we run a global network, and we build our own systems, networks, storage, and software stacks. We equip all of this with a level of default security that has not been seen before: from our Titan security chips, which assure a secure boot, to our pervasive data-in-transit and data-at-rest encryption, to confidential computing nodes that encrypt data even while it is in use. We prioritize security, of course, but prioritizing security becomes easier and cheaper because the cost of an individual control at such scale decreases per unit of deployment. As the scale increases, the unit cost of control goes down. As the unit cost goes down, it becomes cheaper to put those increasing baseline controls everywhere. Finally, where there is necessary incremental cost to support specific configurations, enhanced security features, and services that support customer security operations and updates, even the per-unit cost of that will decrease. It may be chargeable, but it is still a lower cost than on-prem services, whose economics are going in the other direction. Cloud is, therefore, the strategic epitome of raising the security baseline by reducing the cost of control. The measurable level of security can’t help but increase.

Shared fate: The flywheel of cloud expansion

The long-standing shared responsibility model is conceptually correct. The cloud provider offers a secure base infrastructure (security of the cloud), and the customer configures their services on that in a secure way (security in the cloud). But if the shared responsibility model is used more to allocate responsibility when incidents occur and less as a means of understanding mutual collective responsibility, then we are not living up to mutual expectations. Taking a broader view of the shared responsibility model, we should use it to create a mutually beneficial shared fate. We’re in this together. We know that if our customers are not secure, then we as cloud providers are collectively not successful. This shared fate extends beyond just Google Cloud and our customers; it affects all clouds, because a trust issue in one affects trust in all. If that trust issue makes the cloud “look bad,” then current and potential future customers might shy away from the cloud, which ultimately puts them in a less-secure position. This is why our security mission is a triad: Secure the Cloud (not only Google Cloud), Secure the Customer (shared fate), and Secure the Planet (and beyond).

Further, shared fate goes beyond the reality of shared consequences. We view it as a philosophy of deeply caring about customer security, which gives rise over time to elements like:

- Secure-by-default configurations: Our default configurations ensure security basics are enabled and all customers start from a high security baseline, even if some customers change that later.
- Secure blueprints: Highly opinionated configurations for assemblies of products and services in secure-by-default ways, with actual configuration code, so customers can more easily bootstrap a secure cloud environment.
- Secure policy hierarchies: Setting policy intent at one level in an application environment should automatically configure the rest of the stack, so there are no surprises or additional toil in lower-level security settings.
- Consistent availability of advanced security features: Providing advanced features to customers across a product suite, available for new products at launch, is part of the balancing act between faster launches and the need for security consistency across the platform. We reduce the risks customers face by consistently providing advanced security features.
- High-assurance attestation of controls: We provide this through compliance certifications, audit content, regulatory compliance support, and configuration transparency for ratings and insurance coverage from partners, such as through our Risk Protection Program.

Shared fate drives a flywheel of cloud adoption. Visibility into the presence of strong default controls, and transparency into their operation, increases customer confidence, which in turn drives more workloads onto the cloud. The presence of, and potential for, more sensitive workloads in turn inspires the development of even stronger default protections that benefit customers.

Healthy competition: The race to the top

The pace and extent of security feature enhancement is accelerating across the industry. This massive, global-scale competition to keep increasing security in tandem with agility and productivity benefits everyone. For the first time in history, we have companies with vast resources working hard to deliver better security, as well as more precise and consistent ways of helping customers manage security. While some are ahead of others, perhaps sustainably so, what is consistent is that cloud will always lead on-prem environments, which have less competitive impetus to provide progressively better security. On-prem may never go away completely, but cloud competition drives security innovation in a way that on-prem hasn’t and won’t.

Cloud as the digital immune system: Benefit for the many from the needs of the few(er)

Security improvements in the cloud happen for several reasons:

- The cloud provider’s large number of security researchers and engineers postulate a need for an improvement based on deep theoretical and practical knowledge of attacks.
- A cloud provider with significant visibility into the global threat landscape applies knowledge of threat actors and their evolving attack tactics to drive not just specific new countermeasures but also means of defeating whole classes of attacks.
- A cloud provider deploys red teams and world-leading vulnerability researchers to constantly probe for weaknesses, which are then mitigated across the platform.
- The cloud provider’s software engineers often incorporate and curate open-source software, and often support the community to drive improvements for the benefit of all.
- The cloud provider embraces vulnerability discovery and bug bounty programs to attract many of the world’s best independent security researchers.
- And, perhaps most importantly, the cloud provider partners with many of its customers’ security teams, who have a deep understanding of their own security needs, to drive security enhancements and new features across the platform.

This is a vast, global forcing function of security enhancements which, given the other megatrends, is applied relatively quickly and cost-effectively.
If the customer’s organization cannot apply this level of resources, and realistically even some of the biggest organizations can’t, then an optimal security strategy is to embrace every security feature update the cloud provides to protect networks, systems, and data. It’s like tapping into a global digital immune system.

Software-defined infrastructure: Continuous controls monitoring vs. policy intent

One source of the cloud’s comparative advantage over on-prem is that it is a software-defined infrastructure. This is a particular advantage for security, since configuration in the cloud is inherently declarative and programmatically applied. It also means that configuration code can be overlaid with embedded policy intent (policy-as-code and controls-as-code). The customer validates their configuration by analysis, and can then continuously assure that the configuration corresponds to reality. They can model changes and apply them with less operating risk, permitting phased-in changes and experiments. As a result, they can take more aggressive stances and apply tighter controls with less reliability risk. This means they can easily add more controls to their environment and update it continuously. This is another example of where cloud security aligns fully with business and technology agility.

The BeyondProd model and the SLSA framework are prime examples of how our software-defined infrastructure has helped improve cloud security. BeyondProd and the BeyondCorp framework apply zero-trust principles to protecting cloud services. Just as not all users are in the same physical location or using the same devices, developers do not all deploy code to the same environment. BeyondProd enables microservices to run securely with granular controls in public clouds, private clouds, and third-party hosted services. The SLSA framework applies this approach to the complex nature of modern software development and deployment. Developed in collaboration with the Open Source Security Foundation, the SLSA framework formalizes criteria for software supply chain integrity. That’s no small hill to climb, given that today’s software is made up of code, binaries, networked APIs, and their assorted configuration files.

Managing security in a software-defined infrastructure means the customer can intrinsically deliver continuous controls monitoring and constant inventory assurance, and can operate at an “efficient frontier” of a highly secure environment without incurring significant operating risks.

Increasing deployment velocity

Cloud providers use a continuous integration/continuous deployment model. This is a necessity for enabling innovation through frequent improvements, including security updates supported by a consistent version of products everywhere, as well as for achieving reliability at scale. Cloud security and other mechanisms are API-based and uniform across products, which enables the management of configuration in programmatic ways, also known as configuration-as-code. Combined with the cloud’s software-defined nature, configuration-as-code enables customers to implement CI/CD approaches for software deployment and configuration, bringing consistency to their use of the cloud. This automation and increased velocity decrease the time customers spend waiting for fixes and features to be applied. That includes the speed of deploying security features and updates, and it permits fast rollback for any reason.
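To make the idea of controls-as-code and continuous controls monitoring concrete, here is a hedged Python sketch of one such check: policy intent declared in code (“no storage bucket may be publicly readable”) compared continuously against reality. It assumes the google-cloud-storage client library and default application credentials; the project ID is a placeholder, and a real deployment would run on a schedule and feed an alerting pipeline rather than print.

```python
from google.cloud import storage

# Policy intent, declared as code: no bucket may be readable by the public
# internet. A real system would load this from a reviewed, version-controlled
# policy file rather than a constant.
FORBIDDEN_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def find_public_buckets(project_id: str) -> list[str]:
    """Return the names of buckets whose IAM policy grants public access."""
    client = storage.Client(project=project_id)
    violations = []
    for bucket in client.list_buckets():
        policy = bucket.get_iam_policy(requested_policy_version=3)
        for binding in policy.bindings:
            if FORBIDDEN_MEMBERS & set(binding.get("members", [])):
                violations.append(bucket.name)
                break
    return violations

if __name__ == "__main__":
    # "my-project" is a placeholder project ID.
    for name in find_public_buckets("my-project"):
        print(f"Policy violation: bucket {name} is publicly accessible")
```

The point is less this specific check than the pattern: because the infrastructure is software-defined, the desired state is machine-readable and drift can be detected continuously instead of during an annual audit.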
Ultimately, this deployment velocity means the customer can move even faster with demonstrably less risk: eating your cake and having it too, as it were. Overall, we find deployment velocity to be a critical tool for strong security.

Simplicity: Cloud as an abstraction machine

A common concern about moving to the cloud is that it’s too complex. Admittedly, starting from scratch and learning all the features the cloud offers may seem daunting. Yet even today’s feature-rich cloud offerings are much simpler than prior on-prem environments, which are far less robust. The perception of complexity comes from people being exposed to the scope of the whole platform, despite greater abstraction of the underlying platform configuration. In on-prem environments, there are large teams of network engineers, system administrators, system programmers, software developers, security engineers, storage admins, and many more roles and teams, each with their own domain or silo to operate in. That loose-flying collection of technologies, with its myriad configuration options and incompatibilities, required a degree of artisanal engineering that represents more complexity, and less security and resilience, than customers will encounter in the cloud.

Cloud is only going to get simpler, because the market rewards cloud providers for abstraction and autonomic operations. In turn, this permits more scale and more use, creating a relentless hunt for abstraction. As with our digital immune system analogy, the customer should see the cloud as an abstraction-pattern-generating machine: it takes the best operational innovations from tens of thousands of customers and assimilates them for the benefit of everyone. The increased simplicity and abstraction permit more explicit assertion of security policy, in more precise and expressive ways, applied in the right context. Simply put, simplicity removes potential surprise, and security issues are often rooted in surprise.

Sovereignty meets sustainability: Global to local

The cloud’s global scale and ability to operate in localized and distributed ways creates three potential pillars of sovereignty, which will be increasingly important in all jurisdictions and sectors. It can intrinsically support the need for national or regional controls, limits on data access, delegation of certain operations, and greater portability across services. The global footprint of many cloud providers means that cloud can more easily meet national or regional deployment needs. Workloads can also be more easily deployed to more energy-efficient infrastructure. That, coupled with the cloud’s inherent efficiency due to higher resource utilization, means cloud is more sustainable overall. By engaging with customers and policymakers across these pillars, we can provide solutions that address their requirements while optimizing for additional considerations like functionality, cost, infrastructure consistency, and developer experience.

Data sovereignty provides customers with a mechanism to prevent the provider from accessing their data, approving access only for specific provider behaviors that customers deem necessary. Examples of customer controls provided by Google Cloud include storing and managing encryption keys outside the cloud, giving customers the power to grant access to those keys only on the basis of detailed access justifications, and protecting data in use. With these features, the customer is the ultimate arbiter of access to their data.
Operational sovereignty provides customers with assurances that the people working at a cloud provider cannot compromise customer workloads. The customer benefits from the scale of a multi-tenant environment while preserving control similar to a traditional on-prem environment. Examples of these controls include restricting the deployment of new resources to specific provider regions, and limiting support personnel access based on predefined attributes such as citizenship or geographic location.

Software sovereignty provides customers with assurances that they can control the availability of their workloads and run them wherever they want, without being dependent on or locked in to a single cloud provider. This includes the ability to survive events that require them to quickly change where their workloads are deployed and what level of outside connection is allowed. This is only possible when two requirements are met, both of which simplify workload management and mitigate concentration risk: first, customers have access to platforms that embrace open APIs and services; and second, customers have access to technologies that support the deployment of applications across many platforms, in a full range of configurations including multi-cloud, hybrid, and on-prem, using orchestration tooling. Examples of these controls are platforms that allow customers to manage workloads across providers, and orchestration tooling that allows customers to create a single API that can be backed by applications running on different providers, including proprietary cloud-based and open-source alternatives.

This overall approach also provides a means for organizations (and the groups of organizations that make up a sector or national critical infrastructure) to manage concentration risk. They can do this either by relying on the increased regional and zonal isolation mechanisms in the cloud, or through improved means of configuring resilient multi-cloud services. This is also why the commitment to open source and open standards is so important.

The bottom line is that cloud computing megatrends will propel security forward faster, at less cost and with less effort, than any other security initiative. With the help of these megatrends, the advantage of cloud security over on-prem is inevitable.
Source: Google Cloud Platform

Carrefour Belgium: Driving a seamless digital experience with SAP on Google Cloud

Stijn Stabel’s first days as CTO of Carrefour Belgium were… challenging. “The data center was out of date, with a roof that leaked,” he recalls. Not quite what one would expect from the eighth-largest retailer in the world. Carrefour is a household name in Europe, Asia, and the Middle East, operating over 12,000 hypermarkets, groceries, and convenience stores in more than 30 countries. Carrefour Belgium has more than 10,000 employees operating 700 stores along with an eCommerce business. Nonetheless, Stabel’s goals were ambitious: “Our goal is to become a digital retail company,” he says. “We want to move quickly from being a slow mover in digital transformation to becoming trailblazing. That’s one of the coolest challenges you can have as a CTO.”

Three years later, Carrefour Belgium is well along the path to achieving that goal, having migrated nearly all of its SAP application stack, including finance, HR, and other systems, to Google Cloud. “We’re really going headfirst and full-on. It’s a huge challenge, but it’s definitely one of the most exciting transformations I have seen so far,” he says.

The challenges Carrefour Belgium faced went beyond an aging data center. With systems divided between two private clouds, there was no efficient way to leverage data for the advanced analytics Stabel knew the company would need to compete. “The world is changing at a record pace,” he says. “Either you’re keeping up with that or you’re not. Standing still means basically choosing to give up on the company’s progress — and at some point, to really give up on the company altogether.” This is especially true, he says, when it comes to creating a seamless customer experience both online and in stores. “Everything in the store will be digital,” he says. “How much longer are people going to put up with price tags printed on paper that have to be changed over and over? How long will it be acceptable to maintain a large ecological footprint? Sustainability will be increasingly important.”

Olivier Luxon, Carrefour Belgium’s CIO, agrees, emphasizing the centrality of customer experience in everything the company does and how quickly customer needs shifted due to the global pandemic. “What we really saw with COVID was customers seeking more digital services. That had been a trend previously, but it accelerated dramatically with COVID.”

The decision to move SAP to Google Cloud

After researching its options, Carrefour Belgium chose a “Google-first” cloud strategy for four reasons:

- Partnership: “Google listens to our needs and adapts to them instead of trying to box us in,” Stabel says.
- Technology: Google’s analytical and other tools, particularly BigQuery, were the deciding factor when it came to choosing a cloud provider for its SAP systems. “There’s no sound reason to move your SAP environment to a cloud host other than the one you’re going to use for analytics,” he says, noting that the company will eventually bring all of its SAP data into BigQuery for analytics. “Our data team is working on building out a foundation based on BigQuery, which will enable them to work on multiple use cases such as assortment planning and promotion. The goal is to become a truly data-driven company.”
- Security: Carrefour Belgium is implementing BeyondCorp, the zero trust security solution from Google Cloud. By shifting access controls from the network perimeter to individual users, BeyondCorp enables secure work from virtually any location without the need for a traditional VPN. “We’re going to be the first Carrefour division moving to that platform, so we’re very excited to be blazing the trail,” Stabel says.
- Value: “I have to report to a CFO, so partnership and technology alone are not enough to make a business case,” Stabel says. “I have to show demonstrable business value, and that’s what we get with Google Cloud.”

Carrefour Belgium’s migration strategy has been to lift and shift its legacy SAP ERP Central Component (ECC) environment before doing a greenfield implementation of S/4HANA on Google Cloud, a process that is already underway. Currently, the HR module has been upgraded to S/4HANA, with retail operations to follow.

More performance, better experience, greater insights

It’s still early days, but the move to Google Cloud has already paid dividends in improved performance for back-office operations, which, Olivier points out, frees time and resources to devote to serving customers better. Eventually, he believes, SAP on Google Cloud will have a direct impact on customer experience, particularly given the opportunities that data analytics will provide to better understand customer needs and meet them more effectively. “Data is becoming more and more important, not only for Carrefour, but for all the companies in the world,” Olivier says. “It drives personalized customer experience, promotions, operational efficiency, and finance. If you don’t set up the right data architecture on Day One, it will be close to impossible to be efficient as a company a few years from now.”

In the end, the goal is to provide Carrefour Belgium the tools it needs to serve customers better. “SAP supports our business by giving us the right tools and processes to manage areas including supply chain, finance, HR, and retail,” Olivier says. “What was missing, however, was the availability, scalability, and security we needed to better serve our employees, stores, and customers, and that’s something we got by moving to Google Cloud.” And by moving to Google Cloud, which has been carbon neutral since 2007 and is committed to operating entirely carbon-free by 2030, Carrefour is also able to pursue its sustainability objectives simply by modernizing its business operations in the cloud.

“Google is a company that eats, breathes, and sleeps digital,” Stabel says. “At its heart, Carrefour is a retail company. We know how to be a retailer. Our partnership is a cross-pollination. What I’m really looking forward to is continuing to learn from Google Cloud and seeing what other solutions we can adopt to improve Carrefour Belgium and better serve our users and customers.”

Learn more about Carrefour Belgium’s deployment and how you can accelerate your organization’s digital transformation by moving your SAP environment to Google Cloud.
Source: Google Cloud Platform

Top 10 takeaways from Looker’s 2021 JOIN@Home conference

JOIN@Home was an incredible celebration of the achievements the Looker community made in the last year, and I was proud to be a part of it. Prominent leaders in the data world shared their successes, tips, and plans for the future. In the spirit of keeping the learning alive, I summarized the top two takeaways from each of the keynotes. They’re accompanied by illustrations that were captured live during the sessions by a local artist. Plus, there’s a fun surprise for you at the end.

“Celebrating Data Heroes – Transforming Our World with Data”

Our opening keynote featured a number of inspiring data professionals who use Looker in their work every day to see trends, drive decision making, and grow their customer base. Some of their main takeaways:

You can use analytics to make change for the greater good. Surgeon scientist Dr. Cherisse Berry spoke of cross-referencing healthcare outcomes data, such as trauma care survival rates, how long patients wait before being seen, and whether patients were appropriately triaged, with demographic data to find gender and racial disparities in healthcare. For instance, she found that critically injured women receive trauma care less often than men. Because her analysis made the disparity known, informed decisions and actions can be taken to bring greater equality to New York state’s trauma care system.

Provide templates to make insights more easily available to more users, especially non-technical ones. Michelle Yurovsky of UiPath, an automation platform that helps customers avoid repetitive tasks, shared one of the key ways UiPath gets customers engaged: by providing dashboard templates that answer common automation questions. Customers get useful insights the second they click on the product. They can copy and modify the templates according to their business needs, so they’re less intimidated to start working with analytics, especially if they have no previous experience building dashboards.

“Developing a Better Future with Data”

This keynote looked to the future of analytics. Two major themes:

Composable analytics capabilities help make application development faster, easier, and more accessible. Composable analytics means creating a custom analytics solution using readily available components. You have access to composable analytics with Looker through the extension framework, which offers downloadable components you can use to build your application right on top of the Looker platform. Filter and visualization components enable you to more easily create the visual side of these data experiences.

Augmented analytics help make it easier to handle the scale and complexity of data in modern business, and to make smarter decisions about probable future outcomes. Augmented analytics generate sophisticated analyses by integrating machine learning (ML) and artificial intelligence (AI) with data. The Looker team has worked to make augmented analytics more accessible to everyone this year. In particular, new Blocks give you access to ML insights through the familiar Looker interface, enabling you to more quickly prototype ML- and AI-driven solutions. For instance, the Time-series Forecasting Block (which uses BigQuery ML) can be installed to give analysts deeper insights into future demand for better inventory and supply chain management. And CCAI Insights gives call centers access to Contact Center AI Insights data with analysis they can use immediately.
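Under the hood, the Time-series Forecasting Block builds on BigQuery ML’s time-series models. As a rough sketch of what that pattern looks like outside the Block, here is a Python example that trains an ARIMA_PLUS model and queries a 30-day forecast; the dataset, table, and column names are hypothetical placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Train a time-series model on daily sales history. The "demo" dataset and
# its columns are illustrative placeholders.
client.query("""
    CREATE OR REPLACE MODEL demo.demand_forecast
    OPTIONS (
      model_type = 'ARIMA_PLUS',
      time_series_timestamp_col = 'order_date',
      time_series_data_col = 'units_sold'
    ) AS
    SELECT order_date, units_sold
    FROM demo.daily_sales
""").result()  # block until training finishes

# Forecast the next 30 days with an 80% prediction interval.
rows = client.query("""
    SELECT forecast_timestamp, forecast_value,
           prediction_interval_lower_bound, prediction_interval_upper_bound
    FROM ML.FORECAST(MODEL demo.demand_forecast,
                     STRUCT(30 AS horizon, 0.8 AS confidence_level))
""").result()

for row in rows:
    print(row.forecast_timestamp, row.forecast_value)
```

The Block wraps this kind of query behind a LookML interface, so analysts can explore the forecast alongside actuals without writing SQL themselves.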
“The Looker Difference”

Product Managers Ani Jain and Tej Toor highlighted many recent features you might find useful for activating and enabling users with Looker. Two moments stood out:

Giving your teams better starting points can lead to more engagement with analytics. Two improved ways to find insights from this year: Quick Starts and board improvements. Quick Starts function as pre-built Explore pages that your users can open with a click, helping make ad hoc analysis more accessible and less intimidating. They’re also a convenient way to save an analysis you find yourself doing frequently, and they even save your filter settings. And with new navigation improvements in Looker, boards are easier to find and use. Now you can pin links to a board, whether it’s a dashboard, a Look, an Explore, or something else, including external links. So go ahead: try your hand at creating a useful data hub for your team with a new board.

Natural language processing and Looker can help you quickly make sense of relationships within data. A great example of this is the Healthcare NLP API Block, which creates an interactive user interface where healthcare providers, payers, pharma companies, and others in the healthcare industry can more easily access intelligent insights. Under the hood, this Block works on top of the GCP Healthcare NLP API, which offers pre-trained natural language models to extract medical concepts and relationships from medical text. The NLP API helps structure the data, and the Looker Block makes the insights within that data more accessible.

“Building and Monetizing Custom Data Experiences with Looker”

Pedro Arellano, Product Director at Looker, and Jawad Laraqui, CEO of Boston-based consultancy Data Driven, chatted about embedded analytics, the remarkable speed with which one can build data applications with Looker, and monetization strategies. Points you don’t want to miss:

Looker can help you augment an existing customer experience and create a new revenue stream with embedded data. For example, you can provide personalized insights to a customer engaged with your product, or automate business processes, such as using data to trigger a service-order workflow when an issue is encountered with a particular product. Embedding data in these ways can make the customer experience smoother all around. To take it a step further, you can monetize a data product you build to create a new revenue stream.

Building for the Looker Marketplace can help you find more customers for your app and can promote a better user experience. Jawad compared building for the Looker Marketplace with the extension framework to having an app in the Apple App Store. Being in the Marketplace is a way for customers to find and use his product organically, and it helps give end users a streamlined experience. He said: “We were able to quickly copy and paste our whole application from a stand-alone website into something that is inside of Looker. And we did this quickly—one developer did this in one day. It’s a lot easier than you think, so I encourage everyone to give it a try. Just go build!”

“Looker for Modelers: What’s New and What’s Next”

Adam Wilson, Product Manager at Looker, covered the latest upgrades and future plans for Looker’s semantic data model.
This layer sits atop multiple sources of data and standardizes common metrics and definitions, so governed data can feed modern built-in business intelligence (BI) interactive dashboards, connect into familiar tools such as Google Sheets, and reach the other BI tools where users work. We’re calling this the unified semantic model. Capabilities to look out for:

Take advantage of Persistent Derived Table (PDT) upgrades that improve the end-user experience. You can use incremental PDTs to capture data updates without rebuilding the whole table, meaning your users get fresh data more regularly with a lower load on your data warehouse. It’s now also possible to validate PDT build status in development mode, giving you the visibility needed to determine when to push updates to production. Coming soon, you’ll be able to do an impact analysis on proposed changes, with visualized dependencies between PDTs.

Reach users where they are with Connected Sheets and other BI tools. Coming soon, you’ll be able to explore Looker data in Google Sheets and share charts to Slides, too. And with Governed BI Connectors, Looker can act as a source of truth for users who are accustomed to interacting with data in Tableau, Power BI, and Google Data Studio. You can sign up to hear when the Connected Sheets and Looker integration is available, or separately to hear about preview availability for Governed BI Connectors.

Hackathon

Speaking of interesting new developments, here’s your fun surprise: a hackathon recap with a new chart you can use in your own analytics. The Looker developer community came together to create innovative Looker projects at this year’s JOIN hackathon, Hack@Home 2021. The event provided participants with access to the latest Looker features and resources to create tools useful for all Looker developers. The Nearly Best Hack winner demonstrated how easy it is to make custom visualizations by creating an animated bar chart race visualization that anyone can use. The Best Hack winner showcased the power of the Looker extension framework with a Looker application that conveniently writes CSV data into Looker database connections.

You can still view all the keynotes, as well as the breakout sessions and learning deep dives, on demand on the JOIN@Home content hub. These will be available through the end of the month, so go soak up the learning while you can.
Source: Google Cloud Platform

Want multi-cluster Kubernetes without all the cost and overhead? Here’s how

Editor’s note: Today, we hear from Mengliao Wang, Senior Software Developer and Team Lead at Geotab, a leading provider of fleet management hardware and software solutions. Read on to hear how the company is expanding its adoption of Google Cloud to deliver new services for its customers by leveraging Google Kubernetes Engine (GKE) multi-cluster features.

Geotab’s customers ask a lot of our platform: They use it to gain insights from vast amounts of telemetry data collected from their fleet vehicles. They rely on it to adhere to strict data privacy requirements. And, because our customers are located all over the world, they need the platform to address data residency and other jurisdictional processing requirements, which require compute and storage to live within a specific geographic region. Meanwhile, as a managed service provider, we need a cost-efficient business model — that was certainly a driving factor in adopting containers and GKE. As we started architecting the deployment of multiple clusters to support our customers’ data residency requirements, we determined we also needed to explore approaches to reduce the total operational maintenance and cost of our multi-cluster environment.

To meet customers where they are, we moved forward with running GKE clusters in multiple Google Cloud regions. At the same time, we recently began using GKE multi-cluster services, which provides our customers with the security and low latency they need while giving us cost savings and an easy-to-maintain solution. Read on to learn more about Geotab, our journey to Google Cloud and GKE, and, more recently, how we deployed multi-cluster Kubernetes using GKE multi-cluster services.

The rise of connected fleet vehicles

“By 2024, 82% of all manufactured vehicles will be equipped with embedded telematics.” —Berg Insight

As a global leader in IoT and connected transportation, Geotab is advancing security, connecting commercial vehicles to the internet, and providing web-based analytics to help customers better manage their fleet vehicles. With over 2.5 million connected vehicles generating billions of data points per day, we leverage data analytics and machine learning to support our customers in several ways. We help them improve productivity, optimize fleets by reducing fuel consumption, enhance driver safety, achieve strong compliance with regulatory changes, and meet sustainability goals. Geotab partners with Original Equipment Manufacturers (OEMs) to help expand customers’ fleet management capabilities through access to the Geotab platform.

Our journey to Google Cloud and GKE

We originally chose Google Cloud as our primary cloud provider because we found it to be the most stable of the cloud providers we tried, with the least unscheduled downtime. End-to-end reliability is an important consideration for our customers’ safety and their confidence in Geotab’s driver-assistance features. Since getting started on our public cloud journey, we’ve leveraged Google Cloud to modernize different aspects of the Geotab platform. First, we embarked on a multi-milestone, multi-year initiative to modernize the Geotab Data Platform, adopting a container-based architecture built on open source technologies; we continue to leverage Google Cloud services to launch innovative solutions that combine analytics and access to massive data volumes for better transportation planning decisions.
Today, the Geotab Data Platform is built entirely on GKE, with multiple services such as data ingestion, data digestion, data processing, monitoring and alerting, a management console, and several applications. We are now leveraging this modern platform to introduce new Geotab services to our customers.

Exploring multi-cluster Kubernetes

As discussed above, we recently began deploying our GKE clusters into multiple regions to meet our customers’ performance and data residency requirements. However, not every service that makes up the Geotab platform is created equal. Data ingestion and data digestion services, for example, are at the core of the data platform. Data digestion services are Application Programming Interfaces (APIs), machine learning models, and business intelligence (BI) tools that consume data from the data environment for various data analysis purposes and are served directly to customers. Data ingestion services ingest billions of telematics data records per day from Geotab GO devices and are responsible for persisting them into our data environment.

But when looking at optimizing operating costs, we identified several services outside of the data platform that do not process sensitive customer information — our monitoring and alerting services are examples. Duplicating these services in multiple regions would drive up infrastructure costs and add maintenance complexity and overhead. We therefore decided to deploy the services that do not process any customer data as shared services in a dedicated cluster. Not only does this lower resource costs, but it also makes the environment easier to manage operationally. However, this approach introduced two new challenges:

- Services such as data ingestion and data digestion that run in each jurisdiction needed to expose their metrics outside of their cluster to make them available to the shared services (monitoring and alerting, management console) running on the shared cluster, raising security concerns.
- Since metrics would not be passing within a cluster subnetwork, they would travel over the public network, resulting in higher latency as well as additional security concerns.

This is where GKE Multi-cluster Services (MCS) came in, solving both concerns without introducing any new architectural components for us to configure and maintain. MCS is a cross-cluster service discovery and invocation mechanism built into GKE. It extends the capabilities of the standard Kubernetes Service object: Services configured to be exported with MCS are discoverable and accessible across all clusters within a fleet via a virtual IP address, matching the behavior of a ClusterIP Service within a single cluster. With MCS, we do not need to expose public endpoints, and all traffic is routed within the Google network. With MCS configured, we get the best of both worlds: services on the shared cluster and the regionally hosted clusters communicate as if they were all hosted in one cluster (see the sketch below). Problem solved!

Reflecting on the journey

Our modernization journey on Google Cloud continues to pay dividends. During the first phase of our journey, we reaped the benefits of being able to scale up our systems with less downtime. With GKE features like MCS, we are able to reduce the time required to roll out new features to our global customers while addressing our business objective of managing operating costs.
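To make the mechanics concrete, here is a hedged sketch of what exporting a service with MCS can look like, using the official Kubernetes Python client. The namespace and service names are hypothetical, and it assumes MCS is already enabled on the fleet; a ServiceExport must share the name and namespace of the Service it exports.

```python
from kubernetes import client, config

# Hypothetical names for a shared monitoring endpoint.
NAMESPACE = "monitoring"
SERVICE = "metrics-collector"

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# ServiceExport is a GKE-provided custom resource (net.gke.io/v1), so it is
# created through the custom objects API rather than the core Service API.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="net.gke.io",
    version="v1",
    namespace=NAMESPACE,
    plural="serviceexports",
    body={
        "apiVersion": "net.gke.io/v1",
        "kind": "ServiceExport",
        "metadata": {"name": SERVICE, "namespace": NAMESPACE},
    },
)

# Once the export syncs, pods in other fleet clusters can reach the service
# at its fleet-wide DNS name:
#   metrics-collector.monitoring.svc.clusterset.local
```

In practice the same manifest is usually applied declaratively with kubectl; either way, the effect is that the service becomes resolvable across the fleet without exposing any public endpoint.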
We look forward to continuing on our multi-cluster journey with Google Cloud and GKE. Are you interested in learning more about how GKE multi-cluster services can help with your Kubernetes multi-cluster challenges? Check out this guide to configuring multi-cluster services, or reach out to a Google Cloud expert — we’re eager to help!
Source: Google Cloud Platform

Are you a multicloud engineer yet? The case for building skills on more than one cloud

Over the past few months, I made the choice to move from the AWS ecosystem to Google Cloud — both great clouds! — and I think it’s made me a stronger, more well-rounded technologist. But I’m just one data point in a big trend. Multicloud is an inevitability in medium-to-large organizations at this point, as I and others have been saying for a while now. As IT footprints get more complex, you should expect to see a broader range of cloud provider requirements showing up where you work and interview. Ready or not, multicloud is happening. In fact, HashiCorp’s recent State of Cloud Strategy Survey found that 76% of employers are already using multiple clouds in some fashion, with more than 50% flagging a lack of skills among their employees as a top challenge to surviving in the cloud.

That spells opportunity for you as an engineer. But with limited time and bandwidth, where do you place your bets to ensure that you’re staying competitive in this ever-cloudier world? You could pick one cloud to get good at and stick with it; that’s a perfectly valid career bet. (And if you do bet your career on one cloud, you should totally pick Google Cloud! I have reasons!) But in this post I’m arguing that expanding your scope of professional fluency to at least two of the three major US cloud providers (Google Cloud, AWS, Microsoft Azure) opens up some unique, future-optimized career options.

What do I mean by ‘multicloud fluency’?

For the sake of this discussion, I’m defining “multicloud fluency” as a level of familiarity with each cloud that would enable you to, say, pass the flagship professional-level certification offered by that cloud provider — for example, Google Cloud’s Professional Cloud Architect certification or AWS’s Certified Solutions Architect Professional. Notably, I am not saying that multicloud fluency implies experience maintaining production workloads on more than one cloud, and I’ll clarify why in a minute.

How does multicloud fluency make you a better cloud engineer?

I asked the cloud community on Twitter to give me some examples of how knowledge of multiple clouds has helped their careers, and dozens of engineers responded with a great discussion. It turns out that even if you never incorporate services from multiple clouds in the same project — and many people don’t! — there’s still value in understanding how the other cloud lives.

Learning the lingua franca of cloud

I like the framing of the different cloud providers as “Romance languages”: as with human languages in the same family tree, clouds share many of the same conceptual building blocks. Adults learn primarily by analogy to things we’ve already encountered. Just as learning one programming language makes it easier to learn more, learning one cloud reduces your ramp-up time on others. More than just helping you absorb new information faster, understanding the strengths and tradeoffs of different cloud providers can help you make the best choice of services and architectures for new projects. I remember struggling with this at times when I worked for a consulting shop that focused exclusively on AWS. A client would ask, “What if we did this on Azure?” and I really didn’t have the context to be sure. But if you have a solid foundational understanding of the landscape across the major providers, you can feel confident — and inspire confidence! — in your technical choices.

Becoming a unicorn

To be clear, this level of awareness isn’t common among engineering talent.
That’s why people with multicloud chops are often considered “unicorns” in the hiring market. Want to stand out in 2022? Show that you’re conversant in more than just one cloud. At the very least, it expands the market for your skills to include companies that focus on each of the clouds you know.Taking that idea to its extreme, some of the biggest advocates for the value of a multicloud resumé are consultants, which makes sense given that they often work on different clouds depending on the client project of the week. Lynn Langit, an independent consultant and one of the cloud technologists I most respect, estimates that she spends about 40% of her consulting time on Google Cloud, 40% on AWS, and 20% on Azure. Fluency across providers lets her select the engagements that are most interesting to her and allows her to recommend the technology that provides the greatest value.But don’t get me wrong: multicloud skills can also be great for your career progression if you work on an in-house engineering team. As companies’ cloud posture becomes more complex, they need technical leaders and decision-makers who comprehend their full cloud footprint. Want to become a principal engineer or engineering manager at a mid-to-large-sized enterprise or growing startup? Those roles require an organization-wide understanding of your technology landscape, and that’s probably going to include services from more than one cloud. How to multicloud-ify your careerWe’ve established that some familiarity with multiple clouds expands your career options. But learning one cloud can seem daunting enough, especially if it’s not part of your current day job. How do you chart a multicloud career path that doesn’t end with you spreading yourself too thin to be effective at anything?Get good at the core conceptsYes, all the clouds are different. But they share many of the same basic approaches to IAM, virtual networking, high availability, and more. These are portable fundamentals that you can move between clouds as needed. If you’re new to cloud, an associate-level solutions architect certification will help you cover the basics. Make sure to do hands-on labs to help make the concepts real, though — we learn much more by doing than by reading.Go deep on your primary cloudFundamentals aside, it’s really important that you have a native level of fluency in one cloud provider. You may have the opportunity to pick up multicloud skills on the job, but to get a cloud engineering role you’re almost certainly going to need to show significant expertise on a specific cloud.Note: If you’re brand new to cloud and not sure which provider to start with, my biased (but informed) recommendation is to give Google Cloud a try. It has a free tier that won’t bill you until you give permission, and the nifty project structure makes it really easy to spin up and tear down different test environments.It’s worth noting that engineering teams specialize, too; everybody has loose ends, but they’ll often try to standardize on one cloud provider as much as they can. If you work on such a team, take advantage of the opportunity to get as much hands-on experience with their preferred cloud as possible.Go broad on your secondary cloudYou may have heard of the concept of T-shaped skills. A well-rounded developer is broadly familiar with a range of relevant technologies (the horizontal part of the “T”), and an expert in a deep, specific niche. You can think of your skills on your primary cloud provider as the deep part of your “T”. 
(Actually, let’s be real — even a single cloud has too many services for any one person to hold in their heads at an expert level. Your niche is likely to be a subset of your primary cloud’s services: say, security or data.)We could put this a different way: build on your primary cloud, get certified on your secondary. This gives you hirable expertise on your “native” cloud and situational awareness of the rest of the market. As opportunities come up to build on that secondary cloud, you’ll be ready.I should add that several people have emphasized to me that they sense diminishing returns when keeping up with more than one secondary cloud. At some point the cognitive switching gets overwhelming and the additional learning doesn’t add much value. Perhaps the sweet spot looks like this: 1< 2 > 3.Bet on cloud-native services and multicloud toolingThe whole point of building on the cloud is to take advantage of what the cloud does best — and usually that means leveraging powerful, native managed services like Spanner and Vertex AI. On the other hand, the cloud ecosystem has now matured to the point where fantastic, open-source multicloud management tooling for wrangling those provider-specific services is readily available. (Doing containers on cloud? Probably using Kubernetes! Looking for a DevOps role? The team is probably looking for Terraform expertise no matter what cloud they major on.) By investing learning time in some of these cross-cloud tools, you open even more doors to build interesting things with the team of your choice.Multicloud and youWhen I moved into the Google Cloud world after years of being an AWS Hero, I made sure to follow a new set of Google Cloud voices like Stephanie Wong and Richard Seroter. But I didn’t ghost my AWS-using friends, either! I’m a better technologist (and a better community member) when I keep up with both ecosystems. “But I can hardly keep up with the firehose of features and updates coming from Cloud A. How will I be able to add in Cloud B?” Accept that you can’t know everything. Nobody does. Use your broad knowledge of cloud fundamentals as an index, read the docs frequently for services that you use a lot, and keep your awareness of your secondary cloud fresh:Follow a few trusted voices who can help you filter the signal from the noiseAttend a virtual event once a quarter or so; it’s never been easier to access live learningBuild a weekend side project that puts your skills into practiceUltimately, you (not your team or their technology choices!) are responsible for the trajectory of your career. If this post has raised career questions that I can help answer, please feel free to hit me up on Twitter. Let’s continue the conversation.Related ArticleFive do’s and don’ts of multicloud, according to the expertsWe talked with experts about why to do multicloud, and how to do it right. Here is what we learned.Read Article
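As promised above, here is a minimal sketch of the "shared building blocks" idea: the same object-storage upload expressed in two clouds' dialects. This assumes the official google-cloud-storage and boto3 client libraries are installed and authenticated; the bucket and file names are hypothetical.

```python
# Same concept (object storage), same operation (upload a file),
# two dialects (Google Cloud Storage and Amazon S3).

from google.cloud import storage  # pip install google-cloud-storage
import boto3                      # pip install boto3


def upload_to_gcs(bucket_name: str, key: str, path: str) -> None:
    """Upload a local file to a Google Cloud Storage bucket."""
    client = storage.Client()  # uses Application Default Credentials
    bucket = client.bucket(bucket_name)
    bucket.blob(key).upload_from_filename(path)


def upload_to_s3(bucket_name: str, key: str, path: str) -> None:
    """Upload the same local file to an Amazon S3 bucket."""
    s3 = boto3.client("s3")  # uses the standard AWS credential chain
    s3.upload_file(path, bucket_name, key)


if __name__ == "__main__":
    # Hypothetical names, purely for illustration.
    upload_to_gcs("my-gcs-bucket", "reports/q1.csv", "q1.csv")
    upload_to_s3("my-s3-bucket", "reports/q1.csv", "q1.csv")
```

Once you have internalized one provider's version of a concept like this, learning the other is mostly a matter of vocabulary.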
Source: Google Cloud Platform

Five do’s and don’ts CSPs should know about going cloud-native

As communication service providers (CSPs) continue to focus on capitalizing on the promise of 5G, I've been having more conversations with operators and network equipment providers about why and how it may make sense to adopt cloud-native approaches. More specifically, we often discuss best practices for accelerating the time to value of 5G and simplifying the deployment and management of networks, as well as the applications deployed on top of them. In fact, Gartner predicts that by 2025, cloud-native platforms will serve as the foundation for more than 95% of new digital initiatives, up from less than 40% in 2021.

Ankur Jain, Head of Engineering for Google Cloud for Telecommunications, and I recently sat down with telecommunications industry experts, including Jitin Bhandari, CTO of Cloud and Network Services at Nokia, and Dr. Lester Thomas, Head of New Technologies and Innovation at Vodafone, to discuss how CSPs can best take advantage of cloud-native. If you are on the path to "cloudify" your networks for 5G and beyond, here's your chance to learn from some folks who have done it and are doing it now. Below are five key takeaways from these conversations.

Do: Leverage cloud-native approaches to simplify networks

5G promises value creation for CSPs and enterprises, but first we must think about simplifying those networks. As Jitin Bhandari from Nokia puts it in his conversation with Google's Ankur Jain, "Over the last two decades…we've built layers of complexity in our telco networks. You see these waves of 2G, 3G, and now we're building 5G – they are still reminiscent of fixed networks." Moving to a cloud-native approach could help CSPs break that cycle and simplify telecommunications networks: how the networks are constructed and how they are operated. The good news is that leading with a cloud-native approach is how Google has always built its services and networks, and it is a key reason operators and partners like Nokia are collaborating with us to drive value in the 5G era.

Don't: Just take legacy operational processes with you to the cloud

There is absolutely a world of difference between migrating to the cloud and adopting a cloud-native approach. According to Dr. Thomas from Vodafone, "When you think about migrating existing systems and applications to the cloud, the tendency is to take your operational processes with [you]. You can almost question what real business benefit does that give you?" On the other hand, moving to a cloud-service, consumption-based model and leveraging cloud-native solutions like Kubernetes, microservices, and open APIs forces one to rethink how applications are built, and decouples one from the legacy operations that hindered the speed of innovation in the past. Another takeaway from Dr. Thomas: "our cloud-native approach is typically coupled with agile delivery methods, DevOps so you can take reliability engineering as the way in which you operate and [be] much more focused on data and open APIs."

Do: Recognize that operators will continue to own and control their networks

Whenever I chat with our CSP customers about whether the cloud, and Google Cloud specifically, is carrier-grade and telco-ready, the topics of security, privacy, and control often come up. First, I think it's important to make clear that operators will continue to control, own, and manage their networks, as well as the data running on those networks. You can think of Google Cloud as supplying the enabling technologies. Second, protecting the trust and security of Google Cloud is a key priority for us. As such, we publish and adhere to a set of trust principles that govern our approach to security. In addition, by working with Google Cloud, CSPs can take advantage of the same secure-by-design infrastructure and investments that Google makes to ensure the nine applications and services we offer, each supporting over 1 billion users around the world, run quickly, reliably, and securely.

Do: Build scale and simplicity into your data platform to unlock a whole world of use cases

While we are on the subject of control, Dr. Thomas also shared some key insights from an operator's point of view on building scale and simplicity into one's data platform. "Getting the data governance right is a critical part," said Dr. Thomas. "We have all these different use cases that we use the data to drive business insights and value. The underlying data for it is in a common database. So as we do each use case, we will bring in new data into our data ocean, but they'll be standardized and normalized." In addition to data consolidation and normalization, it is essential to set standards for data quality, ownership, lifecycle management, interoperability, and exchange. With that established, you can really focus on delivering business value with 5G, IoT, and even network optimization use cases, most of which are data- and analytics-driven. For example, Dr. Thomas talked about using data and automation to help detect network anomalies, and that's only the beginning. "The anomaly detection use case that we've done so far is about analyzing the root cause of what's going on in the network. We see that as the very first part of an autonomous network."

Don't: Fall into the habit of architecting separate infrastructure for virtualized and containerized workloads

Though many software vendors are well on their way to migrating to cloud-native containerized workloads, virtualized workloads still exist and will continue to exist for some time. At the same time, the status quo of deploying separate infrastructure for virtualized and containerized workloads creates unnecessary complexity, limits scale, creates unmanageable silos, and is, quite frankly, unsustainable. The solution is a single set of managed infrastructure with Kubernetes running on top and the ability to seamlessly place and orchestrate CNFs, VNFs, and even edge applications. Google Distributed Cloud was built with the necessary capabilities to enable this CNF and VM coexistence, so that infrastructure silos can become a thing of the past.

We are learning a lot from our friends in the telecommunications industry as more operators consider a more modern approach to building, operating, and maintaining future generations of networks and services. At Google Cloud, we understand that these are tasks not to be done alone or in silos. They will be done in partnership across the ecosystem, where we bring our strength in cloud-native, data, and AI/ML solutions, combined with the telecommunications-specific expertise and technologies provided by our partners. Catch all the conversations with our experts in our Cloudification of Telecom Networks video series.
Source: Google Cloud Platform

Raising the bar in Security Operations: Google Acquires Siemplify

At Google Cloud, we are committed to advancing invisible security and democratizing security operations for every organization. Today, we're proud to share the next step in this journey with the acquisition of Siemplify, a leading security orchestration, automation and response (SOAR) provider. Siemplify shares our vision in this space and will join Google Cloud's security team to help companies better manage their threat response.

In a time when cyberattacks are rapidly growing in both frequency and sophistication, there's never been a better moment to bring these two companies together. We both share the belief that security analysts need to be able to solve more incidents, of greater complexity, with less effort and less specialized knowledge. With Siemplify, we will change the rules on how organizations hunt, detect, and respond to threats.

Providing a proven SOAR capability, unified with Chronicle's innovative approach to security analytics, is an important step forward in our vision. Building an intuitive, efficient security operations workflow around planet-scale security telemetry will further realize Google Cloud's vision of a modern threat management stack that empowers customers to go beyond typical security information and event management (SIEM) and extended detection and response (XDR) tooling, enabling better detection and response at the speed and scale of modern environments.

"We're excited to join Google Cloud and build on the success we've had in the market helping companies address growing security threats," said Amos Stern, CEO at Siemplify. "Together with Chronicle's rich security analytics and threat intelligence, we can truly help security professionals transform the security operations center to defend against today's threats."

The Siemplify platform is an intuitive workbench that enables security teams to both manage risk better and reduce the cost of addressing threats. Siemplify allows security operations center (SOC) analysts to manage their operations from end to end, respond to cyber threats with speed and precision, and get smarter with every analyst interaction. The technology also helps improve SOC performance by reducing caseloads, raising analyst productivity, and creating better visibility across workflows.

We plan to invest in SOAR capabilities with Siemplify's cloud services as our foundation and the team's talent leading the way. Our intention is to integrate Siemplify's capabilities into Chronicle in ways that help enterprises modernize and automate their security operations. We're looking forward to welcoming the Siemplify team to Google Cloud and working with them to help security operations teams accomplish even more in defense of their organizations. You can read Siemplify CEO Amos Stern's blog for more on this exciting news.
Source: Google Cloud Platform

Apigee 2021: A year of innovations

Apigee is committed to continually innovating new capabilities and solutions for our customers, and 2021 saw new product launches, partnerships, and best practices for managing an expanding range of business-critical use cases. Here are some of our favorite stories from 2021.

Spotting the trends in APIs

Our State of API Economy 2021 report surveyed over 700 IT leaders globally and identified five key API trends that emerged post-COVID. SaaS and hybrid-cloud API deployments are increasing, with half of all respondents reporting growth in these areas, and AI- and ML-powered API management is also gaining traction, with usage growing 230% year over year among Apigee customers. Business metrics like Net Promoter Score (NPS) and speed to market are API users' preferred ways to measure success, and API ecosystems are increasingly innovation drivers, with high-maturity organizations much more likely to focus on building a developer ecosystem or B2B partner ecosystem around their APIs. Finally, API security and governance matter more than ever, as the research showed that increased investment in security and governance is a high priority. Check out the blog to explore these five trends in more detail.

Launching new capabilities with Apigee X

We announced Apigee X, our next-generation platform that brings the powerful scale of Google technologies to Apigee API management and allows enterprises to run API programs with enhanced scale, security, and automation. Apigee X customers can harness the capabilities of Cloud CDN to maximize the availability and performance of APIs across the globe, deploying across more than two dozen Google Cloud regions and caching at over 100 locations. Apigee X customers can also apply solutions like the Cloud Armor web application firewall for enhanced API security and Cloud Identity and Access Management (IAM) for authenticating and authorizing access to the Apigee platform. Apigee X further enhances automation by applying Google Cloud's AI and ML capabilities to historical API metadata to detect anomalies, predict traffic, and ensure compliance. To read more about these features, check out our blogs on Apigee X and Cloud Armor, Apigee X and Cloud CDN, and Apigee X and AI.

Making new connections with Apigee Integration

Apigee brought our successful API-first approach to integration this year with the release of Apigee Integration. This silo-busting solution lets customers connect existing data and applications and surface them as easily accessible APIs. Apigee Integration brings together the best of API management and integration in one unified platform, so IT teams can scale their operations, improve developer productivity, and increase speed to market. The platform comes with built-in connectors to Salesforce, Cloud SQL (MySQL, PostgreSQL), Cloud Pub/Sub, and BigQuery, with connectors for additional third-party applications and databases on the way. Advanced integration patterns also serve our customers with even more use cases. Check out our launch blog and our Next session video for more details.

Managing GraphQL APIs with Apigee

The exponential rise in digital services adoption among enterprises now generates petabytes of data every minute. You can harness the power of this data with query languages like GraphQL, fetching exactly the data your app needs with a single request. The growing popularity of GraphQL APIs and their business-critical use cases mean it's important to manage them with full life cycle capabilities, much like you manage your REST APIs.
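To make that "single request" point concrete, here is a minimal sketch using Python's requests library. The endpoint URL, query, and customer/orders schema are all hypothetical illustrations of the pattern, not Apigee or Google Cloud APIs.

```python
# One GraphQL request fetches a customer and their recent orders,
# replacing what might otherwise be separate REST calls to
# /customers/{id} and /customers/{id}/orders.

import requests  # pip install requests

GRAPHQL_ENDPOINT = "https://api.example.com/graphql"  # hypothetical

QUERY = """
query CustomerWithOrders($id: ID!) {
  customer(id: $id) {
    name
    email
    orders(last: 5) {
      id
      total
      status
    }
  }
}
"""


def fetch_customer_with_orders(customer_id: str) -> dict:
    """Fetch a customer and their recent orders in a single round-trip."""
    response = requests.post(
        GRAPHQL_ENDPOINT,
        json={"query": QUERY, "variables": {"id": customer_id}},
    )
    response.raise_for_status()
    return response.json()["data"]


if __name__ == "__main__":
    print(fetch_customer_with_orders("42"))
```

Because the client names exactly the fields it wants, the response carries no more and no less data than the app needs, which is a big part of GraphQL's appeal for mobile and partner-facing use cases.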
Last year we compared REST and GraphQL and introduced best practices for managing GraphQL APIs. You can also read our blog announcing Apigee's support for the management of GraphQL APIs, and our partnership with StepZen to deliver these capabilities. To dive deeper into building GraphQL APIs, check out our Next session video.

Looking back at the year's top stories

2021 was the year of the customer, and we published the following stories to show how API management helps enterprises modernize their applications, build digital ecosystems, and generate value for their customers and their own organizations:

Telco:
- Leveraging APIs to create value for telco ecosystems
- How the MTN Group is expanding its business with APIs

Retail:
- Conversational AI with Apigee API Management for enhancing customer experiences
- Seven-Eleven Japan uses Google Cloud to serve up real-time data for fast business decisions

Financial services:
- Arab Bank: Accelerating application innovation with Anthos and Apigee
- Military Bank turns to Apigee to manage its API strategy and deliver digital transformation

Energy:
- How Veolia's API-first approach is powering sustainable resource management
- GRTgaz: Calling on Apigee to help internal and external partners access intelligence securely

That's a wrap for 2021! We hope you have a safe and happy holiday season, and we can't wait to see what the new year brings. Stay tuned in 2022 for product launch announcements, partnerships, tips, and stories of how organizations like yours are innovating with Apigee.
Source: Google Cloud Platform