Accelerating SAP CPG enterprises with Google Cloud Cortex Framework

In a rapidly changing Consumer Packaged Goods (CPG) industry, business agility and digital innovation are essential to achieving business outcomes. Cost pressures, rising demand, and new consumer expectations have created whiplash for CPG companies, which need to accelerate digital transformation to remain competitive. Google Cloud has a tradition of delivering solutions specifically designed to help CPGs deliver on these imperatives. Amid all of these industry inflection points, CPG companies face an additional challenge: making sense of huge volumes of data, typically siloed within multiple disparate sources inside and outside the organization. They need to tie this data together and enrich it with outside signals to improve the insights and forecasts that help them meet market demands and disruptions more efficiently.

To address these challenges and opportunities, CPG enterprises can leverage Google Cloud Cortex Framework, which provides reference architectures, packaged solution content, and deployment accelerators to help organizations kickstart insights and reduce time to value with Google’s Data Cloud. CPG solutions built on top of Cortex Data Foundation combine SAP data with other key data sets, such as trends and weather, to solve common challenges across the supply chain. Today, we are announcing new, predefined analytics content and extending our partner solution offerings for CPG organizations, building on our recent content releases.

Demand Sensing – Gain a clearer picture of demand

CPG organizations rely on historical sales and other data to predict demand – weeks, months, and even years into the future. A key challenge, however, is detecting and responding quickly to unexpected changes and shifts in the market during production and delivery schedules. An accurate demand plan is essential for reducing business costs and maximizing profitability, but what happens if weather suddenly changes, consumer trends vary, or marketing efforts create demand spikes or dips? Current models do not reflect these new signals, even as they materially impact the demand plan. Identifying near-term changes in demand is mission critical to better match demand with supply. Unlocking greater business opportunities starts with combining data and understanding diverse signals beyond historical sales.

Google Cloud Cortex Demand Sensing helps CPG companies better understand and shape demand by consolidating data from SAP ERP, Google, and third parties, enabling demand planners to make proactive business decisions based on the latest market signals. With Cortex Demand Sensing, businesses can get started quickly with a modern cloud-based solution that includes the best of Google’s data cloud services, like BigQuery, Vertex AI, and visualization capabilities with Looker. Our new solution accelerator content helps CPGs detect who the customer is, what product attributes are driving sales, and what factors may impact demand. The solution includes sample data, predefined analytics and machine learning models, and Looker dashboarding templates to help deliver insights and surface impact alerts to demand planners faster.
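As an illustration of the kind of data blending involved, here is a minimal sketch that joins SAP sales history with an external weather signal in BigQuery. All project, dataset, table, and column names are hypothetical placeholders, not the actual Cortex Data Foundation schema:

```python
# A minimal sketch: join SAP sales history with an external weather signal in
# BigQuery. Every name below is a hypothetical placeholder, not the actual
# Cortex Data Foundation schema.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT
  s.material_id,
  s.calendar_week,
  SUM(s.ordered_qty) AS weekly_demand,
  AVG(w.temp_anomaly_c) AS avg_temp_anomaly
FROM `my-project.sap_reporting.weekly_sales` AS s
JOIN `my-project.external_signals.weather_weekly` AS w
  ON s.region = w.region AND s.calendar_week = w.calendar_week
GROUP BY s.material_id, s.calendar_week
ORDER BY s.calendar_week
"""

for row in client.query(query).result():
    print(row.material_id, row.calendar_week, row.weekly_demand, row.avg_temp_anomaly)
```

The output of a query like this could then feed a forecasting model in Vertex AI or a Looker dashboard, which is the overall pattern the accelerator content packages up.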
By bringing together SAP and non-SAP data sets, the solution gives organizations a more holistic view across demand drivers, helping them be more nimble. By augmenting demand planning with additional demand signals and contextual data, like weather anomalies and search trends, CPG companies can improve visibility into the factors that influence forecasting, lower inventory holding costs, and drive greater sales thanks to improved near-term demand alignment and management.

At Google Cloud, we understand the challenges of forecasting demand in today’s landscape and are developing products and solutions that empower CPG companies around the world to quickly gain insights and improve accuracy. We have options for companies of all sizes, whether you have in-house talent that can build upon our existing architectures and accelerators or are interested in leveraging something more “out of the box” from our partner ecosystem.

A growing partner ecosystem of solution offerings

One of the most exciting aspects of Google Cloud Cortex Framework is that it can also accelerate the onramp of data to advanced analytics and AI solutions, further accelerating business outcomes that directly deliver top-line and bottom-line financial impact for customers. We are delighted to recognize several leading partners in this space:

C3 AI: A comprehensive suite of enterprise AI applications that works with Google Cloud’s leading AI tools, frameworks, industry solutions, and services. Example use cases include:

- Inventory Optimization: Applies advanced AI, machine learning, and optimization techniques to enable companies to minimize inventory levels of parts, raw materials, and finished goods while maintaining confidence that they will have sufficient inventory available to meet customer service level agreements.
- Supply Network Risk: Identify and mitigate current and future disruptions across the whole supply chain, including inbound supplier delays, order delivery delays, and manufacturing bottlenecks.
- AI Demand Forecasting: Leverage advanced AI techniques to help generate and maintain the most accurate forecasts and demand plans to maximize sales while minimizing costs.

Palantir Foundry: A leading platform for data-driven operations and decision making. Example use cases include:

- Assortment Recommendation Engine: Optimize product assortment planning via a recommendation engine, incorporating planogram performance and sales metrics to specify product placements.
- Product Fulfillment Optimization: Create a single source of truth for demand forecasting and logistics teams to collaborate, ensuring products are manufactured and delivered to the right locations at the right time.
- Inventory Management & Out-of-Stock Prevention: Predict out-of-stock events and resolve them through dynamic recommendations. Simulate supply chain trade-off decisions, and proactively reroute based on dynamic demand.
- 360 Visibility into Key Assets: Virtualize your entire value network with 360-degree visibility into the most valuable assets in your business, including customers, stores, products, and more.

What customers are saying

Already, CPG companies around the world have taken advantage of solutions powered by Google Cloud Cortex Framework to accelerate time to insight and time to value with less risk, complexity, and cost. Here’s what they have to say:

“Operating from Chile with exports of fish and shellfish to more than 50 countries across 5 continents worldwide makes supply chain and sustainability insights critical to our company.
We chose to implement Cortex Data Foundation to leverage our SAP data with other data sources in BigQuery for deeper visibility into our business. We migrated and upgraded our SAP ERP system to S/4HANA on Google Cloud in just 3 months and completed Cortex solution content installation and integration in parallel in just 2 weeks! We have been truly amazed by both the innovation and simplicity of the solution and were able to get started quickly. With new insights across our business, we can forecast faster and more accurately, which has unlocked innovation and agility not possible before.” — Pedro Aguirre, CIO, Camanchaca SA

Google Cloud Cortex Framework continues to grow in exciting new ways, and we look forward to announcing more of them soon. Right now, CPG companies can leverage the Cortex content provided and our partner ecosystem to power next-level intelligent operations with the flexibility and scalability of the cloud. Cortex lets organizations ingest high-volume data from multiple sources securely, quickly, and cost-efficiently, and combine SAP and external data into a single unified system where it is easily accessed to fuel smarter business intelligence.

To learn more about Google Cloud Cortex Framework, visit our solution site and tune in to our Google Cloud Next ’22 session. Get hands-on with our Cortex Data Foundation and Cortex Demand Sensing solutions today.
Source: Google Cloud Platform

Google Cloud VMware Engine – What’s New: Increased commercial flexibility, ease of use and more

We’ve made several updates to Google Cloud VMware Engine in the past few months — today’s post provides a recap of our latest milestones, making it easier and more cost-effective for you to migrate and run your vSphere workloads in a cloud-native, enterprise-grade VMware environment in Google Cloud. In January, we announced single-node private clouds, additional regions, PCI DSS compliance, and more.

Key updates this time around include:

- Inclusion of Google Cloud VMware Engine in the VMware Cloud Universal subscription program for increased commercial flexibility
- Preview of automation with Google Cloud API/CLI support
- Advanced migration capabilities with VMware HCX enterprise features included, at no additional cost
- Custom core counts to optimize application licensing costs
- Service availability in Zurich, with additional regions planned in Asia, Europe, and South America
- Traffic Director and Google Cloud VMware Engine integration for scaling web services and linking native GCP load balancers and the GCVE backends
- Dell PowerScale for GCVE, enabling in-guest NFS, SMB, and HDFS access for GCVE VMs
- Preview support for 96-node private clouds and stretched clusters, and roadmap inclusion of additional compliance certifications

Google Cloud VMware Engine inclusion in the VMware Cloud Universal subscription program: You can now purchase the Google Cloud VMware Engine offering as part of VMware Cloud Universal from VMware and VMware partners. The program can allow you to take advantage of savings through the VMware Cloud Acceleration Benefit and unused VMware Cloud Universal credits. It also streamlines consumption by enabling you to burn down your Google Cloud commits while purchasing from VMware. To learn more, please read this post.

Preview of Google Cloud API/CLI support for automation: Users can now enable automation at scale for VMware Engine infrastructure operations using the Google Cloud API/CLI. It also enables you to manage these environments using a standard toolchain consistent with the rest of Google Cloud (see the Python sketch at the end of this post). If you are interested in participating in this public preview, please contact your Google account team.

Custom core counts to optimize application licensing costs: To help customers manage and optimize their application licensing costs on Google Cloud VMware Engine, we introduced a capability called custom core counts — giving you the flexibility to configure your clusters to help meet your application-specific licensing requirements and reduce costs. You can set the required number of CPU cores at the time of cluster creation, selecting from a range of options, thereby effectively reducing the number of cores you may have to license for that application. To learn more, please read this post.

Advanced migration capabilities with HCX enterprise features included, at no additional cost: Private cloud creation now uses the VMware HCX Enterprise license level by default, enabling premium migration capabilities. The more noteworthy of these features include HCX Replication Assisted vMotion, which enables bulk, no-downtime migration from on-premises to Google Cloud VMware Engine, and Mobility Optimized Networking, which provides optimal traffic routing under certain scenarios to prevent network tromboning between on-premises and cloud-based resources on extended networks.
For more information on how to use HCX to migrate your workloads to Google Cloud VMware Engine, please read our documentation here.

Google Cloud VMware Engine is now available in the Zurich region: This brings the availability of the service to 14 regions globally, enabling our multinational and regional customers to leverage a VMware-compatible infrastructure-as-a-service platform on Google Cloud. In each of these regions, we support a four-nines (99.99%) SLA in a single zone.

Traffic Director and Google Cloud VMware Engine integration: Traffic Director, a fully managed control plane for service mesh, can be combined with our portfolio of load balancers and with hybrid network endpoint groups (hybrid NEGs) to provide a high-performance front end for web services hosted in VMware Engine. Traffic Director can also serve as the glue that links the native GCP load balancers and the VMware Engine backends, enabling new services such as Cloud CDN, Cloud Armor, and more. To learn more, please read this post.

Dell PowerScale for Google Cloud VMware Engine: Dell PowerScale is now available for in-guest access for VMware Engine VMs. This enables seamless migration from on-premises environments and provides customers more choice in scale-out storage for VMware Engine. PowerScale for Google Cloud in-guest access includes multiprotocol access with NFS, SMB, and HDFS, snapshots, native replication, AD integration, and shared storage between VMware Engine and Compute Engine instances. To learn more, check out Dell PowerScale for Google Cloud and Google Cloud VMware Engine.

Preview support for 96-node private clouds for increased scale, stretched clusters for HA, and roadmap inclusion of additional compliance certifications:

- [Preview] Increasing scale from up to 64 nodes per private cloud to a maximum of 96 nodes per private cloud. This enables larger customer environments to be supported with the same highly performant dedicated infrastructure and increases operational efficiency by managing such large environments with a single vCenter server.
- [Preview] With stretched clusters, a cluster is deployed across two availability zones in a region, with synchronous replication, enabling higher levels of availability and failure independence.
- [Roadmap] We are working on adding more compliance certifications: SOC 1, Information System Security Management and Assessment Program (ISMAP), and BSI C5.

Presence at VMware Explore 2022 and Google Next ‘22

We recently had the opportunity to connect with many of you and share these updates at VMware Explore in San Francisco. You can revisit our breakout sessions to learn more about how you can quickly migrate and transform your VMware workloads by viewing our on-demand content. You’ll find sessions that cover a plethora of topics, including migration, transformation with Google Cloud services, security, backup and disaster recovery, and more. We also have an exciting lineup of sessions and demos at VMware Explore in Barcelona in November – stay tuned for more information.

Join us at Google Next ‘22 for an exciting panel where you can hear how customers have used Google Cloud VMware Engine, which delivers a VMware stack running natively in Google Cloud without needing changes to existing applications, to reduce migration timelines, lower risk, and transform their businesses.

You can also get started by learning about Google Cloud VMware Engine and your options for migration, or talk to our sales team to join the customers who have embarked upon this journey.
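Finally, as a taste of the API/CLI preview mentioned above, here is a minimal sketch that lists the private clouds in a project, assuming the google-cloud-vmwareengine Python client library; the preview API surface may differ from what is shown here, and the project and zone are hypothetical placeholders:

```python
# A minimal sketch, assuming the google-cloud-vmwareengine Python client;
# the preview API surface may differ from what is shown here.
from google.cloud import vmwareengine_v1

client = vmwareengine_v1.VmwareEngineClient()

# Private clouds are zonal resources; the parent is a project/zone pair.
# "my-project" and the zone below are hypothetical placeholders.
parent = "projects/my-project/locations/us-central1-a"

for private_cloud in client.list_private_clouds(parent=parent):
    print(private_cloud.name, private_cloud.state)
```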
This brings us to the end of our updates this time around. For the latest updates to the service, please bookmark our release notes.
Source: Google Cloud Platform

What makes Google Cloud security special: Our reflections 1 year after joining OCISO

Editor’s note: Google Cloud’s Office of the Chief Information Security Officer (OCISO) is an expert team of cybersecurity leaders, including established industry CISOs, initially formed in 2019. Together they have more than 500 years of combined cybersecurity experience and leadership across industries including global healthcare, finance, telecommunications and media, government and public sector, and retail. Their goal is to meet customers where they are and help them take the best next steps to secure their enterprise. In this column, Taylor Lehmann, Director in OCISO, and David Stone, Security Consultant in OCISO, reflect on their first year with Google Cloud and the OCISO team.

After spending most of our careers helping secure some of the world’s most critical infrastructure and services, we joined Google Cloud because we wanted to help enterprises be safer with Google. One thing that became immediately apparent is that at Google Cloud, security is a primary ingredient baked into everything we do. We can provide organizations with an opportunity to deploy secure workloads on a secure platform, designed and maintained by thousands of security-obsessed Googlers with decades of experience defending against adversaries of all capability levels. Our engineering philosophies drive us to design products that are secure by design, secure by default, and constantly updated to incorporate lessons learned from our own research and from defeating attacks. Our existing customers know that our continuously improving cloud platform has security turned on and up before they set up their cloud identity and build their first project. The value of cloud technology can’t be overstated: it allows security teams to reduce their attack surface by removing entire categories of threats, because security has been engineered into the hardware and software from the ground up.

Dogfooding: A critical component of our security culture

Google helped popularize the practice of dogfooding, when a software company uses its own products before making them available to the general public. We also use dogfooding to drive the creation of advanced security technologies. Because we use the security technologies we sell, we never settle for just good enough — for Googlers (who have exceptionally high expectations for the technology they use), for customers, and for their users. In some cases, these technologies (such as BeyondCorp and BeyondProd, implementations of Zero Trust security models pioneered at Google) are available to us years before the broader need for them outside of Google is fully understood. Similarly, our Threat Analysis Group (TAG) began developing approaches to track and stop threats to Google’s systems and networks following lessons we learned in 2010. What’s unique about these initiatives (and newer ones like Chronicle) is not only how they came together, but how they continue to improve through our own dogfooding.

Embracing the shared fate model to better protect users

It’s important to update your thinking to keep pace with the ever-evolving cybersecurity landscape. The shared responsibility model, which establishes whether the customer or the cloud service provider (CSP) is responsible for various aspects of security, has guided security relationships and interactions since the early days of CSPs. At Google Cloud, we believe that it now stops short of helping customers achieve better security outcomes. Instead of shared responsibility, we believe in shared fate.
Shared fate includes us building and operating a trusted cloud platform for your workloads. We provide guidance on security best practices and secured, attested infrastructure-as-code patterns that you can use to deploy your workloads. We release solutions that combine Google Cloud services to solve complex security problems, and we offer innovative insurance options to help you measure and mitigate the risks that you must accept. Shared fate involves a closer interaction between us and you to secure your resources on Google Cloud. By sharing fate, we can create a system of mutual accountability and set the expectation that the CSP and its customers are actively involved in making each other secure and successful.

Establishing trust in our software supply chain

Software supply chains need to be better secured, and we believe Google’s approach is the most robust and well-rounded. We contribute to many public communities, such as the Linux Foundation, and use our Vulnerability Rewards Program to improve the security of software we open source for the world. We recently announced Assured Open Source Software, which seeks to maintain and secure select open source packages for customers the same way Google secures them for itself. Assured Open Source is yet another dogfood project, taking what we do at Google and externalizing it for everyone’s benefit.

A resilient ecosystem requires community participation

Being an active member of the community is a priority at Google, and can be a vital part of securing the critical infrastructure that we all rely on. We joined the Health-ISAC (Information Sharing and Analysis Center) as a partner this July. We’ve maintained relationships with the Financial Services ISAC, Auto-ISAC (for vehicle software security), Retail ISAC, and others for years. Sharing knowledge and guidance between our organizations can only help improve everyone’s ability to defend against the latest cybersecurity threats. We’re not just partners; we’re building close relationships with these organizations, pairing teams together to protect communities globally.

Top challenges during transformation

We believe the future is better running workloads on a trusted cloud platform like Google Cloud, but the journey there can be challenging. In feedback we’ve received over the past year, including from nearly 100 executive workshops and interactions we’ve led, our customers have shared their top challenges with us.
The seven most frequent ones are:

1. Evolving a software-defined perimeter where identity, not firewall rules, keeps the bad out and allows the good in;
2. Enabling secure, remote access capabilities that allow access to data and services anywhere and from any device;
3. Ensuring data stays in approved locations while allowing the enterprise to be agile and responsive to their stakeholders’ use cases;
4. Scaling effective security programs to match the business’s growth in consumption of infrastructure and cloud-native services;
5. Managing their attack surface in light of two facts: more than 42 billion devices are expected to be connected to the internet by 2025, and organizations are looking for ways to connect and leverage an ever-growing collection of data;
6. Analyzing and sharing data securely with third parties as businesses seek to leverage this information to get closer to customer needs while also generating more revenue; and finally,
7. Transforming teams by federating responsibilities for security outside of the security organization and establishing effective guardrails to safely constrain and protect the use of cloud resources.

The future is multi-cloud

An important point that we’ve learned, and that we’ve emphasized in our customer interactions over the past year, is that Google Cloud is not singularly focused on how to be successful only on our own platform. We focus on building technologies that meet customers where they are, create value for their organizations and customers, and reduce the operator toil needed to get there. It’s why we built Anthos, contribute to and support open source, and develop products like Chronicle, which work well no matter where you decide to deploy a workload — on-premises, on Google Cloud, or on another cloud.

At its heart, the cybersecurity community is its people and its technology. That’s why we’re investing $10 billion in cybersecurity over the next five years, why we work hard to improve DEI initiatives at Google and beyond, and why we provide accessible, free training and certification programs in security and cloud to democratize knowledge and build the next generation of cloud leaders.

We close out our first year thankful for the opportunity to work with so many customers, communities, partners, and governments around the world. We have learned and grown better at what we do from our experiences interacting with these groups. In the final months of this year and onwards into 2023, we will continue to find new ways to use Google’s resources to help customers, build products, and support the safety and security of societies around the world.
Source: Google Cloud Platform

Announcing the 2022 Accelerate State of DevOps Report: A deep dive into security

In 2021, more than 22 billion records were exposed because of data breaches, with several huge companies falling victim. Between that and other malicious attacks, security continues to be top of mind for organizations as they work to keep customer data safe and their businesses up and running. With this in mind, Google Cloud’s DevOps Research and Assessment (DORA) team decided to focus on security for the 2022 Accelerate State of DevOps Report, which is out today.

Over the past eight years, more than 33,000 professionals around the world have taken part in the Accelerate State of DevOps survey, making it the largest and longest-running research of its kind. Year after year, Accelerate State of DevOps Reports provide data-driven industry insights that examine the capabilities and practices that drive software delivery, as well as operational and organizational performance.

Securing the software supply chain

To analyze the relationship between security and DevOps, we explored the topic of software supply chain security, which the survey only touched upon lightly in previous years. To do this, we used the Supply-chain Levels for Software Artifacts (SLSA) framework, as well as NIST’s Secure Software Development Framework (SSDF). Together, these two frameworks allowed us to explore both the technical and non-technical aspects that influence how an organization implements and thinks about software security practices.

Overall, we found surprisingly broad adoption of emerging security practices, with a majority of respondents reporting at least partial adoption of every practice we asked about. Among all the practices that SLSA and the NIST SSDF promote, using application-level security scanning as part of continuous integration/continuous delivery (CI/CD) systems for production releases was the most common, with 63% of respondents saying this was “very” or “completely” established. Preserving code history and using build scripts are also highly established, while signing metadata and requiring a two-person review process have the most room for growth.

One thing we found surprising was that the biggest predictor of an organization’s software security practices was cultural, not technical: high-trust, low-blame cultures — as defined by Westrum — focused on performance were significantly more likely to adopt emerging security practices than low-trust, high-blame cultures focused on power or rules. Not only that, survey results indicate that teams that focus on establishing these security practices have reduced developer burnout and are more likely to recommend their team to someone else. To that end, the data indicate that organizational culture and modern development processes (such as continuous integration) are the biggest drivers of an organization’s software security and are the best place to start for organizations looking to improve their security posture.

What else is new in 2022?

This year’s focus on security didn’t stop us from exploring software delivery and operational performance. We classify DevOps teams using four key metrics: deployment frequency, lead time for changes, time to restore service, and change failure rate, as well as a fifth metric that we introduced last year, reliability.

Software delivery performance

Looking at these five metrics, respondents fell into three clusters – High, Medium, and Low. Unlike in years past, there was no evidence of an ‘Elite’ cluster.
When it came to software delivery performance, this year’s High cluster is a blend of last year’s High and Elite clusters. High performers are at a four-year low, while Low performers rose dramatically from 7% in 2021 to 19% in 2022. The Medium cluster, meanwhile, swelled to 69% of respondents. That said, if you compare this year’s Low, Medium, and High clusters with last year’s, you’ll see a shift toward slightly higher software delivery performance overall. This year’s High performers are performing better – their performance is a blend of last year’s High and Elite. Low performers are also performing better than last year – this year’s Low performers are a blend of last year’s Low and Medium.

We plan to conduct further research to better understand this shift, but for now, our hypothesis is that the ongoing pandemic may have hampered teams’ ability to share knowledge, collaborate, and innovate, contributing to a decrease in the number of High performers and an increase in the number of Low performers.

Operational performance

When it comes to DevOps, software delivery performance isn’t the whole picture — it can also contribute to the organization’s overall operational performance. To dive deeper, we performed a cluster analysis on the three categories the five metrics are designed to represent: throughput (a composite of lead time for code changes and deployment frequency), stability (a composite of time to restore service and change failure rate), and operational performance (reliability).

Through our data analysis, four distinct types of DevOps organizations emerged; these clusters differ notably in their practices and technical capabilities, so we broke them down a bit further:

- Starting: This cluster performs neither well nor poorly across any of our dimensions. These teams may be in the early stages of their product, feature, or service’s development. They may be less focused on reliability because they’re focused on getting feedback, understanding their product-market fit, and, more generally, exploring.
- Flowing: This cluster performs well across all characteristics: high reliability, high stability, and high throughput. Only 17% of respondents achieve this flow state.
- Slowing: Respondents in this cluster do not deploy often, but when they do, they are likely to succeed. Over a third of responses fall into this cluster, making it the most representative of our sample. This pattern is likely typical of (though far from exclusive to) a team that is incrementally improving, where both the team and its customers are mostly happy with the current state of the application or product.
- Retiring: Finally, this cluster looks like a team that is working on a service or application that is still valuable to them and their customers, but no longer under active development.

Are you in the Flowing cohort? While previous respondents followed this guidance to help them achieve Elite status, teams aiming for the Flowing cohort should focus on loosely coupled architectures, CI/CD, version control, and providing workplace flexibility. Be sure to check out our technical capabilities articles, which go into more detail on these competencies and how to implement them.
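Teams that want to track the four key metrics themselves can compute them from their own deployment records. Here is a minimal sketch using hypothetical data; it illustrates the metric definitions, not the survey’s own methodology:

```python
# A minimal sketch of the four key metrics, computed from hypothetical
# deployment records; illustrative only, not the survey's methodology.
from datetime import datetime, timedelta
from statistics import median

deployments = [  # hypothetical data: one record per production deployment
    {"committed": datetime(2022, 9, 1, 9), "deployed": datetime(2022, 9, 2, 17),
     "failed": False, "time_to_restore": None},
    {"committed": datetime(2022, 9, 5, 10), "deployed": datetime(2022, 9, 6, 12),
     "failed": True, "time_to_restore": timedelta(hours=3)},
    {"committed": datetime(2022, 9, 12, 14), "deployed": datetime(2022, 9, 13, 9),
     "failed": False, "time_to_restore": None},
]

days_observed = 30
deployment_frequency = len(deployments) / days_observed  # deploys per day
lead_time_for_changes = median(d["deployed"] - d["committed"] for d in deployments)
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
restores = [d["time_to_restore"] for d in deployments if d["failed"]]
time_to_restore_service = median(restores) if restores else None

print(f"Deployment frequency: {deployment_frequency:.2f} per day")
print(f"Lead time for changes (median): {lead_time_for_changes}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Time to restore service (median): {time_to_restore_service}")
```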
Show us how you use DORA

The State of DevOps Report is a great place to begin learning about ways your team can improve its DevOps performance, but it is also helpful to see how other organizations are already using the report to make a meaningful impact. Last year we launched the inaugural Google Cloud DevOps Awards, and this year we are excited to share the DevOps Awards Ebook, which includes 13 case studies from last year’s winning companies. Learn from companies like Deloitte, Lowe’s, and Virgin Media about how they successfully implemented DORA practices in their organizations. And be sure to apply to the 2022 DevOps Awards to share your organization’s transformation story!

Thanks to everyone who took our 2022 survey. We hope the Accelerate State of DevOps Report helps organizations of all sizes, industries, and regions improve their DevOps capabilities, and we look forward to hearing your thoughts and feedback. To learn more about the report and implementing DevOps with Google Cloud:

- Download the report
- Find out how your organization stacks up against others in your industry with the DevOps Quick Check
- Learn how you can implement DORA practices in your organization with our Enterprise Guidebook
- Model your organization around the DevOps capabilities of high-performing teams
Source: Google Cloud Platform

Google Cloud Deploy introduces post deployment verification

Google Cloud Deploy is introducing a new feature called deployment verification. With this feature, developers and operators can orchestrate and execute post-deployment testing without having to undertake a more extensive testing integration, such as using Cloud Deploy notifications or testing manually.

The 2021 State of DevOps report showed us that continuous testing is a strong predictor of successful continuous delivery. By incorporating early and frequent testing throughout the delivery process, with testers working alongside developers throughout, teams can iterate and make changes to their product, service, or application more quickly. What about performing post-delivery testing to determine whether certain conditions are met, to further validate a deployment? For many teams, the ability to run these tests is critical to their business and an oft-desired, table-stakes capability of a continuous delivery tool.

As shared in our previous post this past August, Cloud Deploy uses Skaffold for render and deploy operations. This new feature relies on a new Skaffold phase named ‘verify’, which allows developers and operators to add a list of test containers to be run post-deployment and monitored for success or failure.

How to use

We are going to use the python-hello-world app from the Cloud Code Samples to show how deployment verification works. With our Cloud Build trigger and file configured and our Cloud Deploy pipeline created, we can start to try the post-deployment verification feature.

First, we need to modify the skaffold.yaml to insert the new verify phase:

[Screenshot: skaffold.yaml with the verify phase]

The ability to use any container image (either standalone containers or ones built by Skaffold) gives developers the flexibility to perform anything from simple tests up to more complex scenarios. For this case, we are going to use ‘wget’ to check that the “/hello” page exists and is up (HTTP 200 response). Although we could use a Kubernetes readiness probe to check whether our application/pod is ready to receive requests, this new Cloud Deploy feature allows us to perform controlled, predefined tests. We can check application metrics and/or execute integration tests, for example.

Now let’s take a look at our clouddeploy.yaml. Post-deployment verification can be used for different targets based on different Skaffold profiles; in our case, the ‘dev’ target. We also need to configure the targets we want to have deployment verification, as highlighted below. This new strategy configuration allows for potential additional Cloud Deploy deployment strategies in the future; for now, we are going to use the standard one.

[Screenshot: clouddeploy.yaml with the verification configuration]

After these changes, we can trigger our CI/CD process using ‘gcloud builds submit’ or by pushing the code to the source repo in order to trigger Cloud Build. After the build phase (also known as continuous integration), Cloud Build will create a Google Cloud Deploy release and deploy it through the specified delivery pipeline onto our ‘dev’ target.

Important: Like Cloud Deploy rendering and deployment, the verification container runs in Cloud Build’s secure, hosted environment, not in the same environment as your application, so you need to expose the application to execute post-deployment verification, or you can use Cloud Build private pools.

To check the deployment status, open Cloud Deploy, then navigate to the delivery pipeline and click on the last release in the release list. On the release details page, select the last rollout from the rollout list.
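You can also check the rollout programmatically. Here is a minimal sketch assuming the google-cloud-deploy Python client; the project, pipeline, release, and rollout names are hypothetical placeholders:

```python
# A minimal sketch, assuming the google-cloud-deploy Python client; the
# project, pipeline, release, and rollout names are hypothetical placeholders.
from google.cloud import deploy_v1

client = deploy_v1.CloudDeployClient()

rollout_name = (
    "projects/my-project/locations/us-central1"
    "/deliveryPipelines/my-pipeline/releases/rel-001/rollouts/rel-001-to-dev-0001"
)

rollout = client.get_rollout(name=rollout_name)
# A rollout whose verification did not pass ends in the FAILED state.
print(rollout.state)
```

In the console, the same status is visible on the rollout details page: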
[Screenshot: successful verification logs]

The success logs above show that post-deployment verification succeeded. You can click on the verification logs to see the details. If we change the address of our ‘wget’ verification in skaffold.yaml and re-run the process, we can see what happens when verification fails.

[Screenshot: failed verification logs]

When deployment verification fails, the rollout also fails: all of the deployment verification tests have to pass. However, it’s possible to re-run post-deployment verification for a failed rollout, and you can receive a Pub/Sub notification when a verification is started and completed.

Try yourself!

The Google Cloud Deploy tutorials page has been updated with a deployment verification walkthrough. This interactive tutorial will take you through the steps to set up and use the Google Cloud Deploy service. The tutorial’s pipeline includes automated deployment verification, which runs checks at each stage to test whether the application has been successfully deployed.

The Future

Comprehensive, easy-to-use, and cost-effective DevOps tools are key to building an efficient software development team, and it’s our hope that Google Cloud Deploy will help you implement complete CI/CD pipelines. And we’re just getting started! Stay tuned as we introduce exciting new capabilities and features to Google Cloud Deploy in the months to come. In the meantime, check out the product page, documentation, quickstart, and tutorials. Finally, if you have feedback on Google Cloud Deploy, you can join the conversation. We look forward to hearing from you!
Source: Google Cloud Platform

RoQC and Microsoft simplify cloud migration with Microsoft Energy Data Services

This post was co-authored by Ian Barron, Chief Technology Officer, RoQC.

The vast amounts of data held by energy companies slow down their digital transformation. Together with RoQC solutions, Microsoft Energy Data Services will accelerate your journey toward democratizing access to data by providing an easy-to-deploy managed service fully supported by Microsoft.

Managing large data sets is complicated, and few industries have larger and more complex data sets than the energy industry. Data complexity and large investments in on-premises storage solutions and multitudes of computer systems prevent the transition to cloud-based subsurface data management. A single company can have tens of petabytes of structured and unstructured data which, if not quality-assured, can drive up costs.

Solutions from RoQC, a Norwegian software company, clean up structured data for energy companies. This makes data management more efficient from a time and cost perspective, and also makes decision-making more reliable.

With Microsoft Energy Data Services, energy companies can now leverage new cloud-based data management capabilities provided by RoQC on a fully supported Microsoft platform.

Microsoft Energy Data Services is a data platform fully supported by Microsoft that enables efficient data management, standardization, liberation, and consumption in energy exploration. The solution is a hyperscale data ecosystem that leverages the capabilities of the OSDU Data Platform™, Microsoft's secure and trustworthy cloud services, and our partners’ extensive domain expertise.

"Through machine learning, our software gives energy companies complete control of their data and assets. When the amounts of data are reduced, we eliminate uncertainty and duplication, and optimize the quality of the data sets. Traditionally a petrophysicist might spend a day or two cleaning up the logs for one well before they can be used for detailed analysis—with RoQC LogQA the same petrophysicist can clean hundreds of thousands of logs in the same timeframe. By cooperating with one of the largest platform providers in the world, we gain access to technology, competency, and markets it would be hard for us to get otherwise."—Bjørn Thorsen, CEO of RoQC.

New possibilities through cooperation

RoQC, a certified independent software vendor with Microsoft, has been able to expand its technology globally through the partnership.

Partner development manager for Microsoft Norway, Ole Christian Smerud, assures that the cooperation is mutually beneficial. "As a platform provider, we depend on strong partners to give our customers the best solutions. While we provide a platform, cloud competency, and access to an ecosystem for RoQC, they bring domain knowledge and relevance to their industry," he says.

Save millions with better data

RoQC believes that the energy industry struggles to take the step into the cloud simply because of data complexity and because most companies lack control over their data. By qualifying and quantifying data sets and by identifying and deleting duplicates, RoQC Tools can reduce data set size, with commensurately dramatic savings in storage costs.

When you reduce the amount of data by 10 to 30 percent, we’re talking millions of dollars in savings. The bigger the organization, the bigger the effect.

RoQC Tools are primarily designed so that data managers can perform usually time-consuming tasks as efficiently as possible. Very often they can complete a task that normally takes months in a minute or two. Sometimes, the tasks would not be possible at all without the tools.

There is an obvious and well-documented correlation between increasing the quality of your data and reducing the risk of decisions based on that data. Geoscientists and project leaders in this field make decisions worth millions, maybe billions. You don’t want to make a decision of that magnitude based on insufficient or weak data.

RoQC believes the energy companies’ data is the key to shifting away from fossil resources. In their data sets, subsea energy companies hold knowledge of "everything" about the ocean floor and the subsurface.

"Minerals from the ocean floor and sub-surface might be the next big thing for subsea oil-dependent nations like Norway. It is an already overused statement, but data is literally the new oil for this industry," says Bjørn Thorsen.

Preparing efficient data migration

RoQC provides both tools and consultants to enable a client to prepare their data prior to migrating it to Azure. This preparation can include everything from simply identifying and removing duplicates to developing and implementing standards and then cleaning the data to comply with those standards. These preparations can be done directly in the client’s normal (e.g., Halliburton/Schlumberger) interpretation platforms.

Furthermore, RoQC’s LogQA provides extremely powerful, native, machine learning–based QA and cleanup tools for log data once the data has been migrated to Microsoft Energy Data Services, an enterprise-grade OSDU Data Platform on the Microsoft Cloud.

LogQA monitors the quality of the well log data that a client has stored on the OSDU Data Platform. LogQA was partially developed in collaboration with Microsoft as part of Microsoft Energy Data Services, and it is maintained against the latest OSDU Data Platform APIs, versions, and schemas.

As LogQA is native to the Microsoft Cloud infrastructure, no customer deployment is required before a customer can use LogQA to monitor, identify, and rapidly rectify data quality issues. LogQA is designed to work with typical energy industry client datasets, which can contain millions of well logs.
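To give a sense of how an application might reach that data, here is a minimal sketch of querying well-log records through the standard OSDU Search API that Microsoft Energy Data Services exposes; the host, partition ID, record kind, and token handling below are hypothetical placeholders, not RoQC’s actual implementation:

```python
# A minimal sketch: query well-log records via the OSDU Search API exposed by
# Microsoft Energy Data Services. Host, partition ID, record kind, and token
# handling are hypothetical placeholders.
import requests

HOST = "https://my-instance.energy.azure.com"  # hypothetical instance
TOKEN = "<bearer-token>"  # obtained via Azure AD in practice

response = requests.post(
    f"{HOST}/api/search/v2/query",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "data-partition-id": "my-partition",  # hypothetical partition
        "Content-Type": "application/json",
    },
    json={
        "kind": "osdu:wks:work-product-component--WellLog:1.0.0",
        "limit": 10,
    },
)
response.raise_for_status()
for record in response.json().get("results", []):
    print(record.get("id"))
```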

How to work with RoQC Solutions on Microsoft Energy Data Services

For access to RoQC solutions, reach out to Bjørn Thorsen, CEO, RoQC Data Management AS, Norway at Bjorn@roqc.no.

Microsoft Energy Data Services is an enterprise-grade, fully managed OSDU Data Platform for the energy industry that is efficient, standardized, easy to deploy, and scalable for data management—for ingesting, aggregating, storing, searching, and retrieving data. The platform provides the scale, security, privacy, and compliance expected by our enterprise customers. It offers out-of-the-box compatibility with RoQC applications, which accelerates time-to-market and enables customers to run their domain workflows with ease and minimal effort on data contained in Microsoft Energy Data Services.

Get started with Microsoft Energy Data Services today.
Source: Azure