Google Cloud VMware Engine – What’s New: Increased commercial flexibility, ease of use and more

We’ve made several updates to Google Cloud VMware Engine in the past few months. Today’s post recaps our latest milestones, which make it easier and more cost-effective for you to migrate and run your vSphere workloads in a cloud-native, enterprise-grade VMware environment in Google Cloud. In January, we announced single-node private clouds, additional regions, PCI DSS compliance, and more.

Key updates this time around include:

- Inclusion of Google Cloud VMware Engine in the VMware Cloud Universal subscription program for increased commercial flexibility
- Preview of automation with Google Cloud API/CLI support
- Advanced migration capabilities with VMware HCX Enterprise features included, at no additional cost
- Custom core counts to optimize application licensing costs
- Service availability in Zurich, with additional regions planned in Asia, Europe, and South America
- Traffic Director and Google Cloud VMware Engine integration for scaling web services by linking native Google Cloud load balancers with VMware Engine backends
- Dell PowerScale for Google Cloud VMware Engine, enabling in-guest NFS, SMB, and HDFS access from VMware Engine VMs
- Preview support for 96-node private clouds and stretched clusters, plus roadmap inclusion of additional compliance certifications

Google Cloud VMware Engine inclusion in the VMware Cloud Universal subscription program: You can now purchase Google Cloud VMware Engine as part of VMware Cloud Universal from VMware and VMware partners. The program allows you to take advantage of savings through the VMware Cloud Acceleration Benefit and unused VMware Cloud Universal credits. It also streamlines consumption by letting you burn down your Google Cloud commitments while purchasing from VMware. To learn more, please read this post.

Preview of Google Cloud API/CLI support for automation: You can now enable automation at scale for VMware Engine infrastructure operations using the Google Cloud API/CLI.
The API/CLI also lets you manage these environments with a standard toolchain that is consistent with the rest of Google Cloud. If you are interested in participating in this public preview, please contact your Google account team.

Custom core counts to optimize application licensing costs: To help customers manage and optimize their application licensing costs on Google Cloud VMware Engine, we introduced a capability called custom core counts, giving you the flexibility to configure your clusters to meet application-specific licensing requirements and reduce costs. You set the required number of CPU cores at cluster creation time, selecting from a range of options, effectively reducing the number of cores you have to license for that application. To learn more, please read this post.

Advanced migration capabilities with HCX Enterprise features included, at no additional cost: Private cloud creation now uses the VMware HCX Enterprise license level by default, enabling premium migration capabilities. The most noteworthy of these are HCX Replication Assisted vMotion, which enables bulk, no-downtime migration from on-premises to Google Cloud VMware Engine, and Mobility Optimized Networking, which provides optimal traffic routing in certain scenarios to prevent network tromboning between on-premises and cloud-based resources on extended networks. For more information on how to use HCX to migrate your workloads to Google Cloud VMware Engine, please read our documentation here.

Google Cloud VMware Engine is now available in the Zurich region: This brings the service to 14 regions globally, enabling our multinational and regional customers to leverage a VMware-compatible infrastructure-as-a-service platform on Google Cloud.
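To make the custom core counts capability described above concrete, here is a small, purely illustrative calculation of how reducing the enabled cores per node shrinks the core count that a per-core licensed application would see. The node and core figures are hypothetical, not Google Cloud sizing or pricing guidance:

```python
def licensed_cores(node_count: int, cores_per_node: int) -> int:
    """Total physical cores that per-core licensed software would count."""
    return node_count * cores_per_node

# Hypothetical 4-node cluster: full core count vs. a reduced custom count
# chosen at cluster creation time.
full_count = licensed_cores(4, 36)       # 144 cores to license
custom_count = licensed_cores(4, 16)     # 64 cores to license
cores_saved = full_count - custom_count  # 80 fewer cores to license
```

The same cluster still runs on dedicated nodes; only the number of enabled cores, and therefore the licensing exposure, changes.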
In each of these regions, we support a four-nines (99.99%) SLA in a single zone.

Traffic Director and Google Cloud VMware Engine integration: Traffic Director, a fully managed control plane for service mesh, can be combined with our portfolio of load balancers and with hybrid network endpoint groups (hybrid NEGs) to provide a high-performance front end for web services hosted in VMware Engine. Traffic Director can also serve as the glue that links the native Google Cloud load balancers and the VMware Engine backends, enabling services such as Cloud CDN, Cloud Armor, and more. To learn more, please read this post.

Dell PowerScale for Google Cloud VMware Engine: Dell PowerScale is now available for in-guest access from VMware Engine VMs. This enables seamless migration from on-premises environments and gives customers more choice in scale-out storage for VMware Engine. PowerScale for Google Cloud in-guest access includes multiprotocol access with NFS, SMB, and HDFS; snapshots; native replication; Active Directory integration; and shared storage between VMware Engine and Compute Engine instances. To learn more, check out Dell PowerScale for Google Cloud and Google Cloud VMware Engine.

Preview support for 96-node private clouds for increased scale, stretched clusters for HA, and roadmap inclusion of additional compliance certifications:

- [Preview] Increased scale, from up to 64 nodes per private cloud to a maximum of 96 nodes per private cloud.
This enables larger customer environments to be supported on the same highly performant dedicated infrastructure, and increases operational efficiency by letting you manage such large environments with a single vCenter Server.
- [Preview] With stretched clusters, a cluster is deployed across two availability zones in a region with synchronous replication, enabling higher levels of availability and failure independence.
- [Roadmap] We are working on adding more compliance certifications: SOC 1, Information System Security Management and Assessment Program (ISMAP), and BSI C5.

Presence at VMware Explore 2022 and Google Next ‘22

We recently had the opportunity to connect with many of you and share these updates at VMware Explore in San Francisco. You can revisit our breakout sessions to learn more about how you can quickly migrate and transform your VMware workloads by viewing our on-demand content. You’ll find sessions that cover a plethora of topics including migration, transformation with Google Cloud services, security, backup and disaster recovery, and more. We also have an exciting lineup of sessions and demos at VMware Explore in Barcelona in November; stay tuned for more information.

Join us at Google Next ‘22 for an exciting panel where you can hear how customers have used Google Cloud VMware Engine, which delivers a VMware stack running natively in Google Cloud without requiring changes to existing applications, to reduce migration timelines, lower risk, and transform their businesses.

You can also get started by learning about Google Cloud VMware Engine and your options for migration, or talk to our sales team to join the customers who have embarked on this journey. This brings us to the end of our updates this time around.
For the latest updates to the service, please bookmark our release notes.

Related article: Running VMware in the cloud: How Google Cloud VMware Engine stacks up. Learn how Google Cloud VMware Engine provides unique capabilities to migrate and run VMware workloads natively in Google Cloud.
Source: Google Cloud Platform

What makes Google Cloud security special: Our reflections 1 year after joining OCISO

Editor’s note: Google Cloud’s Office of the Chief Information Security Officer (OCISO) is an expert team of cybersecurity leaders, including established industry CISOs, initially formed in 2019. Together they have more than 500 years of combined cybersecurity experience and leadership across industries including global healthcare, finance, telecommunications and media, government and public sector, and retail. Their goal is to meet customers where they are and help them take the best next steps to secure their enterprises. In this column, Taylor Lehmann, Director in OCISO, and David Stone, Security Consultant in OCISO, reflect on their first year with Google Cloud and the OCISO team.

After spending most of our careers helping secure some of the world’s most critical infrastructure and services, we joined Google Cloud because we wanted to help enterprises be safer with Google. One thing that became immediately apparent is that at Google Cloud, security is a primary ingredient baked into everything we do. We give organizations the opportunity to deploy secure workloads on a secure platform, designed and maintained by thousands of security-obsessed Googlers with decades of experience defending against adversaries of all capability levels. Our engineering philosophies drive us to design products that are secure by design, secure by default, and constantly updated to incorporate lessons learned from our own research and from defeating attacks. Our existing customers know that our continuously improving cloud platform has security turned on, and up, before they set up their cloud identity and build their first project.
The value of cloud technology can’t be overstated: it allows security teams to reduce their attack surface by removing entire categories of threats, because security has been engineered into the hardware and software from the ground up.

Dogfooding: A critical component of our security culture

Google helped popularize the practice of dogfooding, in which a software company uses its own products before making them available to the general public. We also use dogfooding to drive the creation of advanced security technologies. Because we use the security technologies we sell, we never settle for just good enough, whether for Googlers (who have exceptionally high expectations for the technology they use), for customers, or for their users. In some cases, these technologies (such as BeyondCorp and BeyondProd, implementations of Zero Trust security models pioneered at Google) are available to us years before the broader need for them outside of Google is fully understood. Similarly, our Threat Analysis Group (TAG) began developing approaches to track and stop threats to Google’s systems and networks following lessons we learned in 2010. What’s unique about these initiatives (and newer ones like Chronicle) is not only how they came together, but how they continue to improve through our own dogfooding.

Embracing the shared fate model to better protect users

It’s important to update your thinking to keep pace with the ever-evolving cybersecurity landscape. The shared responsibility model, which establishes whether the customer or the cloud service provider (CSP) is responsible for various aspects of security, has guided security relationships and interactions since the early days of CSPs. At Google Cloud, we believe that it now stops short of helping customers achieve better security outcomes. Instead of shared responsibility, we believe in shared fate. Shared fate includes us building and operating a trusted cloud platform for your workloads.
We provide guidance for security best practices and secured, attested infrastructure-as-code patterns that you can use to deploy your workloads. We release solutions that combine Google Cloud services to solve complex security problems, and we offer innovative insurance options to help you measure and mitigate the risks that you must accept. Shared fate involves a closer interaction between us and you to secure your resources on Google Cloud. By sharing fate, we create a system of mutual accountability and set the expectation that the CSP and its customers are actively involved in making each other secure and successful.

Establishing trust in our software supply chain

Software supply chains need to be better secured, and we believe Google’s approach to be the most robust and well-rounded. We contribute to many public communities, such as the Linux Foundation, and use our Vulnerability Rewards Program to improve the security of software we open source for the world. We recently announced Assured Open Source Software, which seeks to maintain and secure select open source packages for customers the same way Google secures them for itself. Assured Open Source is yet another dogfooding project, taking what we do at Google and externalizing it for everyone’s benefit.

A resilient ecosystem requires community participation

Being an active member of the community is a priority at Google, and can be a vital part of securing the critical infrastructure that we all rely on. We joined the Health-ISAC (Information Sharing and Analysis Center) as a partner this July. We’ve maintained relationships with the Financial Services ISAC, Auto ISAC (for vehicle software security), Retail ISAC, and others for years. Sharing knowledge and guidance between our organizations can only improve everyone’s ability to defend against the latest cybersecurity threats.
We’re not just partners; we’re building close relationships with these organizations, pairing teams together to protect communities globally.

Top challenges during transformation

We believe the future is better when workloads run on a trusted cloud platform like Google Cloud, but the journey there can be challenging. In feedback we’ve received over the past year, including from nearly 100 executive workshops and interactions we’ve led, our customers have shared their top challenges with us. The seven most frequent are:

- Evolving a software-defined perimeter where identity, not firewall rules, keeps the bad out and lets the good in
- Enabling secure remote access capabilities that allow access to data and services anywhere, from any device
- Ensuring data stays in approved locations while allowing the enterprise to be agile and responsive to its stakeholders’ use cases
- Scaling effective security programs to match the business’s growth in consumption of infrastructure and cloud-native services
- Managing their attack surface in light of two facts: more than 42 billion devices are expected to be connected to the internet by 2025, and organizations are looking for ways to connect and leverage an ever-growing collection of data
- Analyzing and sharing data securely with third parties as businesses seek to leverage this information to get closer to customer needs while also generating more revenue
- Transforming teams by federating responsibilities for security outside of the security organization and establishing effective guardrails to safely constrain and protect the use of cloud resources

The future is multi-cloud

An important point that we’ve learned, and that we’ve emphasized in our customer interactions over the past year, is that Google Cloud is not singularly focused on how to be successful only on our own platform.
We focus on building technologies that meet customers where they are, create value for their organizations and customers, and reduce the operator toil needed to get there. It’s why we built Anthos, contribute to and support open source, and develop products like Chronicle that work well no matter where you decide to deploy a workload: on-premises, on Google Cloud, or on another cloud.

At its heart, the cybersecurity community is its people and its technology. That’s why we’re investing $10 billion in cybersecurity over the next five years, why we work hard to improve DEI initiatives at Google and beyond, and why we provide accessible, free training and certification programs in security and cloud to democratize knowledge and build the next generation of cloud leaders.

We close out our first year thankful for the opportunity to work with so many customers, communities, partners, and governments around the world. We have learned from, and grown better at what we do through, our experiences interacting with these groups. In the final months of this year and onward into 2023, we will continue to find new ways to use Google’s resources to help customers, build products, and support the safety and security of societies around the world.
Source: Google Cloud Platform

Announcing the 2022 Accelerate State of DevOps Report: A deep dive into security

In 2021, more than 22 billion records were exposed because of data breaches, with several huge companies falling victim. Between that and other malicious attacks, security continues to be top of mind for organizations as they work to keep customer data safe and their businesses up and running. With this in mind, Google Cloud’s DevOps Research and Assessment (DORA) team decided to focus on security for the 2022 Accelerate State of DevOps Report, which is out today.

Over the past eight years, more than 33,000 professionals around the world have taken part in the Accelerate State of DevOps survey, making it the largest and longest-running research of its kind. Year after year, Accelerate State of DevOps Reports provide data-driven industry insights that examine the capabilities and practices that drive software delivery, as well as operational and organizational performance.

Securing the software supply chain

To analyze the relationship between security and DevOps, we explored the topic of software supply chain security, which the survey only touched on lightly in previous years. To do this, we used the Supply-chain Levels for Software Artifacts (SLSA) framework, as well as NIST’s Secure Software Development Framework (SSDF). Together, these two frameworks allowed us to explore both the technical and non-technical aspects that influence how an organization implements and thinks about software security practices.

Overall, we found surprisingly broad adoption of emerging security practices, with a majority of respondents reporting at least partial adoption of every practice we asked about. Among all the practices that SLSA and NIST SSDF promote, using application-level security scanning as part of continuous integration/continuous delivery (CI/CD) systems for production releases was the most common, with 63% of respondents saying this was “very” or “completely” established.
Preserving code history and using build scripts are also highly established, while signing metadata and requiring a two-person review process have the most room for growth.

One thing we found surprising was that the biggest predictor of an organization’s software security practices was cultural, not technical: high-trust, low-blame cultures focused on performance (as defined by Westrum) were significantly more likely to adopt emerging security practices than low-trust, high-blame cultures focused on power or rules. Not only that, survey results indicate that teams who focus on establishing these security practices have reduced developer burnout and are more likely to recommend their team to someone else. To that end, the data indicate that organizational culture and modern development processes (such as continuous integration) are the biggest drivers of an organization’s software security, and are the best place to start for organizations looking to improve their security posture.

What else is new in 2022?

This year’s focus on security didn’t stop us from exploring software delivery and operational performance. We classify DevOps teams using four key metrics: deployment frequency, lead time for changes, time to restore service, and change failure rate, along with a fifth metric, reliability, that we introduced last year.

Software delivery performance

Looking at these five metrics, respondents fell into three clusters: High, Medium, and Low. Unlike in years past, there was no evidence of an “Elite” cluster; when it came to software delivery performance, this year’s High cluster is a blend of last year’s High and Elite clusters.

As the percentage breakdowns in the table below show, High performers are at a four-year low, and Low performers rose dramatically, from 7% in 2021 to 19% in 2022. The Medium cluster, meanwhile, swelled to 69% of respondents.
That said, if you compare this year’s Low, Medium, and High clusters with last year’s, you’ll see a shift toward slightly higher software delivery performance overall. This year’s High performers are performing better: their performance is a blend of last year’s High and Elite. Low performers are also performing better than last year: this year’s Low performers are a blend of last year’s Low and Medium.

We plan to conduct further research to better understand this shift, but for now, our hypothesis is that the ongoing pandemic may have hampered teams’ ability to share knowledge, collaborate, and innovate, contributing to a decrease in the number of High performers and an increase in the number of Low performers.

Operational performance

When it comes to DevOps, software delivery performance isn’t the whole picture; it also contributes to the organization’s overall operational performance. To dive deeper, we performed a cluster analysis on the three categories the five metrics are designed to represent: throughput (a composite of lead time for code changes and deployment frequency), stability (a composite of time to restore service and change failure rate), and operational performance (reliability).

Through our data analysis, four distinct types of DevOps organizations emerged. These clusters differ notably in their practices and technical capabilities, so we broke them down a bit further:

- Starting: This cluster performs neither well nor poorly across any of our dimensions. These teams may be in the early stages of their product, feature, or service’s development. They may be less focused on reliability because they’re focused on getting feedback, understanding their product-market fit, and, more generally, exploring.
- Flowing: This cluster performs well across all characteristics: high reliability, high stability, and high throughput.
Only 17% of respondents achieve this flow state.
- Slowing: Respondents in this cluster do not deploy very often, but when they do, they are likely to succeed. Over a third of responses fall into this cluster, making it the most representative of our sample. This pattern is likely typical of (though far from exclusive to) a team that is incrementally improving, where the team and its customers are mostly happy with the current state of the application or product.
- Retiring: Finally, this cluster looks like a team working on a service or application that is still valuable to them and their customers, but no longer under active development.

Are you in the Flowing cohort? While previous respondents followed this guidance to help them achieve Elite status, teams aiming for the Flowing cohort should focus on loosely coupled architectures, CI/CD, version control, and providing workplace flexibility. Be sure to check out our technical capabilities articles, which go into more detail on these competencies and how to implement them.

Show us how you use DORA

The State of DevOps Report is a great place to begin learning about ways your team can improve its DevOps performance, but it is also helpful to see how other organizations are already using the report to make a meaningful impact. Last year we launched the inaugural Google Cloud DevOps Awards, and this year we are excited to share the DevOps Awards ebook, which includes 13 case studies from last year’s winning companies. Learn from companies like Deloitte, Lowe’s, and Virgin Media how they successfully implemented DORA practices in their organizations. And be sure to apply to the 2022 DevOps Awards to share your organization’s transformation story!

Thanks to everyone who took our 2022 survey.
We hope the Accelerate State of DevOps Report helps organizations of all sizes, industries, and regions improve their DevOps capabilities, and we look forward to hearing your thoughts and feedback. To learn more about the report and implementing DevOps with Google Cloud:

- Download the report
- Find out how your organization stacks up against others in your industry with the DevOps Quick Check
- Learn how you can implement DORA practices in your organization with our Enterprise Guidebook
- Model your organization around the DevOps capabilities of high-performing teams

Related article: DevOps for tech companies and startups: Learn from over 32,000 professionals on how to drive success with Google Cloud’s DORA research. The 2021 State of DevOps Report is live and we want to help your organization continue to thrive with Google Cloud’s best DevOps practices.
Source: Google Cloud Platform

Google Cloud Deploy introduces post deployment verification

Google Cloud Deploy is introducing a new feature called deployment verification. With this feature, developers and operators can orchestrate and execute post-deployment testing without having to build a more extensive testing integration, such as wiring up Cloud Deploy notifications or testing manually.

The 2021 State of DevOps report showed us that continuous testing is a strong predictor of successful continuous delivery. By incorporating early and frequent testing throughout the delivery process, with testers working alongside developers, teams can iterate and make changes to their product, service, or application more quickly. But what about performing post-delivery testing, to determine whether certain conditions are met and further validate a deployment? For most teams, the ability to run these tests is critical to their business and an often-requested, table-stakes capability for a continuous delivery tool.

As shared in our previous post this past August, Cloud Deploy uses Skaffold for render and deploy operations. This new feature relies on a new Skaffold phase named ‘verify’, which allows developers and operators to define a list of test containers to be run after deployment and monitored for success or failure.

How to use it

We are going to use the python-hello-world sample from the Cloud Code Samples to show how deployment verification works. With our Cloud Build trigger and file configured and our Cloud Deploy pipeline created, we can try out the post-deployment verification feature.

First, we modify the skaffold.yaml to insert the new verify phase:

Skaffold

The ability to use any container image (either a standalone container or one built by Skaffold) gives developers the flexibility to perform anything from simple tests up to more complex scenarios.
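As a rough sketch of what a verify phase in skaffold.yaml can look like (the image, service address, and manifest file name below are illustrative assumptions, not the exact sample code):

```yaml
apiVersion: skaffold/v3alpha1
kind: Config
manifests:
  rawYaml:
    - kubernetes.yaml
verify:
  - name: verify-hello-endpoint
    container:
      name: verify-hello-endpoint
      image: busybox
      command: ["sh", "-c"]
      # The container exits non-zero unless /hello returns HTTP 200,
      # which marks the verification (and the rollout) as failed.
      args: ["wget -q -O- http://APP_ADDRESS/hello"]
```

Replace APP_ADDRESS with the address your deployed service is reachable at from the verification environment.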
For this example, we use ‘wget’ to check that the “/hello” page exists and is up (HTTP 200 response). Although we could use a Kubernetes readiness probe to check whether our application pod is ready to receive requests, this new Cloud Deploy feature lets us perform controlled, pre-defined tests; we could check application metrics or execute integration tests, for example.

Now let’s take a look at our clouddeploy.yaml. Post-deployment verification can be enabled per target based on different Skaffold profiles, in our case the ‘dev’ target, and we need to configure which targets should run deployment verification, as highlighted below. This new strategy configuration allows for potential additional Cloud Deploy deployment strategies in the future; for now, we use the standard one.

Cloud Deploy

After these changes, we can trigger our CI/CD process using ‘gcloud builds submit’, or push the code to the source repo to trigger Cloud Build. After the build phase (also known as continuous integration), Cloud Build creates a Google Cloud Deploy release and deploys it through the specified delivery pipeline onto our ‘dev’ target.

Important: Like Cloud Deploy rendering and deployment, the verification container runs in Cloud Build’s secure, hosted environment, not in the same environment as your application. You therefore need to expose the application for post-deployment verification to reach it, or you can use Cloud Build private pools.

To check the deployment status, open Cloud Deploy, navigate to the delivery pipeline, and click the last release in the release list. On the release details page, select the last rollout from the rollout list.

Success Logs

The screenshot above shows that the post-deployment verification was successful. You can click on the verification logs to see the details.
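The per-target verification setting mentioned earlier in clouddeploy.yaml could be sketched roughly as follows (the pipeline name is illustrative; the key piece is the `verify: true` flag under the standard strategy for the ‘dev’ stage):

```yaml
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: my-demo-app-pipeline   # illustrative name
serialPipeline:
  stages:
    - targetId: dev
      profiles: ["dev"]
      strategy:
        standard:
          # Run the skaffold 'verify' phase after deploying to this target.
          verify: true
```

Targets without the `verify` flag simply skip the verification phase after deployment.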
If we change the address of our ‘wget’ verification in skaffold.yaml and re-run the process, we can see what happens when the verification fails.

Failure Logs

All deployment verification tests must pass; if any test fails, the rollout fails. However, it’s possible to re-run post-deployment verification for a failed rollout, and you can receive a Pub/Sub notification when a verification starts and completes.

Try it yourself

The Google Cloud Deploy tutorials page has been updated with a deployment verification walkthrough. This interactive tutorial takes you through the steps to set up and use the Google Cloud Deploy service with a pipeline that includes automated deployment verification, which runs checks at each stage to test whether the application has been successfully deployed.

The future

Comprehensive, easy-to-use, and cost-effective DevOps tools are key to building an efficient software development team, and it’s our hope that Google Cloud Deploy will help you implement complete CI/CD pipelines. And we’re just getting started! Stay tuned as we introduce exciting new capabilities and features to Google Cloud Deploy in the months to come. In the meantime, check out the product page, documentation, quickstart, and tutorials. Finally, if you have feedback on Google Cloud Deploy, you can join the conversation. We look forward to hearing from you!

Related article: Google Cloud Deploy gets continuous delivery productivity enhancements. In this latest release, Google Cloud Deploy got improved onboarding, delivery pipeline management, and additional enterprise features.

Related article: Google Cloud Deploy, now GA, makes it easier to do continuous delivery to GKE. The Google Cloud Deploy managed service, now GA, makes it easier to do continuous delivery to Google Kubernetes Engine.
Source: Google Cloud Platform

Introducing Cloud Logging – Log Analytics, powered by BigQuery

Logging is a critical part of the software development lifecycle, allowing developers to debug their apps, DevOps/SRE teams to troubleshoot issues, and security admins to analyze access. Cloud Logging provides a powerful pipeline to reliably ingest logs at scale and quickly find your logs. Today, we’re pleased to announce Log Analytics, a new set of features in Cloud Logging available in Preview. Powered by BigQuery, Log Analytics allows you to gain even more insights and value from your logs.

Introducing Log Analytics

Log Analytics brings entirely new capabilities to search, aggregate, and transform logs at query time directly in Cloud Logging, with a new user experience optimized for analyzing log data through the power of BigQuery, a cost-effective, serverless, multicloud data warehouse. With Log Analytics, you can now harness SQL (see figure 1) and the capabilities of BigQuery to analyze your logs.

Cloud Logging now offers the functionality you had in the past, plus analytical capabilities through Log Analytics:

- A secure, compliant, and scalable log ingestion pipeline through the Logs Router
- A managed logging-as-a-service solution with a specialized user interface for log analysis, supporting centralized logging across Google Cloud, other clouds, and on-premises
- Automated insights and suggestions, such as Error Reporting
- Log-based metrics and alerts for real-time aggregation, visualization, and alerting on logs
- Flexible pay-as-you-go pricing
- NEW: A powerful BigQuery engine and SQL option for ad hoc log processing
- NEW: Automatic read-only access to all Log Analytics logs in BigQuery
- NEW: Rich visualization of log data (figure 2, in Private Preview)

Why is Log Analytics powerful?

Log Analytics leverages the power of BigQuery to let Cloud Logging users perform analytics on log data.
- Centralized logging – By collecting and centrally storing log data in a dedicated log bucket, Log Analytics lets multiple stakeholders work with the same data source. You don’t need to make duplicate copies of the data.
- Reduced cost and complexity – Log Analytics allows reuse of data across the organization, effectively saving cost and reducing complexity.
- Ad hoc log analysis – It allows ad hoc, query-time log analysis without requiring complex pre-processing.
- Scalable platform – Log Analytics can scale for observability using the serverless BigQuery platform and perform aggregation efficiently at petabyte scale.

Log Analytics is designed for multiple users in an organization and aims to break down silos. Here are the top categories we hear from our users:

- Developers and DevOps use it for infrastructure and application troubleshooting
- Security teams use it for audit log analysis
- Networking professionals use it to perform network log analysis
- Business operations teams can manipulate the data, create KPIs, and, in the future, create dashboards

Pricing

Log Analytics is included in the standard Cloud Logging pricing. Queries submitted through the Log Analytics user interface do not incur any additional cost. Enabling analysis in BigQuery is optional and, if enabled, queries submitted against the BigQuery linked dataset (including from Data Studio, Looker, and the BigQuery API) incur the standard BigQuery query cost.

Get started

Visit the Log Analytics page in the Cloud Console and upgrade an existing log bucket or create a new one. Check out our sample queries to get you started. Charting in Log Analytics is available now as a private preview (sign up here). In the next blog post, we will talk about how and when to leverage Log Analytics, how to get started with it, and dive into a few common use cases.
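To make the upgrade step concrete, here is a hypothetical sketch of doing it from the command line rather than the Cloud Console. The bucket name and location are placeholders, and flag availability may change while the feature is in Preview, so verify against `gcloud logging buckets update --help` before relying on it.

```
# Upgrade an existing log bucket so its logs are available
# in Log Analytics. "_Default" and "global" are placeholders
# for your bucket ID and location.
gcloud logging buckets update _Default \
    --location=global \
    --enable-analytics
```

Note that upgrading a bucket is a one-way operation for that bucket, so it’s worth trying on a test project first.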
You can join the discussion on our Cloud Operations page on the Google Cloud Community site.

Related article: Announcing new simple query options in Cloud Logging. The faster you can find logs, the faster you can resolve issues. Today, we’re pleased to announce a simpler way to find logs in Logs Expl…
Source: Google Cloud Platform

4 steps to get the most out of your Google Cloud Next experience

I’ll be honest: I’ve been dreaming about Google Cloud Next. Not dreaming as in “wistfully anticipating” – dreaming as in “this is all my brain wants to think about, even while I’m asleep.” It’s going to be incredible. It’s going to be global. One of a kind. Inclusive. Truly personalized. If you can’t tell, I’m excited.

Next ’22 is right around the corner now (October 11–13). Just 14 days until you can dive into the latest innovations, hear from Google experts, get inspired by what your peers are doing with technology, and try out a new skill in one of the lab sessions. I’m thrilled to share a little more about what you can look forward to this year.

Introducing the catalog

The session catalog is live and ready for you to explore. It highlights each session, its speakers, who the content is for, and what you’ll learn. Here’s a preview:

Build – for application developers
What you’ll learn: how to build, architect, deploy, and maintain applications on Google Cloud.
One session to check out: Building a serverless event-driven web app in under 10 mins will take a use case, break it down into composable pieces, and build an end-to-end application using the Google Cloud serverless portfolio of products.

Analyze – for data analysts and data scientists
What you’ll learn: how to model your data and optimize business insights, using the power of analytics, AI, and machine learning.
One session to check out: What’s next for data analysts and data scientists will illuminate the “why” and “how” of operationalizing data analytics and AI, and let you in on the latest product innovations for BigQuery and Vertex AI.

Design – for data engineers
What you’ll learn: all about tools for developing, deploying, and managing data-driven applications at scale to solve real-world problems.
One session to check out: What’s next for data engineers can help you understand how to navigate the pressure for increased agility by unifying your data across analytical and transactional systems. Plus, get the latest product innovations across Spanner, AlloyDB, Cloud SQL, and BigQuery.

Modernize – for enterprise architects and developers
What you’ll learn: how to make your cloud modernization easy with multicloud support, intuitive migration tools, and solutions for SAP and VMware.
One session to check out: What’s next for enterprise architects and developers will reveal exciting new enhancements to the infrastructure portfolio, help you transform and optimize your infrastructure and spend, and share experiences from entertainment and AI industry leaders.

Operate – for DevOps system administrators
What you’ll learn: how to leverage Google Cloud to test, monitor, and deploy code easily and quickly.
One session to check out: What’s next for DevOps, SysAdmins, and operators is your opportunity to hear the biggest announcements for the Ops space. Come learn about all the new services and features in store for you.

Secure – for security professionals
What you’ll learn: how to defend against emerging threats at modern scale and efficiency.
One session to check out: Meeting your digital sovereignty requirements: best practices, resources, & peer insights offers tools, strategic partners, and stories from peers on the best ways to meet the evolving requirements for digital sovereignty, including data residency, access and operational controls, and survivability.

Collaborate – for business leaders and IT administrators
What you’ll learn: how to empower teams to connect, create, and collaborate securely from anywhere, anytime.
One session to check out: Boosting collaboration in the hybrid workplace covers tools to help employees deliver their best in the new norm of hybrid work, keeping your teams connected and collaborating whether they’re working from home, the office, or anywhere in between.

Innovate – for executives and technology business leaders
What you’ll learn: how your peers have managed large transformative projects, programs, and assessments across technology domains.
One session to check out: Transform digital experiences with Google AI powered search and recommendations will show you how you can increase conversions and reduce search abandonment with Google-quality search and recommendations on your digital properties.

Clocking in at over 140 sessions across three days, the catalog has, shall we say, a lot to offer for everyone. There is so much to learn. Fortunately, there’s a handy way to organize your favorite sessions: by building a playlist.

Playlists

Making a playlist is your ticket to getting the best of Next. Playlists are also a big part of how we take a massive global event and make it personalized, just for you. Easily keep track of the stuff that matters to you and make the most of your time.

My playlist

I’ll go first. I made a playlist called Changemakers in Cloud. The “best” product with the coolest features means nothing if no one realizes actual value from it. When a customer does something totally new, makes an impact on something big like climate change, or changes someone’s life for the better using one of our products, that’s what makes what we all do together worth it. Each session in my playlist features a customer speaker sharing how they’ve driven real change with purpose using Google Cloud. You’re invited to browse the rest of the Google-curated playlists, too. Make sure to switch between the tabs above the playlist titles to see the different categories.

Create your own

Your turn. After you register for Next ’22:

1. Find a session you want to attend and click the blue icon in the lower right corner of the session tile.
2. Click + Create new playlist.
3. Give it any name and description you like, then click Create. Et voilà, you have just created a playlist.
4. When you find another session you want to add, click that blue icon in the session tile and then click your playlist title. Repeat until you’ve saved everything you want to attend.

To see how the list is coming along, click the blue My Playlists button in the upper right corner of the Next website. You’ll have your plan laid out before you know it.

Don’t miss the moment

I’m one of countless Googlers bursting with excitement and anticipation for the moment we are all working hard to bring to you this year, completely free for all, kicking off October 11. Register for Next today and join us live to explore what’s new and what’s coming next in Google Cloud. Can’t wait to see you there.

Related article: Register for Google Cloud Next. Register now for Google Cloud Next ‘22, coming live to a city near you, as well as online and on demand.
Source: Google Cloud Platform

Get a head start with no-cost learning challenges before Next ‘22

Google Cloud Next is just two weeks away, taking place October 11–13, and we’re giving developers across the globe the chance to get a head start with no-cost learning opportunities. By registering now for Next ‘22, you’ll get early access to #GoogleClout challenges designed for Next attendees, including the recently announced Google Cloud Fly Cup challenge.

Already registered? Then you can dive straight in. Explore the Next ‘22 agenda and navigate to the Developer Zone, the hub for all developer experiences at Next. Check out the latest #GoogleClout challenges with opportunities to win great prizes, take your cloud skills to the next level with the Google Cloud Fly Cup Challenge, then tune in for Google Cloud certification sessions and the Innovators Hive livestream.

Flex your #GoogleClout and win the hottest book in cloud

Test your cloud knowledge against participants worldwide in the #GoogleClout challenge, a no-cost, 20-minute competition posted each Wednesday. Race the clock to see how fast you can complete the challenge; the faster you go, the higher your score. How it works:

1. Register for Google Cloud Next
2. Race to complete the six challenges in the #GoogleClout game before time runs out on October 13
3. Share your scores on social media using the #GoogleClout hashtag
4. Complete the six challenges by October 13 to earn a special digital badge, plus an e-copy of Priyanka Vergadia’s bestselling book “Visualizing Google Cloud”

Take your data analytics skills to new heights with the Drone Racing League

The Google Cloud Fly Cup Challenge is a new three-stage, developer-focused competition to help boost cloud skills and drive innovation in the sport of drone racing. Using Drone Racing League (DRL) race data and Google Cloud analytics tools, developers of any skill level will be able to predict race outcomes and provide tips to DRL pilots to help enhance their season performance.
Compete for the chance to win an expenses-paid trip to the season finale of the DRL 2022–23 World Championship and be celebrated on stage.

Tune in for the Innovators Hive broadcast and Google Cloud certification sessions at Next

Innovators Hive is broadcasting from Germany, India, Japan, and the USA. You’ll hear from Google Cloud executives and engineers about new cloud technologies to help you build more, and to do it better and faster. Or are you looking to invest in your cloud career progression? Choose from the six Google Cloud certification sessions available, whether you’re growing your career in app modernization, data, infrastructure modernization, Workspace administration, or digital transformation. Hear from certified experts about the benefits of pursuing your certification path and the best preparation resources, and unlock exclusive learning offers. Register for Next and subscribe to the playlist.

Ready to start your challenge and explore Google Cloud certification? Make sure to register for Next ‘22, check out the no-cost learning challenges in the Developer Zone today, and create a playlist to join the Google Cloud certification sessions.

Related article: Sign up for the Google Cloud Fly Cup Challenge. Learn more about how to participate in the Google Cloud Fly Cup, brought to you in partnership with The Drone Racing League.
Source: Google Cloud Platform

Google Cloud Deploy adds Cloud Run and deployment verification support

Google Cloud customers want to be able to easily deploy their applications to the full breadth of platforms that we offer, including Cloud Run. And when they push out code to production, they want confirmation that the deployment was successful. Today, we’re pleased to announce the Preview availability of Cloud Run targets and deployment verification for Google Cloud Deploy.

Deploy to Cloud Run

Support for Cloud Run, our managed serverless container runtime, has been a top feature request for Google Cloud Deploy. It’s not hard to understand why: adding a Cloud Run target to Google Cloud Deploy makes it easier to develop and deliver your enterprise applications.

Available in Preview, delivery pipelines can now specify and deploy to Cloud Run targets, enabling continuous delivery of Cloud Run services. All the continuous delivery capabilities that Google Cloud Deploy provides for other targets – rollback, approval, audit, and delivery metrics, to name just a few – are also available for Cloud Run targets. This consistency and feature parity allow platform operators and application developers to manage and reason about their application delivery pipelines in the same way, regardless of the runtime target.

This consistency is enabled by Skaffold, an open-source cloud-native tool developed by Google that’s the foundation of Cloud Deploy. With the recent 2.0 beta 2 release, Skaffold users can now develop and deploy Cloud Run services just as they already do for Google Kubernetes Engine and Anthos clusters, making Skaffold workflows a consistent point of adoption and extension for Google Cloud Deploy.

[Figure: Continuous delivery pipeline with two Cloud Run targets]

Verify your deployment

The success or failure of a deployment frequently involves more than just rolling out an artifact to a target platform – it also involves testing to further confirm the deployment, often in the form of automated integration and canary testing.
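To make the Cloud Run target idea concrete, here is a minimal sketch of what a Cloud Deploy Target definition pointing at Cloud Run can look like. The project ID, region, and names are placeholders; consult the Cloud Deploy configuration reference for the authoritative schema.

```yaml
# Illustrative Cloud Deploy target for a Cloud Run service.
# "my-project" and "us-central1" are placeholders.
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: run-prod
description: Cloud Run production target
run:
  location: projects/my-project/locations/us-central1
```

A delivery pipeline then references this target by its name (run-prod) in its stage list, exactly as it would a GKE or Anthos target.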
Customers told us they wanted formal support for deployment verification within Google Cloud Deploy. And when a deployment succeeds but a post-deployment verification test fails, the rollout should be identified as a failure, too.

Within Google Cloud Deploy, you can now specify one or more testing containers to execute immediately after an application is successfully deployed. This deployment verification support is based on Skaffold 2.0’s recently introduced verify command. You can use any process that runs in a container to verify the state of the application: an example could be as simple as issuing a curl command, or more complex, like validating all of the links via a third-party tool, or even gathering performance metrics. Verifying a deployment is as easy as configuring Skaffold to test the deployment (via the verify command), then specifying ‘verify: true’ in the Cloud Deploy delivery pipeline’s progression sequence.

As with render and deploy operations, deployment verification in Google Cloud Deploy is performed in its own execution environment. This allows for custom verification configurations using a specified worker pool or service account, and for storing results in a preferred Cloud Storage location. Verification results are factored in when determining whether the rollout was a success or a failure. When a deployment verification failure occurs, it’s easy to inspect the logs and, if necessary, rerun the deployment verification without having to re-deploy. Deployment verification is available for all target types, including Cloud Run.

[Figure: Deployment verification status and results in rollout details]

The future

Comprehensive, easy-to-use, and cost-effective DevOps tools are key to building an efficient software delivery capability, and it’s our hope that Google Cloud Deploy will help you implement complete CI/CD pipelines. And we’re just getting started.
Stay tuned as we introduce exciting new capabilities and features to Google Cloud Deploy in the months to come. In the meantime, check out the product page, documentation, quickstart, and tutorials. Finally, if you have feedback on Google Cloud Deploy, you can join the conversation. We look forward to hearing from you.

Related article: Google Cloud Deploy, now GA, makes it easier to do continuous delivery to GKE. The Google Cloud Deploy managed service, now GA, makes it easier to do continuous delivery to Google Kubernetes Engine.
Source: Google Cloud Platform

Adore Me embraces the power and flexibility of Looker and Google Cloud

You don’t have to work in the women’s clothing business to know that one size doesn’t fit all. Adore Me pioneered the try-at-home shopping service, helping to ensure that every woman can feel good in what she wears. I’ve been lucky enough to have played a part in our growth and success over the years. Now, data is transforming every aspect of how we work, shop, and do business, making these last few years especially exciting. But I’m often asked how we use data here at Adore Me, so I thought I’d share some of the obstacles we’ve encountered, how we resolved them, and offer up a few pointers that I hope others will find helpful.

Freeing up teams from getting to the data, to use the data more effectively

It’s no secret: the less time we spend getting to the data, the more time we have to actually use it to support our business. Getting an online shopping service off the ground brings complexity into every part of our business. We quickly discovered that giving everyone in-house the ability to make smart, data-driven decisions resulted in fewer errors and fewer choices that slowed down the business, driving better results for the company and our customers.

Once I got my nose out of code and started looking around for ways I could help the business make the most of its data, Looker and BigQuery quickly fell into place as the solutions we needed. In BigQuery, we found a centralized, self-managed database that reduced management overhead. And once all our data was in place, Looker had the most significant impact on our overall productivity, particularly around efficiency and reducing the human hours previously spent waiting for data and sharing results between teams. With Looker, we saved time on both ends: in gathering the data and in sharing the insights it revealed with those who needed them most. What’s remarkable about the BigQuery and Looker combination is how much we can accomplish with relatively small teams.
We have our Business Intelligence team, the Data Engineering team, and the Data Science team. These are our ‘data people’, who bring in the data rather than consume it. Then we have our power users, who need quick insights from that data and therefore rely on Looker to access up-to-the-minute data when they need it. Empowering everyone with data consistently pays off, and it’s a much better use of our time than hammering away at SQL.

Surfacing data insights that lead to action

Data permeates everything we do at Adore Me because we believe that a smarter business results in happier customers. Data helps us run interference, identify problems, and find a fix in real time, whether that’s optimizing our delivery times or tracking lost packages. On the business planning side, our data reveals what our customers are looking at on our site. This gives us insight into their interests, what’s trending, and what they want to see more of, which in turn also helps to inform our marketing strategies.

As an online shop, driving traffic to the site is critical to Adore Me’s business. With real-time data at our disposal, we’re able to determine which campaigns are the most effective and which markets are best suited for a specific message, so we can intelligently refine our campaigns during peak seasons. With the data in BigQuery and insights surfaced by Looker, we can deliver the products and services our customers want most on our site.

Enabling continuous improvement with a flexible infrastructure

Ultimately, we want to have all of our production-critical data in BigQuery and Looker, acting as an easy-to-manage single source of truth. Data lives where we can easily access it, see it, and analyze it. We can set the rules for all of our KPIs, and everyone is able to look at the same data in order to work towards achieving them together.
What makes Google Cloud Platform so powerful is the suite of products and services that allow our teams to experiment with data in ways that are relevant to our particular business needs. For example, when working with new data sources, we need the ability to quickly visualize a .csv file, and Google Data Studio is the perfect tool for that. If we find something that we want to bring into production, BigQuery makes it easy, while modeling it in Looker speeds up the process. This is one way we are constantly improving and enriching our organization’s data capabilities.

Making it easy to find the right tools for the job

Our teams have discovered that the variety of solutions offered by Google Cloud is ideal for addressing the evolving data challenges we face. Flexibility is critical in business today, and Google Cloud provides a major advantage to those who embrace a proof-of-concept mentality, which is why we take advantage of the free Google Cloud trials on offer. They allow us to roll a product into a project, test-drive it for a few days, and fail fast if necessary. No contracts. No hassle. Better still, the variety of products, their ease of use, and overall versatility make it a good bet that we’ll find a solution that works for us.

Anyone with experience working with data will tell you that there’s no shortage of fly-by-night tools out there. But personal experience has shown us that, at the end of the day, success comes down to the strength of your team and choosing the right tools to get the job done. At Adore Me, we’ve built a fantastic team and, with the power of Looker and BigQuery, the sky’s the limit.
Source: Google Cloud Platform

How Google Cloud and Fitbit are building a better view of health for hospitals, with analytics and insights in the cloud

Great technology gives us new ways of seeing and working with the world. The microscope enabled new scientific understanding. Trains and telegraphs, in different ways, changed the way we think about distance. Today, cloud computing is changing how we can assist in improving human health.

The healthcare system has historically centered on a visit to the doctor, sometimes coupled with a hospital stay. These are deeply important events, where tests are done, information on the patient is gathered, and a consultation is set up. But this structure also has limits: multiple visits are inconvenient and potentially distressing for patients, expensive for the healthcare system and, at best, provide a view of patient health at a specific point in time.

But what if that snapshot of health could be supplemented with a stream of patient information that the doctor could observe and use to help predict and prevent diseases? By harnessing advancements in wearables—devices that sense temperature, heart rate, and oxygen levels—combined with the power of cloud and artificial intelligence (AI) technologies, it is possible to develop a more accurate understanding of patient health.

This broader perspective is the goal of a collaboration between cardiologists at The Hague’s Haga Teaching Hospital, Fitbit—one of the world’s leading wearables that tracks activity, sleep, stress, heart rate, and more—and Google Cloud.

Initially focusing on 100 individuals who have been identified as at risk of developing heart disease, during a pilot study (ME-TIME), cardiologists at the hospital will give patients a Fitbit Charge 5—Fitbit’s latest activity and health tracker with ECG monitoring1—to wear at home after an initial consultation. With user consent, the devices will send information about certain patient behavioural metrics to the hospital via the cloud, in an encrypted state.
This data is accessed only by Haga Teaching Hospital–approved physicians and data scientists at the hospital and is not used by Haga for any purposes other than medical research during the study.2

With user consent, the data, which includes the amount of physical activity a patient is undertaking, will be monitored by Haga’s physicians against other clinical information already gathered about the individual by the hospital during prior consultations. With user consent, Haga Teaching Hospital will also compare the data against its other relevant pseudonymized experience data, so the hospital can learn more about potential patterns and abnormalities associated with certain heart conditions.

This is made possible by Google Cloud’s infrastructure, which will be used to store the encrypted data at scale, while artificial intelligence (AI) and data analytics tools will power near real-time analysis. For example, predictive analytics on this data could help identify early signs of a life-threatening condition such as a heart attack or stroke, so doctors can investigate further and provide preventative treatment, even before symptoms arise.

Haga is using Device Connect for Fitbit, a new solution from Google Cloud, as part of the trial. Now available for healthcare and life sciences enterprises, the solution empowers business leaders and clinicians with accelerated analytics and insights from consenting users’ Fitbit data, powered by Google Cloud.3

The project is a collaboration with partner Omnigen, which has supported Haga with deployment as well as the processing and analysis of data. Other hospitals in the Netherlands are already expressing interest in participating in similar projects. Longer term, we see applications to help healthcare professionals develop a deeper understanding of overall population health, reduce unnecessary visits to the hospital, and better operate the wider healthcare system.
Preliminary results of the project may be available as early as the end of this year.

“Health is a precious commodity. You realise that all the more if you are struck down by an illness. If you can prevent it or catch it in time so that it can be treated, you have gained a great deal,” said cardiologist Dr. Ivo van der Bilt of Haga Teaching Hospital, who has been leading this collaboration. “Digital tools and technologies like those provided by Google Cloud and Fitbit open up a world of possibilities for healthcare, and a new era of even more accessible medicine is possible.”

“This collaboration shows how Fitbit can help support innovation in population health, helping healthcare systems and care programmes create more efficient and effective care pathways that aren’t always tied to primary or secondary care settings. Plus, it provides patients with tools to help them with their health and wellbeing each day, with metrics which can be overseen by clinical care teams,” said Nicola Maxwell, Head of Fitbit Health Solutions, Europe, Middle East & Africa.

This collaboration is an important step towards creating a more dynamic, rich, and holistic understanding of human health for hospitals, carried out with a strong emphasis on transparency. We are proud to be part of a project that we expect can help patients and healthcare workers alike. We believe this is only the start of what’s possible in healthcare with digital tools like Fitbit and cloud computing.

1. The Fitbit ECG app is only available in select countries. Not intended for use by people under 22 years old. See fitbit.com/ecg for additional details.
2. Haga Teaching Hospital is responsible for any consents, notices or other specific conditions as may be required to permit any accessing, storing, and other processing of this data. Google Cloud does not have control over the data used in this study, which belongs to Haga Teaching Hospital.
More generally, Google’s interactions with Fitbit are subject to strict legal requirements, including with respect to how Google accesses and handles relevant Fitbit health and wellness data. Details on these obligations can be found here.
3. This is the same data as that made available through the Fitbit Web API, which the Device Connect integration is built on.

Related article: Introducing Device Connect for Fitbit. How Google Cloud and Fitbit are working together to help people live healthier lives with Device Connect for Fitbit.
Source: Google Cloud Platform