5 resources to help you get started with SRE

Site reliability engineering (SRE) is an essential part of engineering at Google: it's a mindset, plus a set of practices, metrics, and prescriptive ways to ensure system reliability. But not everyone knows the best place to start implementing SRE in their own organization. Here are our top Google Cloud resources for getting started.

1. Do you have an SRE team yet? How to start and assess your journey

We're often asked what implementing SRE means in practice, since our customers face challenges quantifying their success when setting up their own SRE practices. In this post, we share a couple of checklists for members of an organization responsible for high-reliability services. They will be useful when you're trying to move your team toward an SRE model. Implementing this model can benefit both your services and your teams through higher service reliability, lower operational cost, and higher-value work for everyone on the team.

2. SRE fundamentals: SLIs, SLAs and SLOs

Core to the definition of SRE is the idea that metrics should be closely tied to business objectives. Thus, a big part of an SRE's day-to-day work is establishing and monitoring these service-level metrics. At Google, we use several essential measurements (SLOs, SLAs, and SLIs) in SRE planning and practice. This post gives you an overview of what each of these acronyms means and how to incorporate them.

3. How SRE teams are organized, and how to get started

You know what SREs do and understand which best practices to implement at various levels of SRE maturity. Now you're ready to take the next step and set up your own SRE team. In this post, we cover how different implementations of SRE teams establish boundaries to achieve their goals. We describe six implementations we've experienced, along with what we have observed to be their most important pros and cons.

4. Meeting reliability challenges with SRE principles

Through years of work using SRE principles, we've found a few common challenges that teams face, and some important ways to meet or avoid those challenges. Learn what we at Google think are the three top sources of production stress and how we recommend addressing them.

5. Transitioning a typical engineering ops team into an SRE powerhouse

Perpetually adding engineers to ops teams to meet customer growth doesn't scale. Google's SRE principles can help, bringing software engineering solutions to operational problems. In this post, we look at how we transformed our global network ops team by abandoning traditional network engineering orthodoxy and replacing it with SRE. You'll learn how Google's production networking team tackled this problem, and how you might incorporate SRE principles in your own organization.

Lots more to read

Can't wait to read more about SRE? We wrote an entire book on SRE to help you get started (actually, we've written more than one). You can also find all our DevOps and SRE blog content, or follow our columns on Customer Reliability Engineering.

Related article: How do you eat an elephant? Google SREs talk digital transformation. It's not just about technology: Google Cloud SREs touch on the human and organizational side of a cloud migration.
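The service-level metrics in resource 2 come with some simple arithmetic worth internalizing: an availability SLO implies an error budget, the fraction of requests allowed to fail. As a rough illustration (the function names here are our own, not from any Google library), in Go:

```go
package main

import "fmt"

// ErrorBudget returns the fraction of requests that may fail while
// still meeting an availability SLO (e.g. 0.999 -> 0.001).
func ErrorBudget(slo float64) float64 {
	return 1 - slo
}

// BudgetRemaining reports what fraction of the error budget is left,
// given observed good and total request counts over the SLO window.
func BudgetRemaining(slo, good, total float64) float64 {
	allowedFailures := ErrorBudget(slo) * total
	actualFailures := total - good
	return (allowedFailures - actualFailures) / allowedFailures
}

func main() {
	// A 99.9% availability SLO over 1,000,000 requests allows ~1,000 failures.
	fmt.Printf("budget: %.0f failures\n", ErrorBudget(0.999)*1e6)
	// With 400 failures observed, about 60% of the budget remains.
	fmt.Printf("remaining: %.0f%%\n", BudgetRemaining(0.999, 999600, 1e6)*100)
}
```

When the remaining budget approaches zero, SRE practice is to slow feature releases and prioritize reliability work until the budget recovers.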
Source: Google Cloud Platform

Curious about Google Cloud Bare Metal Solution? Start here.

So you've decided to migrate your business to the cloud. Good call! Many workloads are easy to lift and shift to the cloud, but specialized workloads (such as Oracle) can be difficult to migrate to a cloud environment due to complicated licensing, hardware, and support requirements. Bare Metal Solution provides a path to modernize these applications: you first lift and shift these workloads to Bare Metal Solution so you can exit your data center and stop managing hardware; then you'll be in a great position to modernize your application with Google Cloud. Bare Metal Solution enables an easier, faster migration path while maintaining your existing investments and architecture.

(Bare Metal Solution migration cheat sheet)

How does it work?

Bare Metal Solution provides purpose-built bare metal machines in regional extensions that are connected to Google Cloud by a managed, high-performance connection with a low-latency network fabric. Google Cloud provides and manages the core infrastructure, the network, physical and network security, and hardware monitoring, in an environment from which you can easily access all Google Cloud services.

What does the Bare Metal Solution environment include?

The core infrastructure includes secure, controlled-environment facilities and power. The Bare Metal Solution environment also includes provisioning and maintenance of the sole-tenancy hardware with local SAN, plus smart-hands support. The network, managed by Google Cloud, includes a low-latency Cloud Interconnect connection into your Bare Metal Solution environment. You also have access to other Google Cloud services such as private API access, management tools, support, and billing.

What are you responsible for?

You are only responsible for your software, applications, and data, while Google Cloud handles the support, backup maintenance, monitoring, logging, and security.
You can bring your own license for specialized software such as Oracle.

Conclusion

Now that you know about Bare Metal Solution, you're ready to take the next step toward infrastructure modernization, no matter what specialized workloads you may have. To learn more about Bare Metal Solution, check out the documentation.

Priyanka discusses Google Cloud Bare Metal Solution

For more #GCPSketchnote, follow the GitHub repo. And for similar cloud content, follow me on Twitter @pvergadia and keep an eye on thecloudgirl.dev.

Related article: 5 cheat sheets to help you get started on your Google Cloud journey.
Source: Google Cloud Platform

Go is powering enterprise developers: Developer survey results

Each year, the Go team at Google conducts a developer survey to capture feedback from the Go community and identify trends that shape our work. Go is one of the most popular languages used on Google Cloud, and this year we expanded the survey to include more specific questions around cloud development. We're sharing some of the results here (the full report is available in a separate post) to provide insight into the feedback that informs our commitment to making the experience of building with Go on Google Cloud best-in-class.

Go has become a critical tool in the enterprise

Go is cementing its role as a critical tool in the enterprise, boosting developer productivity and serving as a key component of business success. When it comes to workloads, Go is heavily used in cloud deployments. Among our survey respondents, the most common use of Go was building API/RPC services (74%), followed closely by command-line applications (65%), both tools commonly used by cloud developers. Go users continue to feel satisfied and productive, especially in the enterprise: 92% of enterprise users feel "somewhat" or "very" satisfied, and satisfaction for "using cloud services" with Go is up 14%. Go's effect on productivity is also quite positive, with 81% of enterprise users feeling "very" or "extremely" productive. We also heard that two-thirds (66%) of Go developers feel that Go is critical to their company's success. It's great to hear how users and teams continue to lean on Go for its reliability, simplicity, and speed.

Adoption of Go is getting easier

Go's adoption in the workplace is growing, and it's becoming easier for teams to become productive with Go. On the adoption front, working on an "existing project written in another language" and IT leadership "[preferring] another language" both continued to decrease as reasons why teams don't use Go more often. And following up on the productivity results above, three-quarters of enterprise users become productive with Go in less than three months, with 93% reaching productivity within a year. Results like these show that getting started with Go remains quick and easy, though moving larger teams to Go in the face of existing language preferences, while a declining challenge, is still a point of friction. We'll continue to address these issues by improving our documentation and doing additional work on tooling and support. That work already includes taking over maintenance of the VS Code Go plugin and releasing several improvements, along with constant improvements to our package discovery site, pkg.go.dev. For example, this year's survey showed that 91% of pkg.go.dev users are able to quickly find Go packages and libraries, compared to 82% of those who don't use the site. We're committed to further improving the process of adopting Go, and we believe these results underscore that.

Bringing continuous improvements to Go

The Go team at Google is committed to continually improving the experience of developing with Go. In this year's survey we heard that a sizable portion (~17%) of Go users feel that Go is missing critical features, and among that set, 88% feel that not having generics prevents them from using Go more. The good news is that generics are coming to Go! Earlier this year we shared our proposal for adding generics to Go, and the proposal was recently accepted, marking a huge step toward bringing generics to the language.
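To give a sense of what the accepted proposal enables, here is a minimal sketch of a generic function using the type-parameter syntax from the accepted design (it requires Go 1.18 or later; before generics, this function had to be duplicated per type):

```go
package main

import "fmt"

// Min returns the smaller of two values. The [T int | float64]
// type parameter lets one implementation serve both types.
func Min[T int | float64](a, b T) T {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Min(2, 3))     // works for ints: prints 2
	fmt.Println(Min(1.5, 0.5)) // and for floats: prints 0.5
}
```

The final standard library shape of constraints (such as a shared "ordered" constraint) was still being worked out at the time of the proposal's acceptance; the inline union above is the self-contained form.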
Adding a feature like generics is only possible with constant feedback and collaboration from the community, which is part of what makes being a Go developer so great.

Bringing continuous improvements to Go, whether feature-specific work or our regular bi-annual releases, requires trust from the Go community, and that trust is something we continue to build. The Go community is growing, with developers using Go for more types of projects and larger teams using Go to tackle their biggest challenges. With an increasingly diverse community, it's important to ensure we're helping all users succeed. Fortunately, the trust our users put in us is strong: user confidence in Go's leadership and feeling welcome in the Go community have remained stable over the last few years. This year in particular, we saw a significant increase (up 6%) in users agreeing that Go "leadership understands [their] needs," showing that our work is helping to address more users across the ecosystem. We take this trust seriously and will continue to engage with our users to improve the experience of using Go.

There's more to the story, and more ways to get involved

We've discussed a few key results from this year's Go Developer Survey, particularly as they relate to cloud development and our commitment to improving Go. There are many more details in the complete report. We'll also continue to collect feedback from the Go community, through an increased cadence of surveys and smaller group discussions, particularly around enterprise development. Stay tuned by following Go on Twitter and visiting go.dev to learn how you can get involved.

Related article: Get Go-ing with Cloud Functions: Go 1.11 is now a supported language on Cloud Functions.
Source: Google Cloud Platform

Better protect your web apps and APIs against threats and fraud with Google Cloud

With web applications and public APIs becoming increasingly important to how organizations interface with their customers and partners, many are turning to dedicated tools that can help protect these assets. As research firm Gartner notes in its 2020 report "Defining Cloud Web Application and API Protection Services": "By 2023, more than 30% of public-facing web applications will be protected by cloud web application and API protection (WAAP) services that combine DDoS protection, bot mitigation, API protection and web application firewalls (WAFs). This is an increase from fewer than 10% today."1

Currently, most of these services come as separate point solutions for different types of threats, which leads to gaps in protection and increased acquisition and operational costs. To tackle these challenges, Google Cloud has launched Web App and API Protection (WAAP), a security solution that provides comprehensive threat protection for your web applications and APIs. Google Cloud WAAP is based on the same technology Google uses to protect its public-facing services against web application exploits, DDoS attacks, fraudulent bot activity, and API-targeted threats. It represents a shift from siloed to unified application protection, delivering improved threat prevention, greater operational efficiency, and consolidated visibility and telemetry. It also provides protection across clouds and on-premises environments.

Google Cloud WAAP combines three leading products to provide comprehensive protection against threats and fraud:

Google Cloud Armor, part of Google Cloud's global load balancing infrastructure, provides WAF and anti-DDoS capabilities, protecting applications against the Open Web Application Security Project (OWASP) Top 10, sophisticated application exploits, and both volumetric and layer 7 availability attacks.

Apigee, Google Cloud's API management platform, provides API lifecycle management with a heavy focus on security. It verifies API keys, generates and validates OAuth access tokens, rate-limits traffic, enforces quotas, and provides analytics on API trends.

reCAPTCHA Enterprise provides transparent protection from fraudulent activity, spam, and abuse such as scraping, credential stuffing, automated account creation, and exploits from automated bots.

Google Cloud WAAP solution high-level architecture

"I've seen our customers benefit greatly from each part of Google Cloud WAAP, and now that it's a packaged solution, we can bring a more comprehensive security solution to a broader set of clients much faster," said Miles Ward, CTO of SADA Systems. "SADA is excited to partner with Google to bring this outstanding security solution to our customers' mission-critical projects."

How WAAP is helping customers today

The following two scenarios show how a bank and an airline are using Google Cloud's WAAP solution to address their heightened security needs.

Balancing security requirements with ease of use

A bank is launching a new microservices-based payment app that, due to the application's architecture, exposes several APIs which need to be protected. Three different teams are involved, with different priorities that need to be balanced. Google Cloud's WAAP solution allows these teams to collaborate closely and fulfill their requirements using one solution from one vendor.

Managing the OWASP Top 10 Web Application Security Risks

An airline needs to protect its reservation website from the OWASP Top 10 Web Application Security Risks. Preventing attackers from using leaked or stolen email addresses and passwords to gain unauthorized access (credential stuffing) is a priority.
The airline's APIs are also used by third-party travel sites for making reservations, so the airline needs to manage authentication and authorization for its public APIs. The airline uses the Google Cloud WAAP solution, implementing Cloud Armor as a WAF, Apigee as the API management layer, and reCAPTCHA Enterprise to defend against credential stuffing.

Google Cloud WAAP solution workflow

Let's walk through the workflow of a request with the Google Cloud WAAP solution.

The first point of contact is Cloud Armor, which protects against OWASP Top 10 vulnerabilities such as cross-site scripting (XSS) and SQL injection (SQLi), and also provides protection against L3, L4, and L7 DDoS attacks.

If none of those rules are triggered by the Cloud Armor policies, a request is sent to the reCAPTCHA Enterprise API to evaluate whether the incoming traffic is a legitimate (human) request or an automated bot. If the request is legitimate, it is forwarded to the airline's backend. If not, Cloud Armor can deny the request by sending a 403 response code to the user, or take a more nuanced action, such as redirecting to a different page or forwarding the request to a honeypot.

For API requests, once the Cloud Armor OWASP rules and DDoS protections have been evaluated, the request is forwarded to Apigee to check its validity. Apigee determines whether the API keys or access tokens used in the request are valid and whether the consumer has access to the API. If Apigee determines the request is not legitimate, it serves a 403 response code to the end user; otherwise, it forwards the request to the airline's backend.

For all requests made to the airline's reservation website, the WAAP solution is the first point of contact and can detect and mitigate bad actors at the edge before a request ever reaches the backend.

As more organizations accelerate their digital transformation, and as business processes and commerce rely more on digital interactions, the need for heightened levels of security and protection has risen significantly. Moving to unified application protection like Google Cloud's WAAP solution can help organizations deliver improved threat prevention, greater operational efficiency, and consolidated visibility and telemetry, in record time.

Get started using WAAP today

For more details on how Google Cloud can help with comprehensive web app and API protection, check out our WAAP solution page, watch our on-demand webinar on App Modernization and Protection, and read the Enterprise Strategy Group whitepaper on meeting the challenges of securing modern web applications with WAAP.

1. Gartner, Defining Cloud Web Application and API Protection Services, Jeremy D'Hoinne and Adam Hils, refreshed 20 May 2020.

Related article: Multi-layer API security with Apigee and Google Cloud Armor.
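The layered evaluation described in the workflow above can be sketched as a simple decision function. This is an illustrative model only: the struct fields, the 0.5 score threshold, and the function names are our own assumptions for the sketch, not an actual Google Cloud API.

```go
package main

import "fmt"

// Request models the fields each WAAP layer inspects in this sketch.
type Request struct {
	TriggersWAFRule bool    // e.g. matches an XSS or SQLi signature (Cloud Armor)
	BotScore        float64 // reCAPTCHA Enterprise score; low means likely a bot
	IsAPICall       bool
	ValidAPIKey     bool // checked by Apigee for API traffic
}

// Evaluate walks a request through the layered checks: Cloud Armor WAF
// rules first, then reCAPTCHA bot scoring, then Apigee credential
// validation for API calls. It returns an HTTP-style status code.
func Evaluate(r Request) int {
	if r.TriggersWAFRule {
		return 403 // blocked at the edge by Cloud Armor
	}
	if r.BotScore < 0.5 {
		return 403 // reCAPTCHA judges the traffic fraudulent
	}
	if r.IsAPICall && !r.ValidAPIKey {
		return 403 // Apigee rejects the invalid API credential
	}
	return 200 // forwarded to the backend
}

func main() {
	fmt.Println(Evaluate(Request{BotScore: 0.9}))                      // human web request: 200
	fmt.Println(Evaluate(Request{BotScore: 0.9, IsAPICall: true}))     // API call without a key: 403
	fmt.Println(Evaluate(Request{TriggersWAFRule: true, BotScore: 1})) // WAF match: 403
}
```

The ordering matters: cheap signature checks run first, and a request only reaches the API layer after the edge layers have passed it, which is what keeps bad traffic away from the backend.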
Source: Google Cloud Platform

3 keys to multicloud success you’ll find in Anthos 1.7

Most organizations choose to work with multiple cloud providers, for a host of reasons. In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers. And well you should! It's completely reasonable to use the capabilities of multiple cloud providers to achieve your desired business outcomes.

Beyond simply letting you run apps on-premises and in different clouds, we've noticed that successful multicloud implementations share characteristics that enable higher-level benefits for both developers and operators. To do multicloud right, you need to:

Establish a strong "anchor" to a single cloud provider

Create a consistent operator experience

Standardize software deployment for developers

We recently released Anthos 1.7, our run-anywhere Kubernetes platform that's connected to Google Cloud, delivering an array of capabilities that make multicloud more accessible and sustainable. Let's take a look at how our latest Anthos release tracks to a successful multicloud deployment.

1. Create an anchor in the cloud

Your cloud journey should be anchored to a single cloud. Is that controversial? At Google Cloud, we think that instead of dragging your current state to the desired location, you bring characteristics of your desired state to your current location. And instead of re-creating foundational behaviors in each cloud, you anchor on a single cloud and use those practices everywhere else.

Let's be specific. Cloud Logging is our scalable, high-performing service for infrastructure and application logs. In addition to sending logs from on-premises Anthos environments, you can now send logs and metrics from Anthos on AWS to Cloud Logging and Cloud Monitoring. Use one powerful logging system that all your environments feed into, and retire your on-premises logging infrastructure.

When all your clusters are attached to Google Cloud, you can also simplify management. With the new Connect gateway, you can interact with any cluster, anywhere, all from Google Cloud. Deploy workloads to a cluster on-premises. Read logs from a workload running inside an AWS VPC. By using Google Cloud and Anthos as your multicloud anchor, you can centralize activities and reduce the toil of per-cloud management.

Letting the public cloud manage more things lets you focus on what matters: your software. In this release, we enabled a preview of our managed control plane for Anthos Service Mesh on Google Cloud. This gives you an Istio-powered mesh with the data plane in your cluster, while we scale, patch, and operate the control plane itself. You can even use this for your virtual machine workloads: take advantage of the cloud's innovation and add your Compute Engine workloads into Anthos Service Mesh. The reality is that most enterprise compute resources are still in VMs, and many will remain there for a long time to come. This way, all of your VM-based workloads can have the same mesh functionality as your container-based workloads, even if the operating system is in a Managed Instance Group (MIG). You can also use Anthos Service Mesh to apply Common Vulnerabilities and Exposures (CVE) updates, for better lifecycle management.

2. Create a consistent experience for operators

No multicloud solution can eliminate all per-cloud management for operations teams; there will always be some level of direct management of each cloud. Can we reduce it so that operations teams don't waste so much time on bespoke configurations? Yes, we can. Anthos normalizes a significant portion of your operational effort, regardless of where your Kubernetes cluster resides. And we're working to bring more and more consistency to the Anthos experience on each of its target platforms.
This helps operators learn something once and apply it everywhere.

In Anthos 1.7, we delivered Windows container support for vSphere environments, as well as support for our own container-optimized OS, bringing Anthos to parity with what we offer in GKE on Google Cloud. We also made the CSI driver for vSphere generally available, giving on-premises clusters the same storage volume experience as Google Cloud customers.

Then there's Anthos Config Management (ACM), which delivers a powerful, declarative way to define desired state and keep your environment in that state. That means defining and deploying security policies, reference data, and required agents with source-controlled configuration files. And in Anthos 1.7, we're extending ACM to a wider range of supported cluster types beyond GKE, including EKS 1.19, AKS 1.19, OpenShift 4.6, KIND 0.10, and Rancher 1.2.4. Whether you're deploying GKE clusters with Anthos or attaching existing Kubernetes clusters running in other environments, components like ACM and the Connect gateway give you a consistent operational experience.

3. Establish a secure, familiar deployment target for developers

From what I've observed, the main beneficiaries of multicloud are developers, and by extension the end users of the software those developers create. With multicloud, developers can use the best services from each cloud and run each workload in the right place. The hard part? Creating some level of repeatability across all these environments. How a developer deploys to a hypervisor or container environment on-premises is very different from how they deploy to an app-centric platform in the cloud. There are different requirements for packaging the software, different deployment tools, and different handoffs or automated integrations to expose the application for use. Can we normalize it a bit?
Indeed we can, by creating a consistent dev experience for the inner loop and a standard deployment API for every environment.

To that end, the Google Cloud Code team has added extensions to your favorite IDEs to make it easier to build YAML for use in any Anthos environment. Create standard Kubernetes deployment manifests, a Cloud Build definition, or even a configuration that represents a first-party cloud managed service. And with local emulators for things like Kubernetes and Cloud Run, you can build and test locally before packaging up your software for deployment to Anthos.

Speaking of builds, with the new Connect gateway, you can create Cloud Build definitions that deploy to any Anthos-connected cluster. Cloud Build is a powerful service for packaging and deploying software, and the ability to use it to deploy anywhere is a big deal.

There's more. How should developers securely access cloud services from their apps? You don't want something unique for each environment. In Google Cloud, Workload Identity maps Kubernetes service accounts to IAM accounts so that you never need to stash credentials in the environment. With Anthos 1.7, we've made our Workload Identity capability available on-premises and in AWS.
Just build your apps, and at runtime they can securely talk to managed services with appropriate permissions.

Don't just take our word for it

Multicloud is an idea whose time has come, and the new features and capabilities we're building into Anthos are rapidly translating into industry recognition and successful customer deployments.

When it comes to analyst firms, Forrester recently named Google Cloud a "Leader" in Multicloud Container Development Platforms, citing Anthos' automated cluster lifecycle operations, control plane management, logging, and policy-driven security features.

When it comes to customers, we're working with global enterprises across a number of industries who want to modernize their application portfolios for agility and cost savings. Here are three recent Anthos customers:

Major League Baseball uses Anthos to run applications like Statcast that need to run in the ballpark for best performance and low latency. Anthos on bare metal also makes it easier for them to swap out a server in the event of a hardware failure.

PKO Bank Polski, the largest bank in Poland, uses Anthos to scale its services up dynamically when unexpected peaks occur. Marcin Dzienniak, PKO's Chief Product Owner of the Cloud Center of Excellence, said, "Using Anthos, we'll be able to speed up our development and deliver new services faster."

Finally, the Wellcome Sanger Institute, one of the world's leading centers for genomic science, uses Anthos to improve the stability of its research IT infrastructure. Anthos deployment was quick and easy: the team had JupyterHub, an open-source research collaboration tool, up and running in just five days, including all notebooks and secure researcher access.

With the launch of Anthos 1.7, we hope to keep delivering an exceptional experience for even more Anthos customers.

Next steps

Download the Forrester Total Economic Impact study today to hear directly from enterprise engineering leaders and dive into the economic impact Anthos can deliver for your organization. For a complete guide to using Anthos clusters on AWS, including cluster setup and administration, refer to setting up Anthos on other public clouds. To learn more about Anthos on bare metal, read about one Developer Advocate's experience getting hands-on with it, and then try it yourself in the Anthos Developer Sandbox.

Related article: Introducing the Anthos Developer Sandbox, free with a Google account.
Source: Google Cloud Platform

Part 2: Hackathons aren’t just for programmers anymore

As we discussed in our recently published article, no-code hackathons are a great way to empower line-of-business workers and encourage innovation, but they sometimes require different planning steps than traditional hackathons aimed at coders. In this article, we look at three questions you should ask yourself to refine the event's goals, as well as a four-step planning framework and best practices we've seen adopted by users of AppSheet, Google Cloud's no-code application development platform.

One thing to keep in mind: just like a no-code custom business app, the beauty of a hackathon lies in the fact that each one is a customized effort based on your specific goals. Once you refine your goals, use the framework to build your hackathon. And once you've successfully run your hackathon, you can use that experience to inform the next one.

Three questions to refine your goals for a hackathon

The hackathon will be many employees' introduction to the concept and promise of no-code programs, so before any planning for the event itself can occur, enterprises need a refined sense of what the program should achieve and how it will fit into existing operations.

1. What does innovation look like for your organization?

Innovation can mean a number of different things: removing manual processes, improving efficiency in the field, digitizing workflows, and more. Identifying what kind of innovation you hope to spur is important to introducing a no-code platform, and its potential, to the workforce. That said, it's important not to over-prescribe goals: one benefit of democratizing the tools of innovation is that employees often discover and solve challenges that had previously gone unacknowledged.

2. What types of organizational and governance structures will support the no-code program?

It's obviously powerful to extend app building to more employees, but businesses still need to ensure that no-code efforts don't redundantly overlap with IT projects.
Likewise, they need to avoid Shadow IT problems in which IT lacks visibility into no-code projects, and they need to apply security protections to the corporate digital assets leveraged for no-code apps. All of this means governance and organizational models are important to a successful no-code rollout. In an IT-centralized model, IT teams create nearly all of the applications to address an organization’s needs. In an IT-decentralized program, the broader organization develops the applications within the governance framework provided by IT. Both can encourage non-technical employees to become no-code citizen developers, but they do so in different ways. For example, in an IT-centralized model, business users might use the no-code platforms to build prototypes, the final versions of which are built by IT. In an IT-decentralized model, non-technical employees throughout the organization might build their own solutions according to governance guardrails set by IT.

3. Beyond hackathons, how will you encourage citizen development?

A hackathon is a great way to drive engagement and hands-on learning, but it’s not the only way to inspire the workforce–and even after a successful hackathon, organizations still need educational and community-building resources to maintain momentum. Many such resources are available, such as AppSheet’s Creator Community. It’s important to look for resources that already exist outside the company, but many successful no-code programs have also invested in internal resources, such as recurring office hours with experts or onboarding programs specific to the company’s goals.

Four key steps for planning a successful no-code hackathon

The preceding questions are a starting point for defining the no-code program’s intentions–but planning and holding a hackathon involves a few more steps. This four-part framework offers some best practices to ensure the event is organized based on your organization’s unique needs.

1. Define your objective

This may seem obvious, but an organization is unlikely to succeed if it holds a hackathon just for the sake of holding an event, without clear goals and intentions. With digital fatigue running rampant, employees aren’t always primed to accept new technical solutions–so to cut through the noise and inspire the workforce, it’s critical to define what the event should achieve. The goal might be to align your organization’s goals with actions individual citizen developers or teams of citizen developers can take. It might be identifying manual tasks that could be automated, or potential long-tail solutions. There is no single approach, but successful no-code hackathons often focus either on functionality or on use cases. Hackathons that focus on functionality are often more open-ended, but also more likely to produce novel ideas or call attention to challenges that are familiar to line-of-business workers but unknown to leaders. Hackathons built around a specific use case or challenge can be more targeted, but they can also be overwhelming to new citizen developers who are still learning how the no-code platform lets them harness different kinds of functionality.

For example, let’s say my objective is to focus on removing paper from my company’s business processes. The key is not to tell hackathon participants what kind of app to build, because I don’t know all the issues they might be struggling with. But every app must digitize something that is now being executed manually. That will be my main criterion for any hackathon app. Of course, I will need to develop a list of questions whose answers will ensure that my objective is being met. These questions will also help my new citizen developers formulate an app building plan, and for the most part should focus on the following areas:

Problem. Describe the manual process you want to digitize, the paper you want to remove from your process, and how it applies to the stated objective.

Scope. Describe what areas of business the problem impacts—is it within an individual scope, or a department, or cross-departmental?

Data. Describe the underlying data you want to digitally capture, where it resides today (in a filing cabinet, transposed to a doc or spreadsheet) and where it should reside electronically.

Solution. Describe how you plan on resolving the problem—how is the app you propose to build going to address the problem? What capabilities does the app need? What features on the app building platform will you be using?

Success metrics. How do you measure success? An increase in productivity measured by a decrease in time spent? Reduction of paper measured by the amount digitized?

Recommended best practices: This may seem like a lot of information to collect early in the hackathon process, but it provides a structured way to look at an ongoing problem and resolve it. Most citizen developers are new to app building, and this type of methodology teaches them how to approach an app building project. This type of approach also serves as a way for departments to identify process issues that may be known or unknown—those long-tail applications that provide incremental value and, when taken together, may exceed the value of larger, more complex applications. Finally, consider building an app on your no-code platform to collect this information. This signals two things: an overall commitment to digitization, as well as a commitment to the platform as the digitization vehicle.

2. Cultivate executive buy-in

Hackathons are usually more successful when company leaders advocate for them. Employees may be uncertain about new technology platforms and tools if the intentions behind and benefits of those resources have not been clearly communicated. Similarly, employees accustomed to existing processes may not adopt new ones if incentives and goals have not been refreshed.
These efforts require executive leadership, and when businesses successfully navigate large shifts in technology, the CTO or even CEO is often involved.

Recommended best practices: Successful hackathons rely on participation and support from the leadership team as well as IT and line-of-business management. This participation indicates that citizen development is part of the corporate culture and has the support of the entire company. One member may be designated as the owner, or multiple members may be given roles—the more active the support, the more likely it is for employees to get involved. The leadership team should be actively involved in promoting all aspects of the hackathon:

Encourage participation. Promote the hackathon throughout the company via all forms of communication, including email, company and LOB meetings, newsletters, collaboration apps like Google Chat, Slack, or Microsoft Teams, etc.

Acknowledge milestones. Provide snapshots of the hackathon as it progresses, including the number of teams participating, number of departments, number of apps being built, first app to be tested, etc. Along with normal forms of communication, consider setting up a hackathon community page and actively posting to it.

Formalize the effort. Host a company-wide event to acknowledge and reward participants. Perhaps the hackathon is a success-metrics contest where the winners and runners-up receive a prize, but make sure that all participants are recognized for their valuable contributions.

3. Anticipate hackathon participants’ needs

Whether you’re a professional or a citizen developer, the principles of application development are the same. While no-code application development requires significantly less technical knowledge than traditional application development, it still requires training, both during and outside of a hackathon.
The more you anticipate, and provide, the necessary training, the more successful your hackathon will be.

Recommended best practices: We’ve seen successful programs that bring in outside experts to guide non-technical workers, others that appoint internal experts, and others still that focus on self-learning modules–and some that combine all three. All of these approaches are generally more productive when combined with community-building efforts. For example, throughout last year, our Creator Community created incredible apps and made them available to fellow community members. The COVID-19 Community Support App is one such great example, and the app has now been translated into over 100 languages by creators around the world. Similarly, we saw a number of citizen developers rally around each other during a global hackathon to build COVID-19 support apps, relying on self-guided resources and community support. We’ve also seen companies such as Globe Telecom invest not only in hackathons but also in making experts available for weekly office hours. Many organizations have successfully built citizen developer ecosystems by creating spaces where questions can be posed and discussed. Programs that include specific examples of no-code apps are usually more powerful.

In designing training programs, it can be easy to overlook that hackathon participants need assurances that it is okay to make mistakes–which reinforces the importance of governance. It can also be easy to neglect the data management training that can help citizen developers more easily adopt a no-code platform. Measures of success are another important training component. Are no-code apps meant to be deployed across the entire organization, to be used by select teams, or to serve as foundations for future proposals? Is a no-code app’s success quantified by hours saved working on a process, the number of users for a specific app, or some other metric?

4. Ensure you don’t miss the final ingredients

There are three components, often overlooked, that will determine the overall success of a hackathon:

Make it fun. Gamification can be a terrific way to generate enthusiasm. When an organization incentivizes budding citizen developers with recognition or awards, interest in the no-code program often skyrockets.

Build awareness. No hackathon will succeed if employees aren’t aware of it, so organizers and the leadership team need to get people excited by marketing the event.

Galvanize participation. Will hackathon participants operate as individuals or as members of a team? This decision will likely depend on a number of factors specific to your company, with the end goal being more employee participation.

Recommended best practices: Gamifying a hackathon can not only generate enthusiasm but also serve as a way to attract more participants. But don’t just focus on the awards for the winners and runners-up; think of ways to make it fun for all participants: for example, hackathon T-shirts, trophies, or plaques for everyone, or fun award categories for best app idea, first deployed app, etc.

With or without prizes, no hackathon will succeed if employees aren’t aware of it, so organizers and the leadership team need to get people excited by building awareness through marketing the event. The promotions could range from newsletter blurbs to advertisements on internal sites to references during all-hands meetings–but whatever the venue, this is your chance to communicate both the ways no-code can be empowering and any competitions or rewards that the event will entail.

Additionally, you can neither devise gamification strategies nor advertise the event without determining whether hackathon participants will work individually or in teams. Which approach works best may depend on a range of factors, such as whether the organization uses an IT-centralized or IT-decentralized model.
But generally, individual participants tend to focus on specific goals, whereas teams, by virtue of including more perspectives, can promote community and exploration. If you choose to go with teams, we recommend soliciting cross-departmental teams (teams drawn from different departments), as this is likely to diversify the solutions and conversations that the hackathon facilitates. Finally, don’t forget the closing ceremonies! Whether it’s a live judging panel or simply the distribution of awards, find a way to celebrate everyone who participated. This goes a long way toward helping the program gain traction.

Keep the momentum going!

With the preceding questions, framework, and best practices, you’re well on your way to holding a no-code hackathon. Congratulations! But what comes next? Here are four ways to keep the momentum going:

Hold a post-mortem with your team. What are your main takeaways from this event? Should you host another event in the future? Are there any additional goals you need to consider for future events? What would you change for the next one?

Measure your success. Did you meet your objective? Was participation in line with expectations? Did you require a wait list for the event, or is the entire organization now trained? What areas can you focus on for your next hackathon?

Double down on citizen development across your organization. After you’ve held your first hackathon event, there will be buzz around all that was accomplished. Those who were not able to participate in the first event will likely want to attend the next. If you haven’t already marked your calendar for the next one, determine what date makes the most sense.

Keep a channel open for support. One of the most exciting parts about no-code hackathons is that citizen developers feel empowered to tackle new and bigger problems. In order to achieve this, they need outlets for support.
We highly recommend either creating an internal channel for conversation, such as a citizen developer chat room, or pointing them toward a citizen developer community. We hope you will take this framework and convert it into something that not only meets the unique needs of your organization, but also helps propel you toward a more innovative future. Whether this is just the beginning of your organization’s citizen development journey or the next step in an ongoing digital transformation, the possibilities are endless, and we’re excited to see what you build. Click here to learn more about AppSheet, and jumpstart your no-code journey with our library of sample apps.

Related Article: Part 1: Hackathons aren’t just for programmers anymore. How no-code hackathons encourage citizen developers to innovate.
Source: Google Cloud Platform

Earning customer trust through a pandemic: delivering our 2020 CCAG pooled audit

At Google Cloud, we work closely with customers who want to assess and verify the security of our platform. Take as an example our recent collaboration with the Collaborative Cloud Audit Group (CCAG). As our customers increased their use of cloud services to meet the demands of teleworking and aid in COVID-19 recovery, we’ve worked hard to meet our commitment to being the industry’s most trusted cloud, despite the global pandemic. That’s why we are proud to announce that Google Cloud completed an annual pooled audit with the CCAG in a completely remote setting, and was the only cloud service provider to do so in 2020.

The CCAG is a syndicate of 39 leading European financial institutions and insurance companies who depend on cloud infrastructure and technologies to deliver innovative solutions and experiences for their customers. For these institutions, managing the risks associated with outsourcing material workloads and satisfying strict national and EU regulatory obligations is of critical importance. Carrying out cloud audits at scale is resource intensive, and CCAG members exercise their audit rights by combining the audit scope and fieldwork into one unified annual engagement. Pooled audits of cloud service providers, as stipulated in the European Banking Authority’s Guidelines on outsourcing arrangements, help streamline the audit process and decrease the organisational burden on both the CCAG members and their providers, like Google.

Hamidou Dia, vice president for Solutions Engineering at Google Cloud, whose team spearheaded the audit, reflected on how initiatives such as pooled audits can bolster customer trust: “The financial services industry is rapidly changing to meet rising customer expectations and growing regulatory compliance requirements,” Dia said.
“We offer verifiable transparency to our customers, so they can confidently and securely leverage Google’s innovative cloud technologies to digitally transform their business and the industry as a whole. We are pleased to partner with CCAG, who are emerging as global leaders in setting the framework for efficient and effective pooled audit assessments.”

The COVID-19 pandemic required CCAG and Google to re-imagine the 2020 audit process, which is traditionally performed via onsite meetings and inspections. We instead relied on the security and collaboration capabilities of Google Drive and Google Meet to store and access evidence exhibits, and to meet with subject matter experts. During each phase of the approximately six-month engagement period, the audit teams worked openly and transparently through both offline and interactive sessions to validate Google Cloud’s policies, processes, and technologies.

“This is the first time we worked completely remotely, and we all learned a lot. We were able to complete the audit fieldwork, and Google offered CCAG extensive transparency into their processes and live systems,” said Christina Hepp, divisional head IT, Operations & Sourcing Group Audit, Commerzbank. “Regulators consider a cloud provider’s controls as part of our internal control system and expect us to audit these as such. We were able to verify documentation, review samples, and interview subject matter experts to reasonably satisfy the CCAG participating members’ individual risk assessments.”

Our annual pooled audits provide the necessary risk assessments and assurances for CCAG members to accelerate their digitization efforts and journey to the cloud. To help build that trust, we must provide verifiable transparency and remove challenges to security and compliance. We are committed to being a dedicated digital transformation partner and will continue to evolve with our customers to meet their regulatory obligations.
To learn more about Google Cloud Trust & Compliance, visit our Compliance resource center.

Cloud Spanner launches customer-managed encryption keys and Access Approval

Cloud Spanner is Google Cloud’s fully managed relational database that offers unlimited scale, high performance, strong consistency across regions and high availability (up to a 99.999% availability SLA). In addition, enterprises trust Spanner because it provides security, transparency and complete data protection to its customers. To give enterprises greater control of how their data is secured, Spanner recently launched customer-managed encryption keys (CMEK). CMEK enables customers to manage encryption keys in Cloud Key Management Service (Cloud KMS).

From a security standpoint, Spanner already offers, by default, encryption for data in transit via its client libraries and for data at rest using Google-managed encryption keys. Customers in regulated industries such as financial services, healthcare and life sciences, and telecommunications need control of the encryption keys to meet their compliance requirements. With the launch of CMEK support for Spanner, you now have complete control of the encryption keys and can run workloads that require the highest level of security and compliance. You can also protect database backups with CMEK. Spanner also provides VPC Service Controls support and has the compliance certifications and necessary approvals so that it can be used for workloads requiring ISO 27001, 27017, 27018, PCI DSS, SOC 1/2/3, HIPAA and FedRAMP.

Spanner integrates with Cloud KMS to offer CMEK support, enabling you to generate, use, rotate, and destroy cryptographic keys in Cloud KMS. Customers who need an increased level of security can choose to use hardware-protected encryption keys, and can host encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 validated hardware security modules (HSMs). CMEK capability in Spanner is available in all Spanner regions and select multi-regions that support KMS and HSM.

How to use CMEK with Spanner

To use CMEK for a Spanner database, users should specify the KMS key at the time of database creation.
The key must be in the same location as the Spanner instance (regional or multi-regional). Spanner is able to access the key on the user’s behalf after the user grants the Cloud KMS CryptoKey Encrypter/Decrypter role to a Google-managed Cloud Spanner service account. Once a database with CMEK is created, access to it via APIs, DDL and DML is the same as for a database using Google-managed encryption keys. You can see the details of the encryption type and encryption key on the database overview page. Spanner calls KMS in each zone of an instance configuration about every five minutes to ensure that the key for the Spanner database is still valid. Customers can audit the requests Spanner makes to KMS on their behalf in the Logs Viewer if they enable logging for the Cloud KMS API in their project.

Access Approval support for Spanner

In addition to security controls, customers need complete visibility and control over how their data is used. Customers today use Cloud Spanner audit logs to record the admin and data access activities for members in their Google Cloud organization, whereas they enable Access Transparency logs to record the actions taken by Google personnel. Access Transparency provides near real-time logs to customers, in which Google support and engineering personnel log a business justification (including a reference to support tickets in some scenarios) for any access to a customer’s data. Expanding on this, Spanner has launched support for Access Approval in Preview. With Access Approval in Spanner, a customer blocks administrative access to their data by Google personnel, who must obtain the customer’s explicit approval to proceed. Hence, this is an additional layer of control on top of the transparency provided by Access Transparency logs. Access Approval also provides a historical view of all requests that were approved, dismissed, or expired.
To use Access Approval, customers have to first enable Access Transparency from the console for their organization; Access Approval can then be enabled from the console as well. With Access Approval, users will receive an email or Pub/Sub message with an access request that they are able to approve. Using the information in the message, they can use the Google Cloud Console or the Access Approval API to approve the access.

Learn more

Spanner bills a CMEK-enabled database the same as any other Spanner database. Customers are billed for Cloud KMS use (for the cost of the key and for cryptographic operations) whenever Spanner uses the key for encryption/decryption. We expect this cost to be minimal; see KMS pricing for details. To learn more about CMEK, see the documentation. To get started with Spanner, create an instance or try it out with a Spanner Qwiklab.

Related Article: Cloud Spanner launches point-in-time-recovery capability. Check out Cloud Spanner’s new point-in-time recovery (PITR) capability, offering continuous data protection when you configure the databa…
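To make the creation-time requirements described above concrete (the KMS key is specified when the database is created, and the key must live in the same location as the Spanner instance configuration), here is a minimal sketch in Python. The helper functions and the simple string check are our own illustration, not part of the Cloud Spanner client library; only the resource-name format follows Cloud KMS conventions.

```python
# Illustrative sketch only: these helpers are our own simplification of the
# CMEK requirements described above; consult the Cloud Spanner CMEK
# documentation for the authoritative API surface.

def kms_key_name(project: str, location: str, keyring: str, key: str) -> str:
    """Build the fully qualified Cloud KMS key resource name."""
    return (f"projects/{project}/locations/{location}"
            f"/keyRings/{keyring}/cryptoKeys/{key}")

def check_key_location(key_name: str, instance_config_location: str) -> None:
    """CMEK keys must live in the same location as the Spanner instance."""
    location = key_name.split("/locations/")[1].split("/")[0]
    if location != instance_config_location:
        raise ValueError(
            f"KMS key location {location!r} does not match the Spanner "
            f"instance location {instance_config_location!r}")

key = kms_key_name("my-project", "us-central1", "spanner-ring", "spanner-key")
check_key_location(key, "us-central1")  # same location: passes silently
print(key)
```

In a real deployment, the key name built this way is what you would pass to the database-creation request, and the location check is enforced by the service itself rather than client-side code.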

Customers cut document processing time and costs with DocAI solutions, now generally available

Some of the most important data at your company isn’t living in databases, but in documents, and most business processes begin, involve or end with a document. Yet most companies are still manually entering data and relying on guesswork to make sense of it all as the volume and variety of data explodes. Organizations are also leaving heaps of value on the table in the form of new and better customer experiences that can be unlocked with artificial intelligence (AI) applied to documents. The latest releases of Document AI (DocAI) platform, Lending DocAI and Procurement DocAI, built on decades of AI innovation at Google, bring powerful and useful solutions to these challenges. Under the hood are Google’s industry-leading technologies:

Computer vision (including OCR) and Natural Language Processing (NLP) that create pre-trained models for high-value, high-volume documents.

Google Knowledge Graph to validate and enhance the fields in your documents.

Training and creation of your own custom document models.

Human interaction with AI to ensure accuracy where needed.

Google Cloud DocAI platform, Lending DocAI and Procurement DocAI are now generally available. Thousands of customers have tried these products in the preview phase—and DocAI has already processed tens of billions of pages of documents across lending, insurance, government and other industries.

Cut document processing costs by up to 60%

Lending DocAI helps banks, mortgage brokers and other lending institutions fast-track the loan application process from weeks to days, dramatically reducing the cost of issuing a loan. And Procurement DocAI enables companies to automate procurement data capture at scale, lowering processing costs by up to 60%. These solutions are built on the DocAI platform, a unified console for document processing that lets you quickly access all parsers and tools. From the platform, you can automate and validate documents to streamline workflows, reduce guesswork, and keep data accurate and compliant.
Get more value from AI with DocAI’s industry-specific solutions

According to Accenture’s AI: Built to Scale report: “Companies that scale successfully see 3x the return on their AI investments compared to those who have not fully rolled out AI capabilities.” Core to our strategy at Google Cloud is the creation of industry-specific solutions that help companies get maximum value out of their investments in AI.

We announced Lending DocAI, our first solution designed specifically for the financial services industry, at the Mortgage Bankers Association convention last year. It processes borrowers’ income and asset documents using a set of specialized machine learning (ML) models, and automates routine document reviews so that mortgage providers can focus on more important work. Lending DocAI is now generally available and includes more specialized parsers for critical loan documents, including paystubs, bank statements, and more. Our goal is to provide the right tools to help borrowers and lenders have a better experience and close home loans faster. For more, watch this video.

Procurement DocAI is also now generally available. This solution helps companies accelerate document processing for invoices, receipts, and other valuable documents in the procurement cycle. Automating data capture is helping our customers increase accuracy and also lower their procure-to-pay processing costs. We are continually expanding the types of documents Procurement DocAI can process—the latest is a utility parser for electric, water and other bills. In addition, Procurement DocAI leverages Google Knowledge Graph to validate and enrich parsed information to make the data even more useful. Check out this overview video for more details.

One company that lives and breathes AI-enabled document management is AODocs.
It uses Procurement DocAI to simplify invoice processing for enterprise customers and launched a new Gmail add-on, Invoice to Sheet, for SMB customers who just want to track their invoices in Google Sheets.

“Google Cloud’s Procurement DocAI service allows our document management platform to better automate the processing of invoices; AODocs customers who have tested our new accounts payable workflow estimate that the productivity of their A/P team has more than doubled, thanks to the reduction of manual data input brought by Procurement DocAI.” —Stéphan Donzé, Founder and CEO, AODocs

The new specialized parsers for Lending and Procurement DocAI can be used alongside our existing AutoML Text & Document Classification and AutoML Document Extraction services. These technologies provide a state-of-the-art toolset for creating new document models and have been widely deployed by customers in financial services and other industries.

Partner to accelerate your AI deployment and results

Having the right partner to ease the complexity of rolling out your AI strategy in mortgage document processing is critical to transforming your customers’ experience. We’re excited to announce a partnership with Mr. Cooper, a leader in mortgage servicing, to provide customers with more automation and workflow tools throughout their entire mortgage life cycle. As part of this agreement, both companies will collaborate on digitizing Mr. Cooper’s core mortgage platform, creating a more personal customer experience utilizing AI, and driving a broader culture of innovation to imagine and develop services and solutions that will transform the mortgage experience for American homeowners.

“Over the last few years, we have made substantial investments in our servicing technology and core mortgage platform that have revolutionized the customer experience, while providing dramatic efficiencies in operating cost.
Our partnership with Google Cloud AI will build on those advances and help make these technologies available for the mortgage industry.” —Jay Bray, Chairman and CEO, Mr. Cooper Group

This builds upon the robust partner ecosystem we’re creating to help customers revolutionize the home loan experience, which includes last year’s partnership announcement with Roostify.

Integrate human review into ML predictions

Next up is the general availability of Human-in-the-Loop AI, a new DocAI feature that will help companies achieve higher document processing accuracy with the assurance of human review. Adding human review can increase accuracy and help businesses interpret predictions using purpose-built tools to enable those reviews. Processing documents quickly and cost-effectively is important, but it’s often necessary to have a high level of assurance on data accuracy for compliance. CIOs and IT decision-makers need highly accurate ML predictions to fulfill compliance requirements, improve employee experience (e.g., less rework), and raise customer satisfaction (e.g., fewer data errors). Including human participation in ML processes allows AI and humans to work together for the best possible results.

Human-in-the-Loop AI provides the workflow to manage human review tasks and produces a confidence score of how “sure” the AI is that it ingested the document correctly. Document AI extracts data from documents with ML, and when paired with Human-in-the-Loop AI, human reviewers are able to verify the data captured. This system is customizable, providing the flexibility to set different thresholds and assign individual groups of reviewers to various stages of the workflow.
With Human-in-the-Loop AI, developers can choose trusted reviewers to assign to the task; these reviewers can be from within their own or partner organizations.

More Document AI resources

To learn more, check out the Document AI webpage and watch a demo of how to process sample forms in AI Platform notebooks to inspect data extraction and confidence scores. For more on how customers and partners like Workday, AODocs, and Mr. Cooper are using Document AI, listen to our fireside chat. And stay tuned for the exciting evolution of these technologies in future releases of DocAI.
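The threshold-based review flow described above can be sketched in a few lines of Python. This is a minimal illustration of the routing idea only; the field names, the 0.85 threshold, and the function are our own assumptions, not the Document AI API (in practice, thresholds and reviewer assignments are configured in the Human-in-the-Loop workflow itself).

```python
# Illustrative sketch of confidence-threshold routing, as described above.
# The 0.85 threshold and the invoice fields are our own assumptions and
# are not part of the actual Document AI API.

REVIEW_THRESHOLD = 0.85  # fields below this confidence go to a human reviewer

def route_fields(extracted_fields):
    """Split extracted fields into auto-accepted vs. human-review queues."""
    accepted, needs_review = {}, {}
    for name, (value, confidence) in extracted_fields.items():
        if confidence >= REVIEW_THRESHOLD:
            accepted[name] = value
        else:
            needs_review[name] = value
    return accepted, needs_review

# Example: an invoice parse where one field is low-confidence.
fields = {
    "invoice_id": ("INV-1042", 0.99),
    "total_amount": ("$1,280.00", 0.97),
    "due_date": ("2021-05-01", 0.62),  # ambiguous scan: send to a reviewer
}
accepted, needs_review = route_fields(fields)
print(needs_review)  # {'due_date': '2021-05-01'}
```

Raising or lowering the threshold trades review cost against assurance: a compliance-sensitive pipeline might route anything under 0.95 to a reviewer, while a low-risk one might accept almost everything automatically.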

New Redis Enterprise for Anthos and GKE

Among forward-looking developers, the open-source Redis in-memory data structure store is a popular option for anyone looking for a database, cache, and message broker. At Google Cloud Next 2019, Redis Labs, the home of Redis, announced Redis Enterprise, a fully managed Database-as-a-Service (DBaaS) running on Google Cloud. This week at RedisConf 2021, we are building on that collaboration to bring Redis Enterprise for Anthos and Google Kubernetes Engine (GKE) onto the Google Cloud Marketplace in private preview. This brings a self-managed Redis Enterprise solution to Google Cloud customers who need to run co-located apps and services in container clusters.

Anthos is built on the foundational elements of GKE. It provides a managed hybrid and multi-cloud platform for deploying, managing, and scaling containerized applications on Google Cloud, on-premises, on AWS and soon on Azure. This enables enterprise customers with heterogeneous (on-prem and cloud) environments to seamlessly orchestrate their application estate across a broad range of deployment topologies. The addition of new Kubernetes-based data services from Redis Labs makes it easier to couple apps and data services together, allowing both to operate from a global control plane with unified billing through Google Cloud.

“As we worked to deliver a more customized and tailored experience for our customers, we needed a solution that allowed us to scale quickly with low overhead and low maintenance,” said Avneendra Arun, IT Director, Belk. “With GKE and Redis Enterprise we have a flexible, cost-effective solution that has been a wonderful combination, both quick to deploy and easy to maintain.”

To learn more about Redis Enterprise with Anthos on GKE, please see the Redis Labs press release or contact the Redis team.