The role of the Orchestrator in GitOps-driven operations

Typically, when a new technology or pattern emerges, teams take a variety of approaches to work out how it best fits into a transformed operating model. Proponents of traditional methods sometimes meet initial resistance from eager early adopters, who may discard lessons learned from foundational practices.

The tension between established orchestration approaches and GitOps methodologies illustrates this inflection point.
Source: CloudForms

Built-in Duotone Image Filter, Editor Navigation via Persistent List View, and Other Block Editor Improvements

The next batch of exciting updates to the block editor is live on WordPress.com. Powerful duotone image editing, a persistent list view to edit your page or post, and an update for picking table colors are all ready for you to build and improve the look of your site. Let’s take a closer look.

Built-in duotone image filter

Duotone is a much anticipated feature that allows you to choose between any two colors for an image’s shadows and highlights. Think black and white photos, but in any color combo of your choosing. What was once the power of standalone image-editing software is now at your fingertips in the block editor.

Establish a consistent tone across your site’s images or draw your audience’s eye with a catchy color combo for a big post.

Try it out: add an Image or Cover block, select “Apply duotone filter” from the floating block menu, and experiment with different color choices.

Editor navigation via persistent List View

Level up your editing game by selecting the List View in the top menu bar—this displays the entire block structure of your page or post. It’s essentially a Table of Contents of all the blocks currently in use. List View is now persistent, meaning it will remain expanded in the sidebar as you edit.

This update will vastly improve the navigation of complex content and the editing of nested blocks.

And there’s a bonus! Anchors added to blocks are also shown in this List View for easy identification.

Customize your Table colors

Background and text colors are easily edited within the Table block for a more consistent feel across your site.

Keep building, keep exploring

Your feedback is crucial to expanding the block editor’s capabilities, so keep it coming. Watch here for more updates, and in the meantime, go forth and create!
Source: RedHat Stack

Are we there yet? Thoughts on assessing an SRE team’s maturity

One facet of our work as Customer Reliability Engineers—Google Site Reliability Engineers (SREs) tapped to help Google Cloud customers develop that practice in their own organizations—is advising operations or SRE teams to improve their operational maturity. We’ve noticed a recurring question cropping up across many of these discussions, usually phrased along the lines of “is what we’re currently doing ‘SRE work’?” or, with a little more existential dread, “can we call ourselves SREs yet?”

We’ve answered this question before with a list of practices from the SRE workbook. But the list is long on the what and short on the why, which can make it hard to digest for folks already suffering an identity crisis. Instead, we hope to help answer this question by discussing some principles we consider fundamental to how an SRE team operates. We’ll examine why they’re important and suggest questions that characterize a team’s progress towards embodying them.

Are we there yet?

This question is asked in different ways, for a myriad of different reasons, and it can be quite hard to answer due to the wide range of circumstances that our customers operate in. Moreover, CRE, and Google in general, is not the final arbiter of what is and isn’t “SRE” for your organization, so we can’t provide an authoritative answer, if one even exists. We can only influence you and the community at large by expressing our opinions and experiences, in person or via our books and blog posts. Further, discussions of this topic tend to be complicated by the fact that the term “SRE” is used interchangeably to mean three things:

- A job role primarily focused on maintaining the reliability of a service or product.
- A group of people working within an organization, usually in the above job role.
- A set of principles and practices that the above people can utilize to improve service reliability.

When people ask “can we call ourselves SREs yet?” we could interpret it as a desire to link these three definitions together. A clearer restatement of this interpretation might be: “Is our group sufficiently advanced in our application of the principles and practices that we can justifiably term our job role SRE?” We should stress that we’re not saying that you need a clearly defined job role—or even a team—before you can start utilizing the principles and practices to do things that are recognizably SRE-like. Job roles and teams crystallize from a more fluid set of responsibilities as organizations grow larger. But as this process plays out, the people involved may feel less certain of the scope of their responsibilities, precipitating the ‘are we there yet?’ question. We suspect that’s where the tone of existential dread comes from…

Key SRE indicators

Within the CRE team here at Google Cloud, the ‘are we there yet?’ question surfaced a wide variety of opinions about the core principles that should guide an SRE team. We did manage to reach a rough consensus, with one proviso—the answer is partially dependent on how a team engages with the services it supports.

We’ve chosen to structure this post around a set of principles that we would broadly expect groups of people working as SREs that directly support services in production to adhere to.
As in a litmus test, this won’t provide pinpoint accuracy; but in our collective opinion at least, alignment with most of the principles laid out below is a good signal that a team is practicing something that can recognizably be termed Site Reliability Engineering.

Directly engaged SRE teams are usually considered Accountable (in RACI terms) for the service’s reliability, with Responsibility shared between the SRE and development teams. As a team provides less direct support, these indicators may be less applicable. We hope those teams can still adapt the principles to their own circumstances. To illustrate how you might do this, for each principle we’ve given a counter-example of a team of SREs operating in an advisory capacity. They’re subject-matter experts who are Consulted by development teams who are themselves Responsible and Accountable for service reliability.

Wherever your engagement model lies on the spectrum, being perceived by the rest of the organization as jointly responsible for a service’s reliability, or as reliability subject-matter experts, is a key indicator of SRE-hood.

Principle #1: SREs mitigate present and future incidents

This principle is the one that usually underlies the perception of responsibility and accountability for a service’s reliability. All the careful engineering and active regulation in the world can’t guarantee reliability, especially in complex distributed systems—sometimes, things go wrong unexpectedly and the only thing left to do is react, mitigate, and fix. SREs have both the authority and the technical capability to act fast to restore service in these situations.

But mitigating the immediate problem isn’t enough. If it can happen again tomorrow, then tomorrow isn’t better than today, so SREs should work to understand the precipitating factors of incidents and propose changes that remediate the entire class of problem in the infrastructure they are responsible for. Don’t have the same outage again next month!

How unique are your outages? Ask yourself these questions:

- Can you mitigate the majority of incidents without needing specialist knowledge from the development team?
- Do you maintain training materials and practice incident response scenarios?
- After a major outage happens to your service, are you a key participant in blamelessly figuring out what really went wrong, and how to prevent future outages?

Now, for a counter-example. In many organizations, SREs are a scarce resource and may add more value by developing platforms and best practices to uplift large swathes of the company, rather than being primarily focused on incident response. Thus, a consulting SRE team would probably not be directly involved in mitigating most incidents, though they may be called on to coordinate incident response for a widespread outage. Rather than authoring training materials and postmortems, they would be responsible for reviewing those created by the teams they advise.

Principle #2: SREs actively regulate service reliability

Reliability goals and feedback signals are fundamental for both motivating SRE work and influencing the prioritization of development work. At Google, we call our reliability goals Service Level Objectives and our feedback signals Error Budgets, and you can read more about how we use them in the Site Reliability Workbook.

Do your reliability signals affect your organization’s priorities? Ask yourself these questions:

- Do you agree on goals for the reliability of the services you support with your organization, and track performance against those goals in real time?
- Do you have an established feedback loop that moderates the behaviour of the organization based on recent service reliability?
- Do you have the influence to effect change within the organization in pursuit of the reliability goals?
- Do you have the agency to refuse, or negotiate looser goals, when asked to make changes that may cause a service to miss its current reliability goals?

Each question builds on the last. It is almost impossible to establish a data-driven feedback loop without a well-defined and measured reliability goal. For those goals to be meaningful, SREs must have the capability to defend them. Periods of lower service reliability should result in consequences that temporarily reduce the aggregate risk of future production changes and shift engineering priorities towards reliability. When it comes down to a choice between service reliability and the rollout of new but unreliable features, SREs need to be able to say “no”. This should be a data-driven decision—when there’s not enough spare error budget, there needs to be a valid business reason for making users unhappy. Sometimes, of course, there will be, and this can be accommodated with new, lower SLO targets that reflect the relaxed reliability requirements.

Consultant SREs, in contrast, help teams draft their reliability goals and may develop shared monitoring infrastructure for measuring them across the organization. They are the de facto regulators of the reliability feedback loop and maintain the policy documents that underpin it. Their connection to many teams and services gives them broader insights that can spark cross-functional reliability improvements.
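To make the error-budget arithmetic behind this principle concrete, here is a minimal, hypothetical sketch in Python (not part of the original post; the SLO target and request counts are invented examples):

    # Hypothetical example: how much of a request-based error budget is left?
    SLO_TARGET = 0.999  # assume a 99.9% success-rate SLO for this period

    def error_budget_remaining(good_requests: int, total_requests: int) -> float:
        """Fraction of the error budget still unspent; negative means overspent."""
        allowed_failures = (1 - SLO_TARGET) * total_requests
        actual_failures = total_requests - good_requests
        return 1 - actual_failures / allowed_failures

    # 10,000,000 requests, 9,993,000 successful: 7,000 of the 10,000 allowed
    # failures are used, so 30% of the budget remains.
    print(error_budget_remaining(9_993_000, 10_000_000))  # 0.3

A feedback loop like the one described above might, for example, pause risky rollouts as this value approaches zero.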
Principle #3: SREs engage early and comprehensively

As we said earlier, SREs should be empowered to make tomorrow better than today. Without the ability to change the code and configuration of the services they support, they cannot fix problems as they encounter them. Involving SREs earlier in the design process can head off common reliability anti-patterns that are costly to correct post-facto. And, with the ability to influence architectural decision making, SREs can drive convergence across an organization so that work to increase the reliability of one service can benefit the entire company.

Is your team actively working to make tomorrow better than today? Ask yourself these questions, which go from fine detail to a broad, high-level scope:

- Do you engineer your service now to improve its reliability, e.g. by viewing and modifying the source code and/or configuration?
- Are you involved in analysis and design of future iterations of your service, providing a lens on reliability/operability/maintainability?
- Can you influence your organization’s wider architectural decision making?

Advising other teams naturally shifts priorities away from directly modifying the configuration or code of individual services. But consultant SREs may still maintain frameworks or shared libraries providing core reliability features, like exporting common metrics or graceful service degradation. Their breadth of engagements across many teams makes them naturally suited for providing high-level architectural advice to improve reliability across an entire organization.

Principle #4: SREs automate anything repetitive

Finally, SREs believe that computers are fundamentally better suited to doing repetitive work than humans are. People often underestimate the returns on investment when considering whether to automate a routine task, and that’s before factoring in the exponential growth curve that comes with running a large, successful service. Moreover, computers never become inattentive and make mistakes when doing the same task for the hundredth time, or become demoralized and quit. Hiring or training SREs is expensive and time-consuming, so a successful SRE organization depends heavily on making computers do the grunt work.

Are you sufficiently automating your work? Ask yourself these questions:

- Do you use—or create—automation and other tools to ensure that operational load won’t scale linearly with organic growth or the number of services you support?
- Do you try to measure repetitive work on your team, and reduce it over time?

A call for reflection

Most blog posts end with a call for action. We’d rather you took time to reflect, instead of jumping up to make changes straight after reading. There’s a risk, when writing an opinionated piece like this, that the lines drawn in the sand are used to divide, not to grow and improve. We promise this isn’t a deliberate effort to gatekeep SRE and exclude those who don’t tick the boxes; we see no value in that. But in some ways gatekeeping is what job roles are designed to do, because specialization and the division of labour is critical to the success of any organization, and this makes it hard to avoid drawing those lines.

For those who aspire to call themselves SREs, or are concerned that others may disagree with their characterization of themselves as SREs, perhaps these opinions can assuage some of that existential dread.
Source: Google Cloud Platform

Cloud SQL extends PostgreSQL 9.6 version support beyond end-of-life

Our mission with Cloud SQL for PostgreSQL is compatibility; we want our users to be able to run their applications unchanged while using Cloud SQL for PostgreSQL as their relational database management system (RDBMS). The final release for PostgreSQL 9.6 is slated for November 11th, 2021, so this is a good time to consider upgrading to a more recent version of PostgreSQL. Cloud SQL for PostgreSQL strives to maintain compatibility with the latest releases and currently supports versions 10, 11, 12, and 13; you can find more information in the release notes. We realize, however, that your timelines may not allow you to upgrade before the final release date for 9.6, and there may be concerns about supportability moving forward.

For these reasons, Cloud SQL will continue to support the final release of PostgreSQL 9.6 for at least one year after the general availability of in-place major version upgrades for Cloud SQL for PostgreSQL. The final update for PostgreSQL 9.6 will come in November 2021; Cloud SQL will make it available shortly after and support it as is. While we won’t be able to provide patches, we will provide platform-level support. We hope this gives you peace of mind as you contemplate your upgrade options.

To provide simple upgrade paths, we are working on features like Cloud SQL-to-Cloud SQL support in Database Migration Service (DMS) and in-place major version upgrades for Cloud SQL for PostgreSQL. Read more about PostgreSQL versions on Cloud SQL here.
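As a starting point for upgrade planning, it can help to confirm which major version each instance is actually running. The snippet below is a minimal, hypothetical sketch (connection details are placeholders, and any PostgreSQL client library would work equally well):

    # Hypothetical check of the running PostgreSQL version before an upgrade.
    import psycopg2

    conn = psycopg2.connect(
        host="10.0.0.3",      # placeholder: the Cloud SQL instance's IP
        dbname="postgres",
        user="postgres",
        password="example",   # placeholder credential
    )
    with conn, conn.cursor() as cur:
        cur.execute("SHOW server_version;")
        print(cur.fetchone()[0])  # e.g. "9.6.24" on an instance still on 9.6

Instances reporting a 9.6 version are the ones to prioritize when planning a move to 10, 11, 12, or 13.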
Source: Google Cloud Platform

Learn Cloud Functions in a snap!

Cloud Functions is a fully managed, event-driven, serverless function-as-a-service (FaaS) offering: a small piece of code that runs in response to an event. Because it is fully managed, developers can just write the code and deploy it without worrying about managing servers or scaling up and down with traffic spikes. It is also fully integrated with Cloud Operations for observability and diagnosis. Cloud Functions is based on an open source FaaS framework, which makes it easy to migrate and debug locally.

To use Cloud Functions, write the logic in any of the supported languages (Go, Python, Java, Node.js, PHP, Ruby, .NET), deploy it using the console, the API, or the Cloud SDK, and then trigger it, for example via an HTTP(S) request, file uploads to Cloud Storage, events in Pub/Sub or Firebase, or even a direct call via the command-line interface (CLI). There is a generous free tier; pricing is based on the number of events, compute time, memory, and ingress/egress requests, and an idle function costs nothing. For security, Identity and Access Management (IAM) lets you define which services or personnel can access the function, and VPC controls let you define network-based access.

Cloud Functions use cases

Some Cloud Functions use cases include:

- Integration with third-party services and APIs
- Asynchronous workloads like lightweight ETL
- Lightweight APIs and webhooks
- IoT processing and updates of sensors/devices in the field
- Real-time file processing, such as media transcoding or resizing as soon as a file is uploaded to Google Cloud Storage
- Real-time ML solutions, such as media translation or image recognition for files uploaded to GCS
- Backends for chat applications and mobile apps

Firebase Functions and Cloud Functions, are they different?

If you are a Firebase developer, you’d probably use Firebase Functions, which are created from the Firebase dashboard/website. Both Cloud Functions and Firebase Functions can do the same things; they just have slightly different signatures and slightly different ways of deploying. Firebase Functions have a local emulator, while Cloud Functions uses the Functions Framework.

For a more in-depth look into Cloud Functions, check out the documentation. Once you’ve got your function up and running, check out some tips and tricks. For more #GCPSketchnote, follow the GitHub repo. For similar cloud content follow me on Twitter @pvergadia and keep an eye out on thecloudgirl.dev.
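To make the write, deploy, and trigger flow described above concrete, here is a minimal, hypothetical sketch of an HTTP-triggered function written in Python with the open source Functions Framework (the function name and runtime flag are illustrative):

    # main.py: a minimal HTTP-triggered Cloud Function (hypothetical example).
    import functions_framework

    @functions_framework.http
    def hello_http(request):
        # 'request' is a Flask request object; read an optional query parameter.
        name = request.args.get("name", "World")
        return f"Hello, {name}!"

    # Deploy, for example, with the Cloud SDK:
    #   gcloud functions deploy hello_http --runtime=python310 \
    #       --trigger-http --allow-unauthenticated

Because the Functions Framework is just a library, the same file can also be run locally for debugging before it is deployed.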
Source: Google Cloud Platform

AWS announces general availability of AWS Wavelength in London

Today we are announcing the general availability of AWS Wavelength on the Vodafone 4G/5G network in London. Independent Software Vendors (ISVs), enterprises, and developers can now use the AWS Wavelength Zone in London to build ultra-low-latency applications for mobile devices and users in the United Kingdom.
Source: aws.amazon.com