Here’s a Look at Our Favorite New Patterns

Since we launched Patterns in 2020, we’ve been steadily adding to our library of prebuilt Block templates for you to easily make your site stand out even more. There are now over 260 Patterns that can be inserted into your pages and posts in just seconds.

If you’ve never used Patterns before, you can access them by hitting the “+” button at the top left of any page or post you’re working on. From there, you can do a few things:

- Use the search box to search for a term like “Header,” “Subscription,” or “Link in Bio” and select from the results.
- Or click on the “Patterns” tab and use the drop-down menu to explore the top results across various categories.
- Or click on the “Explore” button to bring up our entire library of Patterns, organized by category.

Here’s a quick demo that shows how to add an image gallery using the new Pattern explorer:

We’re always adding more Patterns, month by month — we’ve added over 45 new ones since July! — and we can’t wait for you to see some of the fun designs coming up. Think of them as an ever-growing library of sophisticated slices of web design you can customize and add to your posts, pages, and Block themes.

Below is a quick look at some of our favorites from the year so far.

If you use a lot of images in your posts, we have numerous options for you. There are some great Patterns for galleries, portfolios, and even product listings.

You can find these in the “Gallery” category.

The adage “good things come in small packages” holds true with some of our smaller Patterns that take up less real estate. It all depends on the nature of your website, but a Pattern for donations on a non-profit site or a call to subscribe on almost any blog can make a great end for a post or page.

Find these in the “Earn” and “Subscribe” categories, respectively.

And finally, simply exploring some of our bolder Patterns can provide inspiration on working with color, type, and images on your site!

Patterns are an incredibly useful resource in your website design toolbox. Customize, experiment, and take advantage of them whenever you can.

If you need help with Patterns, click here for a more detailed guide.

And be sure to let us know in the comments how you’ve used Patterns on your site and any ideas you have for new ones. We’re always working on more!
Source: RedHat Stack

How Google Cloud blocked the largest Layer 7 DDoS attack at 46 million rps

Over the past few years, Google has observed that distributed denial-of-service (DDoS) attacks are increasing in frequency and growing in size exponentially. Today’s internet-facing workloads are at constant risk of attack, with impacts ranging from degraded performance and user experience for legitimate users, to increased operating and hosting costs, to full unavailability of mission-critical workloads. Google Cloud customers are able to use Cloud Armor to leverage the global scale and capacity of Google’s network edge to protect their environment from some of the largest DDoS attacks ever seen.

On June 1, a Google Cloud Armor customer was targeted with a series of HTTPS DDoS attacks which peaked at 46 million requests per second (rps). This is the largest Layer 7 DDoS attack reported to date, at least 76% larger than the previously reported record. To give a sense of the scale of the attack, that is like receiving all the daily requests to Wikipedia (one of the top 10 trafficked websites in the world) in just 10 seconds.

Cloud Armor Adaptive Protection was able to detect and analyze the traffic early in the attack lifecycle. Cloud Armor alerted the customer with a recommended protective rule, which was then deployed before the attack ramped up to its full magnitude. Cloud Armor blocked the attack, ensuring the customer’s service stayed online and continued serving their end users.

Figure 1: DDoS attack graph peaking at 46M requests per second.

What happened: Attack analysis and timeline

Starting around 9:45 a.m. PT on June 1, 2022, an attack of more than 10,000 rps began targeting our customer’s HTTP/S Load Balancer. Eight minutes later, the attack grew to 100,000 rps. Cloud Armor Adaptive Protection detected the attack and generated an alert containing the attack signature by assessing the traffic across several dozen features and attributes. The alert included a recommended rule to block on the malicious signature.
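For a sense of the numbers involved, here is a quick back-of-the-envelope check. This is only a sketch: the Wikipedia figure is implied by the post's comparison rather than stated directly.

```python
# Back-of-the-envelope checks on the attack figures quoted above.
# All inputs come from the post; derived values are approximations.

peak_rps = 46_000_000          # peak of the June 1 attack
prior_record_factor = 1.76     # "at least 76% larger than the previously reported record"

# Implied size of the previous record-holder.
prior_record_rps = peak_rps / prior_record_factor
print(f"Implied previous record: ~{prior_record_rps / 1e6:.1f}M rps")

# "All the daily requests to Wikipedia ... in just 10 seconds" implies
# Wikipedia serves roughly peak_rps * 10 requests per day.
wikipedia_daily = peak_rps * 10
print(f"Implied Wikipedia daily requests: ~{wikipedia_daily / 1e6:.0f}M")

# Ramp: from 100,000 rps to the 46M rps peak in about two minutes.
ramp_factor = peak_rps / 100_000
print(f"Ramp in final two minutes: {ramp_factor:.0f}x")
```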
The following is the alert showing details of the attack before it ramped to its peak.

Figure 2: Cloud Armor Adaptive Protection alert listing the top region codes detected as a part of the attack.

Our customer’s network security team deployed the Cloud Armor-recommended rule into their security policy, and it immediately started blocking the attack traffic. In the two minutes that followed, the attack began to ramp up, growing from 100,000 rps to a peak of 46 million rps. Since Cloud Armor was already blocking the attack traffic, the target workload continued to operate normally. Over the next few minutes, the attack started to decrease in size, ultimately ending 69 minutes later at 10:54 a.m. The attacker presumably determined they were not having the desired impact while incurring significant expenses to execute the attack.

Analyzing the attack

In addition to its unexpectedly high volume of traffic, the attack had other noteworthy characteristics. There were 5,256 source IPs from 132 countries contributing to the attack. As you can see in Figure 2 above, the top four countries contributed approximately 31% of the total attack traffic. The attack leveraged encrypted requests (HTTPS), which would have taken added computing resources to generate. Although terminating the encryption was necessary to inspect the traffic and effectively mitigate the attack, the use of HTTP pipelining required Google to complete relatively few TLS handshakes. Approximately 22% (1,169) of the source IPs corresponded to Tor exit nodes, although the request volume coming from those nodes represented just 3% of the attack traffic.
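The traffic-source breakdown above can be sanity-checked in a few lines, using only the figures quoted in the post:

```python
# Rough recomputation of the traffic-source breakdown described above.
total_ips = 5_256       # source IPs contributing to the attack
tor_ips = 1_169         # source IPs that were Tor exit nodes
peak_rps = 46_000_000   # attack peak

tor_share_of_ips = tor_ips / total_ips   # ~22% of source IPs
tor_rps = 0.03 * peak_rps                # 3% of peak traffic
print(f"Tor exit nodes: {tor_share_of_ips:.0%} of IPs, ~{tor_rps / 1e6:.2f}M rps")

# Top 4 of 132 countries contributed ~31% of the attack traffic.
top4_rps = 0.31 * peak_rps
print(f"Top 4 countries at peak: ~{top4_rps / 1e6:.1f}M rps")
```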
While we believe Tor participation in the attack was incidental due to the nature of the vulnerable services, even at 3% of the peak (greater than 1.3 million rps) our analysis shows that Tor exit nodes can send a significant amount of unwelcome traffic to web applications and services.

The geographic distribution and types of unsecured services leveraged to generate the attack match the Mēris family of attacks. Known for its massive attacks that have broken DDoS records, the Mēris method abuses unsecured proxies to obfuscate the true origin of the attacks.

How we stopped the attack

The attack was stopped at the edge of Google’s network, with the malicious requests blocked upstream from the customer’s application. Before the attack started, the customer had already configured Adaptive Protection in their relevant Cloud Armor security policy to learn and establish a baseline model of the normal traffic patterns for their service. As a result, Adaptive Protection was able to detect the DDoS attack early in its life cycle, analyze its incoming traffic, and generate an alert with a recommended protective rule, all before the attack ramped up.

The customer acted on the alert by deploying the recommended rule, leveraging Cloud Armor’s recently launched rate-limiting capability to throttle the attack traffic. They chose the “throttle” action over a “deny” action in order to reduce the chance of impact on legitimate traffic, while severely limiting the attack by dropping most of the attack volume at Google’s network edge. Before deploying the rule in enforcement mode, it was first deployed in preview mode, which enabled the customer to validate that only the unwelcome traffic would be denied while legitimate users could continue accessing the service. As the attack ramped up to its 46 million rps peak, the Cloud Armor-suggested rule was already in place to block the bulk of the attack and ensure the targeted applications and services remained available.
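A throttle rule of the kind described above can be sketched with the gcloud CLI. This is an illustrative configuration only: the policy name, priority, and threshold values are placeholders, not the customer's actual settings, and the flags should be checked against the current Cloud Armor documentation.

```shell
# Sketch only: policy name, priority, and thresholds are hypothetical.
# 1) Create the rate-limiting rule in preview mode first, so matches
#    are logged but not enforced, and legitimate traffic can be validated.
gcloud compute security-policies rules create 1000 \
    --security-policy=example-policy \
    --src-ip-ranges="*" \
    --action=throttle \
    --rate-limit-threshold-count=500 \
    --rate-limit-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny-429 \
    --enforce-on-key=IP \
    --preview

# 2) Once validated, switch the rule to enforcement mode.
gcloud compute security-policies rules update 1000 \
    --security-policy=example-policy \
    --no-preview
```

Preview mode is what let the customer confirm that only unwelcome traffic would be denied before the rule began enforcing.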
Protecting your applications in the cloud

Attack sizes will continue to grow and tactics will continue to evolve. To be prepared, Google recommends a defense-in-depth strategy: deploy defenses and controls at multiple layers of your environment and your infrastructure providers’ network to protect your web applications and services from targeted web attacks. This strategy includes performing threat modeling to understand your applications’ attack surfaces, developing proactive and reactive strategies to protect them, and architecting your applications with sufficient capacity to manage unanticipated increases in traffic volume. With Google Cloud Armor, you are able to protect your internet-facing applications at the edge of Google’s network and absorb unwelcome traffic far upstream from your applications.
Source: Google Cloud Platform

This engineering manager has spent 15+ years across Google — here’s how she leads through empowerment

Editor’s note: Since joining Google in 2007, Carrie Bell has gone from working on search ads to managing multiple teams of highly skilled engineers. That may seem unusual, but she says there’s a common theme across it all: empowerment.

What was your path to Google?

I joined the Army Reserve out of high school, and after a year of training, began attending Marquette University as an English major. That was interrupted by the decision to invade Iraq, where I then spent a year. I came back and took a heavy course load to graduate in 2006. Google had just opened an office in Ann Arbor, and they were interested in hiring people with unusual backgrounds who were capable of learning and growing with the company.

I spent a lot of time in customer support and sales. People don’t realize how great that experience can be! A customer support person picks up a phone, and they don’t know who the customer is, what their problem is, or what kind of day they are having, and 90%-95% of the time they make that customer happy. It’s a huge deal.

In 2013, I started working in Privacy and began working directly with more engineers with technical skills. Many of my peers were new to Google, and I could help them understand things like promotion and job calibration, and how Google works. As my coworkers saw me stepping up, they put me forward as a manager.

Most English majors don’t aspire to manage teams of software engineers.

I have a non-traditional career path, true. Some opportunities came from taking advantage of what Google offered. Whenever I didn’t feel like I was growing, or I wasn’t well aligned with my manager, I could always find something new. I am highly cognizant that I don’t have my team’s engineering expertise, and I never will. I bring other skills I put a lot of time and effort into: aligning people to a common purpose and aligning people with each other.
I want the vision and mission of a team to arise from the collective genius of the group and for it to be owned by everyone, so that even the most junior person on a team knows that what they are doing is important.

How does that make things stronger?

Every team, in every part of the company, needs a mission, a vision, and a strategy. Often you get a brilliant leader who drives the vision. If that leader moves on to another role, it makes the team feel like they don’t know what to do. That’s bad for everyone, and it’s a lot less likely to happen if everyone has had a role in creating the mission, vision, and strategy.

Caring for human beings has always been a core part of my leadership philosophy. As a manager, I’ve found that if I take care of my team, my team will have my back when it comes to our business priorities.

Do you think cloud technology has the kind of opportunity you saw 15 years ago?

Yes. Serverless computing, where I’m working now, really excites me for how it can transform a small business. The same way search ads helped little businesses find customers and grow quickly, serverless can abstract away a lot of complex issues and be a democratizing force for smaller players. We’re still working on the vision, but that feels right.
Source: Google Cloud Platform

SUSE Linux Enterprise Server (SLES) with 24/7 support – now available with Committed Use Discounts

Optimizing your costs is a major priority for Google Cloud. We do this with products that deliver a great combination of price and performance, recommendations that help you right-size your deployment, and by offering the right pricing models. Committed use discounts are one such model, offering deep discounts in exchange for committing to usage for a defined period.

Today, we are excited to announce the general availability of committed use discounts (“CUDs”) for SUSE Linux Enterprise Server (“SLES”) with 24/7 support. CUDs are a very effective way of saving on your cloud costs when you have some predictability in your workloads. Now you can take advantage of the same savings for SLES licenses. SLES CUDs can save you as much as 79% on license costs compared to pay-as-you-go prices.

SUSE was our first partner to offer software license committed use discounts, and our close collaboration with SUSE enabled the expansion of these CUD offerings. This is an important step in helping our partners and customers grow their business on Google Cloud.

Here is what Frank Powell, President of Managecore, a Google Cloud partner, had to say about this offering: “We are excited to join Google and SUSE on this new CUD offering for SLES. This will enable our joint customers to accelerate their workload migrations to cloud, from proprietary to more open source solutions. At the same time, it allows them to leverage the maximum discounts, providing choice and flexibility to run their modern workloads on Google Cloud.”

Manish Patil, SUSE Sr. Director for Global Cloud Alliances, said: “Expansion of our CUD offering for SLES in addition to SLES for SAP is another exceptional joint innovative offering for customers resulting in more savings, choice, and elasticity in terms of running more stable and predictable workloads securely on Google Cloud.”

How do committed use discounts work for SLES?

SLES CUDs are region-specific, similar to how SLES for SAP CUDs work today.
Therefore, you will need to buy commitments in the same region as the instances consuming these licenses. When you purchase SLES commitments, they form a “pool” of licenses that automatically apply to your running VM instances within a selected project in a specified region. Commitments can also be shared across projects within the same billing account by turning on billing account sharing. Discounts apply to any active VMs, so the commitment is not tied to any particular VM.

When commitments expire, your running VMs continue to run at on-demand rates. However, it is important to note that after you purchase a commitment, it is not editable or cancelable. You must pay the agreed-upon monthly amount for the duration of the commitment. Refer to Purchasing commitments for licenses for more information.

How can I purchase committed use discounts for SLES?

SLES CUDs can be purchased on a one-year or three-year contract, and each is priced according to your virtual machine vCPU counts. After purchasing, you will be billed monthly for the commitments regardless of your usage.

* Price as of this article’s publish date

How much can I save by using committed use discounts for SLES?

By purchasing SLES committed use discounts, you can save as much as 79% on SLES license costs compared to the current pay-as-you-go prices. Here is a helpful comparison of discounts possible on CUDs relative to pay-as-you-go prices:

* Approximate effective hourly price as of blog publish date, calculated using VMs running 730 hours per month, 12 months per year.
** Discounts compared to current pay-as-you-go pricing.

What if I need to upgrade my SLES version after purchasing a commitment?

SLES CUDs are version-agnostic and are not affected when you perform OS upgrades or downgrades. For example, if you purchased a commitment for SLES 12, you may upgrade to SLES 15 and continue to use the same commitment without any action from your end.
Additionally, commitments are not affected by future pricing changes to the pay-as-you-go prices for Compute Engine resources. You can find more information on purchasing CUDs for SLES in our CUDs documentation.

We are always looking to diversify our offerings to help customers optimize costs on Google Cloud. We hope this helps you find the most cost-optimal plan for your SUSE Linux Enterprise Server deployment needs.
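As a rough illustration of how the discount math works, here is a minimal sketch. The hourly rate below is hypothetical, since the post's actual pricing table is not reproduced here; only the 730 hours/month convention and the 79% maximum discount come from the post.

```python
# Illustrative CUD savings math. The pay-as-you-go rate is a made-up
# placeholder; the 730 hours/month and 79% discount figures are from the post.

HOURS_PER_MONTH = 730
payg_hourly = 0.11     # hypothetical pay-as-you-go SLES license rate per VM
discount = 0.79        # maximum stated CUD discount

cud_effective_hourly = payg_hourly * (1 - discount)

payg_monthly = payg_hourly * HOURS_PER_MONTH
cud_monthly = cud_effective_hourly * HOURS_PER_MONTH
print(f"Pay-as-you-go: ${payg_monthly:.2f}/month")
print(f"With CUD:      ${cud_monthly:.2f}/month ({discount:.0%} saved)")
```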
Source: Google Cloud Platform

Dive deep into NAT gateway’s SNAT port behavior

In our last blog, we examined how network address translation (NAT) gateway mitigates connection failures to the same destination endpoint with its randomized source network address translation (SNAT) port selection and reuse timers. In addition to handling these scenarios, NAT gateway’s unique SNAT port allocation benefits dynamic, scaling workloads connecting to several different destination endpoints over the internet. In this blog, let’s dive deep into the key aspects of NAT gateway’s SNAT port behavior that make it the preferred solution for different outbound scenarios in Azure.

Why SNAT ports are important to outbound connectivity

If you work in the cloud, you will likely encounter internet connection failures at some point. One of the most common causes is SNAT port exhaustion, which happens when the source endpoint of a connection runs out of SNAT ports to make new connections over the internet.

Source endpoints use ports through a process called SNAT, which allows destination endpoints to identify where traffic was sent and where to send return traffic. NAT gateway SNATs the private IPs and ports of virtual machines (VMs) within a subnet to NAT gateway’s public IP address and ports before connecting outbound, and in turn provides a scalable and secure means to connect outbound.

Figure 1: Source network address translation by NAT gateway: connections going to the same destination endpoint over the internet are differentiated by the use of different source ports.

With each new connection to the same destination IP and port, a new source port is used. A new source port is necessary so that each connection can be distinguished from one another. SNAT port exhaustion is an all too easy issue to encounter with recurring connections going to the same destination endpoint since a different source port must be used for each new connection.
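The constraint above can be sketched in a few lines of Python. This is a toy model, not Azure's implementation: it simply shows that concurrent flows to one destination each consume a distinct source port, and that the pool runs dry once every port is taken.

```python
# Toy model of SNAT port exhaustion: each concurrent flow to the same
# (destination IP, destination port) must use a distinct source port.

def open_flows(num_connections, available_ports):
    """Assign one SNAT port per concurrent flow to the same destination."""
    if num_connections > len(available_ports):
        raise RuntimeError("SNAT port exhaustion: no free source ports")
    # Map each assigned source port to the (illustrative) destination.
    return {port: ("203.0.113.10", 443)
            for port in available_ports[:num_connections]}

ports = list(range(1024, 1034))   # a tiny 10-port pool for illustration
flows = open_flows(10, ports)     # fine: exactly fills the pool
print(len(flows), "concurrent flows to the same destination")

try:
    open_flows(11, ports)         # one more concurrent flow fails
except RuntimeError as e:
    print(e)
```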

How NAT gateway allocates SNAT ports

NAT gateway solves the problem of SNAT port exhaustion by providing a dynamic pool of SNAT ports, consumable by all virtual machines in its associated subnets. Customers don’t need to know the traffic patterns of their individual virtual machines, since ports are not preallocated to each virtual machine in fixed amounts. By providing SNAT ports on demand to virtual machines, the risk of SNAT exhaustion is significantly reduced, which in turn helps prevent connection failures.

Figure 2: SNAT ports are allocated on-demand by NAT gateway, which alleviates the risk of SNAT port exhaustion. 

Customers can ensure that they have enough SNAT ports for connecting outbound by scaling their NAT gateway with public IP addresses. Each NAT gateway public IP address provides 64,512 SNAT ports, and NAT gateway can scale to use up to 16 public IP addresses. This means that NAT gateway can provide over one million SNAT ports for connecting outbound.
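The capacity figures quoted above multiply out as follows:

```python
# Checking the SNAT capacity figures quoted above.
PORTS_PER_IP = 64_512   # SNAT ports per NAT gateway public IP address
MAX_IPS = 16            # maximum public IP addresses per NAT gateway

total_ports = PORTS_PER_IP * MAX_IPS
print(f"Maximum SNAT ports per NAT gateway: {total_ports:,}")

assert total_ports > 1_000_000   # "over one million SNAT ports"
```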

How NAT gateway selects and reuses SNAT ports

Another key component of NAT gateway’s SNAT port behavior that helps prevent outbound connectivity failures is how it selects SNAT ports. Whether connecting to the same or different destination endpoints over the internet, NAT gateway selects a SNAT port at random from its available inventory.

Figure 3: NAT gateway randomly selects SNAT ports from its available inventory to make new outbound connections.

A SNAT port can be reused to connect to the same destination endpoint. However, before doing so, NAT gateway places a reuse cooldown timer on that port after the initial connection closes.

NAT gateway’s SNAT port reuse cooldown timer helps prevent ports from being selected too quickly for connecting to the same destination endpoint. This is advantageous when destination endpoints have their own source port reuse cooldown timers in place.

Figure 4: SNAT port 111 is released and placed in a cooldown period before it can connect to the same destination endpoint again. In the meantime, port 106 (dotted outline) is selected at random from the available inventory of ports to connect to the destination endpoint. The destination endpoint has a firewall with its own source port cooldown timer. There is no issue getting past the on-premises destination’s firewall, since the connection from source port 106 is new.

What happens, then, when all SNAT ports are in use? When NAT gateway cannot find any available SNAT ports to make new outbound connections, it can reuse a SNAT port that is currently in use, so long as the new connection goes to a different destination endpoint than the port’s existing connections. This behavior benefits any customer making outbound connections to multiple destination endpoints with NAT gateway.

Figure 5: When all SNAT ports are in use, NAT gateway can reuse a SNAT port to connect outbound so long as the port actively in use goes to a different destination endpoint. Ports in use by destination 1 are shown in blue. Port connecting to destination 2 is shown in yellow. Port 111 is yellow with a blue outline to show it is connected to destinations 1 and 2 simultaneously.
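A toy allocator captures the selection and reuse rules described above. This is an illustrative sketch, not Azure's implementation: ports are picked at random from the free inventory, and an in-use port can be reused only for a different destination, so every (port, destination) flow tuple stays unique.

```python
import random

# Toy SNAT allocator modeling the reuse rule described above.
class SnatAllocator:
    def __init__(self, ports):
        self.ports = list(ports)
        self.flows = {}   # port -> set of destinations currently using it

    def allocate(self, destination):
        free = [p for p in self.ports if p not in self.flows]
        if free:
            # Normal case: random selection from the free inventory.
            port = random.choice(free)
        else:
            # Pool exhausted: reuse an in-use port, but only one that is
            # not already serving this destination.
            candidates = [p for p, dests in self.flows.items()
                          if destination not in dests]
            if not candidates:
                raise RuntimeError("No SNAT port available for this destination")
            port = random.choice(candidates)
        self.flows.setdefault(port, set()).add(destination)
        return port

alloc = SnatAllocator(ports=range(100, 103))   # tiny 3-port pool
for _ in range(3):
    alloc.allocate("dest-1:443")               # fills the pool
port = alloc.allocate("dest-2:443")            # reuses a port for a new destination
print(f"Port {port} now serves two destinations")
```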

What have we learned about NAT gateway’s SNAT port behavior?

In this blog, we explored how NAT gateway allocates, selects, and reuses SNAT ports for connecting outbound. To summarize:

Function | NAT gateway SNAT port behavior | Benefit
---------|--------------------------------|--------
SNAT port capacity | 64,512 SNAT ports per public IP address; up to 16 public IP addresses per gateway | Easy to scale for large and variable workloads
SNAT port allocation | Dynamic and on-demand | Great for flexible, unknown, and large-scale workloads
SNAT port selection | Randomized | Reduces risk of connection failures to the same destination endpoint
SNAT port reuse | Reuse to a different destination: connect outbound immediately; reuse to the same destination: set on a cooldown timer | Reduces risk of connection failures to destination endpoints with source port reuse cooldown timers

Deploy NAT gateway today

Whether your outbound scenario requires you to make many connections to the same or to several different destination endpoints, NAT gateway provides a highly scalable and reliable way to make these connections over the internet. See the NAT gateway SNAT behavior article to learn more.

NAT gateway is easy to use and can be deployed to your virtual network in just a few clicks. Deploy NAT gateway today, and follow along with the guide Create a NAT gateway using the Azure portal.
Source: Azure