K3S with MetalLB on Multipass VMs

blog.kubernauts.io – Last update: May 22nd, 2020. The repo has been renamed to Bonsai :-) https://github.com/arashkaffamanesh/bonsai k3s from Rancher Labs recently surpassed 10k stars on GitHub from the Kommunity during K…
Source: news.kubernauts.io

Edge Computing

openstack.org – Defining common architectures for edge solutions is a complicated challenge in itself, but it is only the beginning of the journey. The next step is to be able to deploy and test the solution to veri…
Source: news.kubernauts.io

Enjoy a Smoother Experience with the Updated Block Editor

Little details make a big difference. The latest block editor improvements incorporate some common feedback you’ve shared with us and make the editing experience even more intuitive than before.

We’ve also updated the categories we use to organize blocks, so you can find exactly what you need, fast. Read on to learn about recent changes you’ll notice next time you open the editor.

Move on quickly after citations and captions

Have you ever felt as if you were stuck inside a block after adding a citation? Now, when you hit Enter or Return at the end of the citation, you’ll be ready to start typing in a new text block.

Quotes were a bit sticky…

Much smoother now!

Quotes, images, embeds, and other blocks now offer this smoother experience. It’s a small change that will save you a little bit of time, but those seconds add up, and less frustration is priceless.

Streamlined heading selection

Another subtle-yet-helpful change we’ve introduced is simplified heading levels. Before, the block toolbar included a few limited options with additional ones in the sidebar. Now, you can find all available heading levels right in the block toolbar, and adjust the heading directly from the block you’re working on. (For even more simplicity, we’ve also removed the dropdown in the sidebar.)

Select a parent block with ease

Working with nested blocks to create advanced page layouts is now considerably smoother. Some users told us it was too difficult to select a parent block, so we’ve added an easier way to find it right from the toolbar. Now it’s a breeze to make picture-perfect layouts!

Filter your latest posts by author

Sites and blogs with multiple authors will love this update: you can now choose a specific author to feature in the Latest Posts block.

To highlight recent articles from a particular writer, just select their name in the block’s settings.

Renamed block categories

Finally, the next time you click the + symbol to add a new block, you’ll notice new, intuitive block categories that make it both easier and faster to find just the block you’re looking for.

What’s new:

Text
Media
Design

What’s gone:

Common
Formatting
Layout

You keep building, we’ll keep improving

Thank you for all your input on how the block editor can be better! We’re listening. If you have more ideas, leave a comment below.

Happy editing!
Source: RedHat Stack

Father’s Day present of the past: 30 years of family videos in an AI archive

My dad got his first video camera the day I was born nearly three decades ago, which also happened to be Father’s Day. “Say hello to the camera!” are the first words he caught on tape, as he pointed it at a red, puffy baby (me) in a hospital bassinet. The clips got more embarrassing from there, as he continued to film through many a diaper change, temper tantrum, and—worst of all—puberty.

Most of those potential blackmail tokens sat trapped on miniDV tapes or scattered across SD cards until two years ago when my dad uploaded them all to Google Drive. Theoretically, since they were now stored in the cloud, my family and I could watch them whenever we wanted. But with more than 456 hours of footage, watching it all would have been a herculean effort. You can only watch old family friends open Christmas gifts so many times. So this year, for Father’s Day, I decided to build my dad an AI-powered searchable archive of our family videos.

If you’ve ever used Google Photos, you’ve seen the power of using AI to search and organize images and videos. The app uses machine learning to identify people and pets, as well as objects and text in images. So, if I search “pool” in the Google Photos app, it’ll show me all the pictures and videos I ever took of pools.

The Photos app is a great way to index photos and videos in a hurry, but as a developer (just like my dad), I wanted to get my hands dirty and build my own custom video archive. In addition to doing some very custom file processing, I wanted to add the ability to search my videos by things people said (the transcripts) rather than just what’s shown on camera, a feature the Photos app doesn’t currently support. This way, I could search using my family’s lingo (“skutch” for someone who’s being a pain) and for phrases like “first word” or “first steps” or “whoops.” Plus, my dad is a privacy nut who’d never give his fingerprint for fingerprint unlock, and I wanted to make sure I understood where all of our sensitive family video data was being stored and have concrete privacy guarantees. So, I built my archive on Google Cloud. Here’s what it looked like:

Building a searchable, indexed video archive is fun for personal projects, but it’s useful in the business world, too. Companies can use this technology to automatically generate metadata for large video datasets, caption and translate clips, or quickly search brand and creative assets. So how do you build an AI-powered video archive? Let’s take a look.

How to build an AI-powered video archive

The main workhorse of this project was the Video Intelligence API, a tool that can:

Transcribe audio (i.e. “automatic subtitles”)
Recognize objects (i.e. plane, beach, snow, bicycle, cake, wedding)
Extract text (i.e. on street signs, T-shirts, banners, and posters)
Detect shot changes
Flag explicit content

My colleague Zack Akil built a fantastic demo showing off all these features, which you can check out here.

Making videos searchable

I used the Video Intelligence API in a couple of different ways. First, and most importantly, I used it to pull out features I could later use for search. For example, the audio transcription feature allowed me to find the video of my first steps by pulling out this cute quote:
“All right, this is one of Dale’s First Steps. Even we have it on camera. Let’s see. What are you playing with Dale?” (This is the word-for-word transcription output from the Video Intelligence API.)

The object recognition feature, powered by computer vision, recognized entities like “bridal shower,” “wedding,” “bat and ball games,” “baby,” and “performance,” which were great sentimental searchable attributes.

And the text extraction feature let me search videos by text featured on the screen, so I could search for writing on signs, posters, t-shirts, and even birthday cakes. That’s how I was able to find both my brother’s and my first birthdays:

The Video Intelligence API read the writing right off of our cakes!

Splitting long videos and extracting the dates

One of the most challenging parts of this project was dealing with all the different file types from all the different cameras my dad has owned over the years. His digital camera produced mostly small video clips with the date stored in the filename (i.e. clip-2007-12-31 22;44;51.mp4). But before 2001, he used a camera that wrote video to miniDV tapes. When he digitized it, all the clips got slammed together into one big, two-hour file per tape. The clips contained no information about when they were filmed, unless my dad chose to manually hit a button that showed a date marker on the screen:

Happily, the Video Intelligence API was able to solve both of these problems. Automatic shot change detection recognized where one video ended and another began, even though they were mashed up into one long MOV file, so I was able to automatically split the long clips into smaller chunks. The API also extracted the dates shown on the screen, so I could match videos with timestamps. Since these long videos amounted to about 18 hours of film, I saved myself some 18 hours (minus development time) of manual labor.

Keeping big data in the cloud

One of the challenges of dealing with videos is that they’re beefy data files, and doing any development locally, on your personal computer, is slow and cumbersome. It’s best to keep all data handling and processing in the cloud. So, I started off by transferring all the clips my dad stored in Google Drive into a Cloud Storage bucket. To do this efficiently, keeping all data within Google’s network, I followed this tutorial, which uses a Colab notebook to do the transfer.

My goal was to upload all video files to Google Cloud, analyze them with the Video Intelligence API, and write the resulting metadata to a source I could later query and search from my app.

For this, I used a technique I use all the time to build machine learning pipelines: upload data to a Cloud Storage bucket, use a Cloud Function to kick off analysis, and write the results to a database (like Firestore). Here’s what that architecture looks like for this project:

If you’ve never used these tools before, Cloud Storage provides a place to store all kinds of files, like movies, images, text files, PDFs—really anything. Cloud Functions are a “serverless” way of running code in the cloud: Rather than use an entire virtual machine or container to run your code, you upload a single function or set of functions (in Python or Go or Node.js or Java) which runs in response to an event—an HTTP request, a Pub/Sub event, or when a file is uploaded to Cloud Storage. Here, I uploaded a video to a Cloud Storage bucket (“gs://input_videos”) which triggered a Cloud Function that called the Video Intelligence API to analyze the uploaded video.
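The post doesn’t reproduce the author’s code, but a minimal sketch of that Video Intelligence call in Python might look like the following; the bucket path and file name are made up, and the client surface assumes the google-cloud-videointelligence v2 library.

```python
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

features = [
    videointelligence.Feature.SPEECH_TRANSCRIPTION,
    videointelligence.Feature.LABEL_DETECTION,
    videointelligence.Feature.TEXT_DETECTION,
    videointelligence.Feature.SHOT_CHANGE_DETECTION,
]
speech_config = videointelligence.SpeechTranscriptionConfig(language_code="en-US")
context = videointelligence.VideoContext(speech_transcription_config=speech_config)

# One long-running request returns transcripts, labels, on-screen text, and shot boundaries.
operation = client.annotate_video(
    request={
        "input_uri": "gs://input_videos/example_home_movie.mp4",  # hypothetical clip
        "features": features,
        "video_context": context,
    }
)
result = operation.result(timeout=3600)
annotation = result.annotation_results[0]

for transcription in annotation.speech_transcriptions:
    for alternative in transcription.alternatives:
        print("Transcript:", alternative.transcript)
for label in annotation.segment_label_annotations:
    print("Label:", label.entity.description)
for text in annotation.text_annotations:
    print("On-screen text:", text.text)
for shot in annotation.shot_annotations:
    print("Shot:", shot.start_time_offset, "->", shot.end_time_offset)
```

In the pipeline described above, a call like this would live inside the Cloud Function and write the response out as JSON rather than print it.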
Because this analysis can take a while, it runs in the background and finishes by writing data to a JSON file in a second Cloud Storage bucket (“gs://video_json”). As soon as this JSON file is written to storage, a second Cloud Function is triggered, which parses the JSON data and writes it to a database—in this case, Firestore. If you want an even more in-depth review of this design and the code that goes with it, take a look at this post.

Firestore is a real-time, NoSQL database designed with app and web developers in mind. As soon as I wrote the video metadata to Firestore, I could access that data in my app quickly and easily.

Screenshot of the Firestore database, where we keep track of all analyzed videos.

Simple search with Algolia

With all this information extracted from my videos—transcriptions, screen text, object labels—I needed a good way to search through it all. I needed something that could take a search word or phrase, even if the user made a typo (i.e. “birthdy party”), and search through all my metadata to return the best matches. I considered using Elasticsearch, an open-source search and analytics engine that’s often used for tasks like this, but decided it looked a bit heavy-handed for my use case. I didn’t want to create a whole search cluster just to search through videos. Instead, I turned to Search API from a company called Algolia. It’s a neat tool that lets you upload JSON data and provides a slick interface to easily search through it all, handling things like typo correction and sorting. It was the perfect serverless search solution to complement the rest of my serverless app.

A screenshot of me searching through all the data I uploaded to Algolia.

Putting it all together

And that’s pretty much it! After analyzing all the videos and making them searchable, all I had to do was build a nice UI. I decided to use Flutter, but you could build a frontend using Angular or React, or even a mobile app. Here’s what mine looks like:

Finding lost memories

What I hoped more than anything for this project was that it would let my dad search for memories that he knew he’d once recorded but that were almost impossible to find. So when I gifted it to him a few days before Father’s Day, that’s exactly what I asked: Dad, is there a memory of us you want to find?

He remembered the time he surprised me with a Barbie bicycle for my fourth birthday. We searched “bicycle” and the clip appeared. I barely remembered that day and had never seen the video before, but from the looks of it, I was literally agape. “I love it!” I yelled as I pedaled around the living room. It might be the best birthday/Father’s Day we have on record.

Want to see for yourself? Take a look.
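For anyone who wants to rebuild the search layer described above, indexing and querying the extracted metadata in Algolia can be sketched roughly like this; it assumes the v3 algoliasearch Python client, and the app ID, API key, index name, and record are placeholders.

```python
from algoliasearch.search_client import SearchClient

client = SearchClient.create("YOUR_APP_ID", "YOUR_ADMIN_API_KEY")
index = client.init_index("family_videos")

# One record per clip, built from the Video Intelligence output stored in Firestore.
index.save_objects([
    {
        "objectID": "clip-2007-12-31",
        "transcript": "all right, this is one of Dale's first steps...",
        "labels": ["baby", "living room"],
        "onscreen_text": ["HAPPY BIRTHDAY"],
    }
])

# Typo tolerance means "birthdy" still matches "birthday".
results = index.search("birthdy party")
for hit in results["hits"]:
    print(hit["objectID"])
```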
Source: Google Cloud Platform

How Unity analyzes petabytes of data in BigQuery for reporting and ML initiatives

Editor’s note: We’re hearing today from Unity Technologies, which offers a development platform for gaming, architecture, film and other industries. Here, Director of Engineering and Data Sampsa Jaatinen shares valuable insights for modern technology decision makers, whatever industry they’re in.

Unity Technologies is the world’s leading platform for creating and operating real-time 3D (RT3D) content. We’ve built and operated services touching billions of endpoints a month, as well as external services benefiting financial operations, customer success, marketing and many other functions. All of these services and systems generate information that is essential for understanding and operating our company’s business and services. For complete visibility, and to unlock the full potential of our data, we needed to break down silos and consolidate numerous data sources in order to efficiently manage and serve this data.

Centralizing data services

Data platforms are essential to keeping a business running, and ensuring that we can continue serving our customers—no matter what disruptions or events are happening. Before migrating to Google Cloud, we used one solution where datasets were stored for machine learning, an enterprise data warehouse for enterprise data, and yet another solution for processing reports from streaming data. We saw an opportunity to reduce overhead and serve all our needs from the same source. We wanted to centralize data services so we could build one set of solutions with a focused team instead of having different teams and business units creating their own siloed environments. A centralized data service can build once and serve multiple use cases. It also makes it easy to understand and govern the environment for compliance and privacy.

Of course, centralization has its challenges. If the internal central service provider is the gatekeeper for numerous things, the team will eventually become a bottleneck, especially if the central team members’ direct involvement is needed to unlock other teams to move forward. To avoid this scenario, the centralized data services team assumes a strategy of building an environment where customer teams can operate more independently by employing self-service tooling. With easy-to-use capabilities, our data users would be able to manage their own data and development schedules independently, while maintaining high standards and good practices for data privacy and access. These cornerstones, together with the specific features and capabilities we wanted to provide, guided our decision to choose a foundational technology. We needed to build atop a solution that fully supports our mission of connecting the data to business and machine learning needs within Unity.

Why we chose BigQuery

For these reasons, we decided to migrate our entire infrastructure, over two years ago, from another cloud service into Google Cloud, and based our analytics on top of BigQuery. We focused on a few main areas for this decision: scalability, features to support our diverse inputs and use cases, cost effectiveness that best fits our needs, and strong security and privacy.

The scale of data that Unity processes is massive. With more than 3 billion downloads of apps per month, and 50% of all games (averaged across console, mobile, and PC) powered with Unity, we operate one of the largest ad networks in the world. We also support billions of game players around the world. Our systems ingest and process tens of billions of events every day from Unity services.
In addition, we operate with outside enterprise services like CRM systems needed for our operations, whose data we want to integrate, combine, and serve alongside our own immense streaming datasets. This means that our data platform has to process billions of events per day. Furthermore, it had to be able to ingest petabytes of data per month, and enable a variety of company stakeholders to use the platform and its analytics results to make critical business decisions.

The data we capture and store is used to serve insights to various internal teams. Product managers at Unity need to understand how their features and services are adopted, which also helps with development of future releases. Marketing uses the data to understand how markets are evolving and how to best engage with our existing and potential new customers. And decision makers from finance, business development, business operations, customer success, account representatives, and other teams need information about their respective domains to understand the present and recognize future gaming opportunities.

In addition, the solution we chose needed to support Unity’s strong security and privacy practices. We enforce strict limitations on Unity employees’ access to datasets—the anonymization and encryption of this data is an absolute requirement and was important in making this decision.

In addition, the data platform we chose had to support the use of machine learning that sits at the heart of many Unity services. Machine learning relies on a fast closed feedback loop of the data, where the services generate data and then read it back to adjust behavior toward a more optimal behavior—for example, providing a better user experience by offering more relevant recommendations on Unity’s learning material. We wanted a data platform that could easily handle these activities.

Migrating to BigQuery

The migration started as a regular lift and shift, but required some careful tweaking of table schemas and ETL jobs and queries. The migration took slightly over six months and was a very complex engineering project—primarily because we had to meet the requirement to conform to GDPR policies. Another key factor was transforming our fragmented ecosystem of databases and tools toward a single unified data platform.

Throughout this process, we learned some valuable lessons that we hope will be useful to other companies with extreme analytics requirements. Here are a few of the considerations to understand.

Migration considerations

BigQuery requires a fixed schema, which has pros and cons (and differs from other products). A fixed schema removes flexibility on the side of the applications that write events, and forces stricter discipline on developers. But on the positive side, we can use this to our advantage, providing safe downstream operations since erroneous incoming records won’t break the data. This required us to build a schema management system. This allows the teams within Unity who generate data and need to store and process it to create schemas, change the schemas, and reprocess data that did not reach the target table because of a schema mismatch. The security provided by schema enforcement, and the flexibility of self-serve schema management, are essential for us to roll these data ingestion capabilities out to our teams.
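The article doesn’t include Unity’s schema tooling, but as a rough illustration of what enforcing a fixed schema looks like, here is a minimal sketch using the BigQuery Python client; the project, dataset, table, and field names are invented for the example.

```python
from google.cloud import bigquery

client = bigquery.Client()

# A fixed schema: rows that don't match are rejected at load time instead of
# silently polluting downstream tables.
schema = [
    bigquery.SchemaField("event_name", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("event_time", "TIMESTAMP", mode="REQUIRED"),
    bigquery.SchemaField("user_id", "STRING", mode="NULLABLE"),
    bigquery.SchemaField("properties", "STRING", mode="NULLABLE"),
]

table = bigquery.Table("my-project.analytics.game_events", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(field="event_time")
client.create_table(table)  # raises a Conflict error if the table already exists
```

A self-service schema layer like the one described above would generate and evolve table definitions of this kind on each team’s behalf.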
Another consideration for us was data platform flexibility. On top of the ingested data, we aim to provide data aggregates for easy reporting and analysis, and an easy-to-use data processing toolset for anyone to create new aggregates, joins, and samples of the data. Both the aggregates and the event-level data are available for reporting, analysis, and machine learning targets of the data usage—all accessible in BigQuery in a flexible, scalable manner.

Something else to keep in mind with any complex analytics system is that it’s important to understand who the target users are. Some people in our company only need a simple dashboard, and BigQuery’s integration with products like Data Studio makes that easy. Sometimes these users require more sophisticated reporting and the ability to create complex dashboards, and the Looker option may make more sense.

Support for machine learning was important for us. Some machine learning use cases benefit from easy-to-develop loops, where data stored in BigQuery allows easy usage of AutoML and BigQuery ML. At the same time, other machine learning use cases may require highly customizable production solutions. For these situations, we’re developing Kubeflow-based solutions that also are capable of consuming data from BigQuery.

Next steps to modernize your analytics infrastructure

At Unity, we’ve been able to deploy a world-class analytics infrastructure, capable of ingesting petabytes of data from billions of events per day. We can now make that data available to key stakeholders in the organization within hours. After bringing together our previously siloed data solutions, we have seen improved internal processes, the possibility to operationalize reporting, and quicker turnaround times for many requests. Ingesting all the different data into one system, serving all the different use cases from a single source, and consolidating into BigQuery have resulted in a managed service that’s now highly scalable, flexible, and comes with minimal overhead.

Check out all that is happening in machine learning at Unity, and if you want to work on similar challenges with a stellar team of engineers and scientists, browse our open ML roles.
Source: Google Cloud Platform

Bringing Modern Transport Security to Google Cloud with TLS 1.3

We spend a lot of time thinking about how to improve internet protocols at Google—both for our Google Cloud customers and for our own properties. From our work developing SPDY into HTTP/2 and QUIC into HTTP/3, we know that improving the protocols we use across the Internet is critical to improving user experience.

Transport Layer Security, or TLS, is a family of internet protocols that Google has played an important role in developing. Formerly known as SSL, TLS is the main method of securing internet connections between servers and their clients. We first enabled TLS 1.3 in Chrome in October 2018, at the same time as Mozilla brought it to Firefox. Today, the majority of modern clients support TLS 1.3, including recent versions of Android, Apple’s iOS and Microsoft’s Edge browser, as well as BoringSSL, OpenSSL and libcurl. Support for TLS 1.3 is wide-ranging, and brings performance and security benefits to a large part of the Internet.

Given this, we recently rolled out TLS 1.3 as the default for all new and existing Cloud CDN and Global Load Balancing customers. TLS 1.3 is already used in more than half of TLS connections across Google Cloud, nearly on-par with Google at large.

To gain confidence that we could do this safely and without negatively impacting end users, we previously enabled TLS 1.3 across Search, Gmail, YouTube and numerous other Google services. We also monitored the feedback we received when we rolled out TLS 1.3 in Chrome. This prior experience showed that we could safely enable TLS 1.3 in Google Cloud by default, without requiring customers to update their configurations manually.

What is TLS 1.3, and what does it bring?

TLS 1.3 is the latest version of the TLS protocol and brings notable security improvements to you and your users, aligned with our goal of securing the Internet. Specifically, TLS 1.3 provides:

Modern ciphers and key-exchange algorithms, with forward secrecy as a baseline.
Removal of older, less-secure ciphers and key exchange methods, as well as an overall reduction in the complexity of the protocol.
Low handshake latency (one round-trip between client and server) for full handshakes, which directly contributes to a good end-user experience.

This combination of performance and security benefits is particularly notable: the perception is often that one must trade off one for the other, but modern designs can improve both. Notably, TLS 1.3 can have outsized benefits for users on:

Congested networks, which is particularly relevant during times of increased internet usage.
Higher-latency connections—especially cellular (mobile) devices—where the reduction in handshake round-trips is particularly meaningful.
Low-powered devices, thanks to the curated list of ciphers.

For example, Netflix also recently adopted TLS 1.3, and observed improvements in user experience around playback delay (network related) and rebuffers (often CPU related).

As an added benefit, customers who have to meet NIST requirements, including many U.S. government agencies and their contractors, can begin to address the requirement to support TLS 1.3 ahead of NIST’s Jan 1, 2024 deadline.

What’s next?

TLS 1.3 has quickly taken responsibility for securing large swaths of Google Cloud customers’ internet traffic, and we expect that proportion to grow as more clients gain support for it. We’re (already!) working on the next set of modern protocols to bring to our Google Cloud customers—including TCP BBRv2, as well as IETF QUIC and HTTP/3, which are close to being finalized.
We’re also planning to support TLS 1.3 0-RTT (though customers will need to update their applications to benefit from it) and certificate compression. Click here to learn more about how Google Cloud secures customer traffic using TLS across our edge network, and how to secure your global load balancer using SSL policies.
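If you want to confirm which protocol version your own load balancer or backend negotiates, a quick probe with Python’s standard ssl module is enough; the hostname below is a placeholder, and seeing TLS 1.3 requires a Python build linked against OpenSSL 1.1.1 or newer.

```python
import socket
import ssl

hostname = "www.example.com"  # replace with your load balancer's hostname

context = ssl.create_default_context()  # negotiates TLS 1.3 when both sides support it
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Negotiated protocol:", tls.version())  # e.g. "TLSv1.3"
        print("Cipher suite:", tls.cipher()[0])
```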
Source: Google Cloud Platform

Achieve higher performance and cost savings on Azure with virtual machine bursting

Selecting the right combination of virtual machines (VMs) and disks is extremely important, as the wrong mix can impact your application’s performance. One way to choose which VMs and disks to use is based on your disk performance pattern, but it’s not always easy. For example, a common scenario is unexpected or cyclical disk traffic where the peak disk performance is temporary and significantly higher than the baseline performance pattern. We frequently get asked by our customers, "should I provision my VM for baseline or peak performance?" Over-provisioning can lead to higher costs, while under-provisioning can result in poor application performance and customer dissatisfaction. Azure Disk Storage now makes this decision easier, and we’re pleased to announce VM bursting support for Azure virtual machines.

Get short-term, higher performance with no additional steps or costs

VM bursting, which is enabled by default, offers you the ability to achieve higher throughput for a short duration on your virtual machine instance with no additional steps or cost. Currently available on all Lsv2-series VMs in all supported regions, VM bursting is great for a wide range of scenarios like handling unforeseen spiky disk traffic smoothly, or processing batched jobs with speed. With VM bursting, you can see up to 8X improvement in throughput when bursting. Additionally, you can combine both VM and disk bursting (generally available in April) to get higher performance on your VM or disks without overprovisioning. If you have workloads running on-premises with unpredictable or cyclical disk traffic, you can migrate to Azure and take advantage of our VM bursting support to improve your application performance.

Bursting flow

VM bursting is governed by a credit-based system. Your VM starts with a full bucket of credits, and these credits allow you to burst for 30 minutes at the maximum burst rate. Bursting credits accumulate while your VM instance runs below its performance limits and are consumed while it runs above them. For detailed examples of how bursting works, check out the disk bursting documentation.
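As a rough illustration of how a credit model like this behaves (a simplified sketch, not Azure’s actual accounting; the accrual rate below is an assumption), consider a toy simulation using the Standard_L8s_v2 numbers from the table further down:

```python
# Toy credit-bucket model: credits build up while running below the baseline limit
# and are spent while bursting above it. Numbers mirror Standard_L8s_v2
# (160 MB/s baseline, 1280 MB/s burst); the accrual formula is an assumption.
BASELINE_MBPS = 160
BURST_MBPS = 1280
MAX_CREDITS_MB = (BURST_MBPS - BASELINE_MBPS) * 30 * 60  # 30 minutes of full-rate bursting

def simulate(demand_mbps, step_s=60):
    """Yield (demand, achieved throughput, remaining credits) for each time step."""
    credits = MAX_CREDITS_MB  # the VM starts with a full bucket
    for demand in demand_mbps:
        if demand <= BASELINE_MBPS:
            credits = min(MAX_CREDITS_MB, credits + (BASELINE_MBPS - demand) * step_s)
            yield demand, demand, credits
        else:
            target = min(demand, BURST_MBPS)
            needed = (target - BASELINE_MBPS) * step_s
            if credits >= needed:
                credits -= needed
                yield demand, target, credits
            else:
                yield demand, BASELINE_MBPS, credits  # out of credits: back to baseline

# One quiet hour followed by a 45-minute spike: the spike runs at full demand
# until the credit bucket empties, then throughput falls back to the baseline.
workload = [40] * 60 + [1000] * 45
throttled = sum(1 for _, achieved, _ in simulate(workload) if achieved == BASELINE_MBPS)
print(f"Minutes throttled to baseline during the spike: {throttled}")
```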

Benefits of virtual machine bursting

Cost savings: If your daily peak performance time is less than the burst duration, you can use bursting VMs or disks as a cost-effective solution. You can build your VM and disk combination so the bursting limits match the required peak performance and the baseline limits match the average performance.
Preparedness for traffic spikes: Web servers and their applications can experience traffic surges at any time. If your web server is backed by VMs or disks using bursting, the servers are better equipped to handle traffic spikes.
Handling batch jobs: Some application workloads are cyclical in nature, requiring only baseline performance most of the time and higher performance for short periods. An example would be an accounting program that processes daily transactions requiring a small amount of disk traffic, but runs month-end reconciliation reports that need a much higher amount of disk traffic.

Get started with disk bursting

Create new virtual machines on burst-supported VM sizes using the Azure portal, PowerShell, or command-line interface (CLI) now. Bursting comes enabled by default on VMs that support it, so you don’t need to do anything but deploy the instance to get the benefits. Any of your existing VMs that support bursting will have the capability enabled automatically. You can find the specifications of burst-eligible virtual machines in the table below. The bursting feature is available in all regions where Lsv2-series VMs are available.

Size             | Uncached data disk throughput (MB/s) | Max burst uncached data disk throughput (MB/s)
Standard_L8s_v2  | 160                                  | 1280
Standard_L16s_v2 | 320                                  | 1280
Standard_L32s_v2 | 640                                  | 1280
Standard_L48s_v2 | 960                                  | 2000
Standard_L64s_v2 | 1280                                 | 2000
Standard_L80s_v2 | 1400                                 | 2000

Next steps

Support for more VM types as well as IOPS bursting on VMs will be available soon.

If you’d like to learn more about how the bursting feature works for both our virtual machines and disks, check out the disk bursting documentation.

Please email us at AzureDisks@microsoft.com to share your feedback on our bursting feature, or leave a post in the Azure Storage feedback forum.
Source: Azure

Cost optimization strategies for cloud-native application development

Today, we’ll explore some strategies that you can leverage on Azure to optimize your cloud-native application development process using Azure Kubernetes Service (AKS) and managed databases, such as Azure Cosmos DB and Azure Database for PostgreSQL.

Optimize compute resources with Azure Kubernetes Service

AKS makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. As a managed Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you.

When you’re using AKS to deploy your container workloads, there are a few strategies to save costs and optimize the way you run development and testing environments.

Create multiple user node pools and enable scale to zero

In AKS, nodes of the same configuration are grouped together into node pools. To support applications that have different compute or storage demands, you can create additional user node pools. User node pools serve the primary purpose of hosting your application pods. For example, you can use these additional user node pools to provide GPUs for compute-intensive applications or access to high-performance SSD storage.

When you have multiple node pools, which run on virtual machine scale sets, you can configure the cluster autoscaler to set the minimum number of nodes, and you can also manually scale down the node pool size to zero when it is not needed, for example, outside of working hours.

For more information, learn how to manage node pools in AKS.
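If you script this instead of using the portal or CLI, a rough sketch with the azure-mgmt-containerservice Python SDK might look like the following; the subscription, resource group, cluster, pool name, and VM size are placeholders, and the model fields assume a recent SDK version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import AgentPool

credential = DefaultAzureCredential()
client = ContainerServiceClient(credential, subscription_id="<subscription-id>")

# Manually scale a user node pool down to zero outside working hours.
client.agent_pools.begin_create_or_update(
    resource_group_name="my-rg",
    resource_name="my-aks-cluster",
    agent_pool_name="userpool1",
    parameters=AgentPool(mode="User", vm_size="Standard_DS3_v2", count=0),
).result()
```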

Spot node pools with cluster autoscaler

A spot node pool in AKS is a node pool backed by a virtual machine scale set running spot virtual machines. Using spot VMs allows you to take advantage of unused capacity in Azure at significant cost savings. Spot instances are great for workloads that can handle interruptions like batch processing jobs and developer and test environments.

When you create a spot node pool, you can define the maximum price you want to pay per hour and enable the cluster autoscaler, which is recommended for spot node pools. Based on the workloads running in your cluster, the cluster autoscaler scales the number of nodes in the node pool up and down. For spot node pools, the cluster autoscaler will scale up the number of nodes after an eviction if additional nodes are still needed.

Follow the documentation for more details and guidance on how to add a spot node pool to an AKS cluster.
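Scripted, adding such a pool might be sketched as follows; the same caveats as the earlier node pool example apply, the names and price cap are placeholders, and the field names assume a recent azure-mgmt-containerservice release.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import AgentPool

credential = DefaultAzureCredential()
client = ContainerServiceClient(credential, subscription_id="<subscription-id>")

# A spot-backed user node pool with a price cap and the cluster autoscaler enabled.
# AKS taints spot nodes automatically, so workloads must tolerate that taint.
spot_pool = AgentPool(
    mode="User",
    vm_size="Standard_DS3_v2",
    scale_set_priority="Spot",
    scale_set_eviction_policy="Delete",
    spot_max_price=0.05,        # max USD per hour; -1 means "cap at the pay-as-you-go price"
    enable_auto_scaling=True,
    count=1,
    min_count=1,
    max_count=10,
)
client.agent_pools.begin_create_or_update(
    resource_group_name="my-rg",
    resource_name="my-aks-cluster",
    agent_pool_name="spotpool",
    parameters=spot_pool,
).result()
```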

Enforce Kubernetes resource quotas using Azure Policy

Apply Kubernetes resource quotas at the namespace level and monitor resource usage to adjust quotas as needed. This provides a way to reserve and limit resources across a development team or project. These quotas are defined on a namespace and can be used to set quotas for compute resources, such as CPU and memory, GPUs, or storage resources. Quotas for storage resources include the total number of volumes or amount of disk space for a given storage class and object count, such as a maximum number of secrets, services, or jobs that can be created.
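A namespace quota of that kind can be created with the official Kubernetes Python client; this is a generic sketch (the namespace and limits are illustrative), separate from the Azure Policy enforcement described below.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context (e.g. the AKS cluster)
core_v1 = client.CoreV1Api()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="dev-team-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "4",
            "requests.memory": "8Gi",
            "limits.cpu": "8",
            "limits.memory": "16Gi",
            "pods": "20",
        }
    ),
)
core_v1.create_namespaced_resource_quota(namespace="dev-team", body=quota)
```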

Azure Policy integrates with AKS through built-in policies to apply at-scale enforcements and safeguards on your cluster in a centralized, consistent manner. When you enable the Azure Policy add-on, it checks with Azure Policy for assignments to the AKS cluster, downloads and caches the policy details, runs a full scan, and enforces the policies.

Follow the documentation to enable the Azure Policy add-on on your cluster and apply the Ensure CPU and memory resource limits policy which ensures CPU and memory resource limits are defined on containers in an Azure Kubernetes Service cluster.

Optimize the data tier with Azure Cosmos DB

Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. A fully managed service, Azure Cosmos DB offers guaranteed speed and performance with service-level agreements (SLAs) for single-digit millisecond latency and 99.999 percent availability, along with instant and elastic scalability worldwide. With the click of a button, Azure Cosmos DB enables your data to be replicated across all Azure regions worldwide and lets you use a variety of open-source APIs, including MongoDB, Cassandra, and Gremlin.

When you’re using Azure Cosmos DB as part of your development and testing environment, there are a few ways you can save some costs. With Azure Cosmos DB, you pay for provisioned throughput (Request Units, RUs) and the storage that you consume (GBs).

Use the Azure Cosmos DB free tier

Azure Cosmos DB free tier makes it easy to get started, develop, and test your applications, or even run small production workloads for free. When a free tier is enabled on an account, you'll get the first 400 RUs per second (RU/s) throughput and 5 GB of storage. You can also create a shared throughput database with 25 containers that share 400 RU/s at the database level, all covered by free tier (limit 5 shared throughput databases in a free tier account). Free tier lasts indefinitely for the lifetime of the account and comes with all the benefits and features of a regular Azure Cosmos DB account, including unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more.

Try Azure Cosmos DB for free.

Autoscale provisioned throughput with Azure Cosmos DB

Provisioned throughput can automatically scale up or down in response to application patterns.  Once a throughput maximum is set, Azure Cosmos DB containers and databases will automatically and instantly scale provisioned throughput based on application needs.

Autoscale removes the requirement for capacity planning and management while maintaining SLAs. For that reason, it is ideally suited for scenarios of highly variable and unpredictable workloads with peaks in activity. It is also suitable for when you’re deploying a new application and you’re unsure about how much provisioned throughput you need. For development and test databases, Azure Cosmos DB containers will scale down to a pre-set minimum (starting at 400 RU/s or 10 percent of maximum) when not in use. Autoscale can also be paired with the free tier.

Follow the documentation for more details on the scenarios and how to use Azure Cosmos DB autoscale.
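As a rough sketch of provisioning an autoscale container with the azure-cosmos Python SDK (the endpoint, key, and names are placeholders, and ThroughputProperties assumes an SDK release recent enough to expose autoscale settings):

```python
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
database = client.create_database_if_not_exists("dev-db")

# Autoscale between 10 percent of the maximum (400 RU/s here) and the 4,000 RU/s ceiling.
database.create_container_if_not_exists(
    id="telemetry",
    partition_key=PartitionKey(path="/deviceId"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=4000),
)
```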

Share throughput at the database level

In a shared throughput database, all containers inside the database share the provisioned throughput (RU/s) of the database. For example, if you provision a database with 400 RU/s and have four containers, all four containers will share the 400 RU/s. In a development or testing environment, where each container may be accessed less frequently and thus require lower than the minimum of 400 RU/s, putting containers in a shared throughput database can help optimize cost.

For example, suppose your development or test account has four containers. If you create four containers with dedicated throughput (minimum of 400 RU/s), your total RU/s will be 1,600 RU/s. In contrast, if you create a shared throughput database (minimum 400 RU/s) and put your containers there, your total RU/s will be just 400 RU/s. In general, shared throughput databases are great for scenarios where you don't need guaranteed throughput on any individual container.

Follow the documentation to create a shared throughput database that can be used for development and testing environments.
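A sketch of that setup with the azure-cosmos Python SDK might look like this; the endpoint, key, and container names are placeholders.

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")

# 400 RU/s provisioned once at the database level and shared by every container inside it.
database = client.create_database_if_not_exists("dev-db", offer_throughput=400)

for name in ["users", "games", "sessions", "telemetry"]:
    # No per-container throughput, so all four containers share the database's 400 RU/s.
    database.create_container_if_not_exists(id=name, partition_key=PartitionKey(path="/id"))
```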

Optimize the data tier with Azure Database for PostgreSQL

Azure Database for PostgreSQL is a fully managed service providing enterprise-grade features for community edition PostgreSQL. With the continued growth of open-source technologies, especially in times of crisis, PostgreSQL has seen increased adoption by users who want consistency, performance, security, and durability for their applications while staying open source. With developer-focused experiences and new features optimized for cost, Azure Database for PostgreSQL lets developers focus on their application while database management is taken care of by the service.

Reserved capacity pricing—Now on Azure Database for PostgreSQL

Manage the cost of running your fully-managed PostgreSQL database on Azure through reserved capacity now made available on Azure Database for PostgreSQL. Save up to 60 percent compared to regular pay-as-you-go payment options available today.

Check out pricing on Azure Database for PostgreSQL to learn more.

High performance scale-out on PostgreSQL

Leverage the power of high-performance horizontal scale-out of your single-node PostgreSQL database through Hyperscale. Save time by doing transactions and analytics in one database while avoiding the high costs and efforts of manual sharding.

Get started with Hyperscale on Azure Database for PostgreSQL today.

Stay compatible with open source PostgreSQL

By leveraging Azure Database for PostgreSQL, you can continue enjoying the many innovations, versions, and tools of community edition PostgreSQL without major re-architecture of your application. Azure Database for PostgreSQL is extension-friendly so you can continue achieving your best scenarios on PostgreSQL while ensuring top-quality, enterprise-grade features like Intelligent Performance, Query Performance Insights, and Advanced Threat Protection are constantly at your fingertips.
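Because the service runs community PostgreSQL, standard open-source drivers and tools connect unchanged. For example, a connection with psycopg2 looks like any other PostgreSQL connection; the server name and credentials below are placeholders.

```python
import psycopg2

conn = psycopg2.connect(
    host="mydemoserver.postgres.database.azure.com",
    dbname="appdb",
    user="myadmin@mydemoserver",  # user@servername format used by Azure Database for PostgreSQL single servers
    password="<password>",
    sslmode="require",            # TLS is enforced by default
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()
```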

Check out the product documentation on Azure Database for PostgreSQL to learn more.
Source: Azure

Making your data residency choices easier with Azure

Azure is now available in over 140 countries and offers customers more than 60 datacenter regions worldwide (and growing) from which to choose. These Azure regions provide customers with the benefits of data residency and latency optimization and may enable regional compliance.

We understand that with Azure’s over 200 services, advances in architecture, and data protection promises, there are a lot of options available to customers. To help you make the right decisions, we have summarized the answers to your questions on Azure regions, data residency, data access, and retention. Download the white paper, Enabling Data Residency and Data Protection in Azure Regions to learn more.

When customers move workloads to Azure, they face a number of choices, such as datacenter regions, high availability (HA) and disaster recovery (DR) architecture, and encryption models. To make the right decisions, customers need to consider both technical and regulatory requirements. To optimize latency, customers should determine the appropriate region based on the location of their users or customer base.

For regulatory compliance, data residency considerations may support or even mandate the physical locations where data can be stored, and how and when it can be transferred internationally. These regulations can differ significantly depending on jurisdiction. Azure’s regions and service features provide customers with different avenues so they can select and limit data residency and data access. This enables customers in regulated industries to successfully run mission-critical workloads in the cloud and leverage all the advantages of the Microsoft hyperscale cloud.

The purpose of the white paper is to give customer-specific guidance in navigating these decisions, including:

Understanding Azure’s regional infrastructure, including high availability, availability zones, disaster recovery, latency, and service availability considerations, and how to make optimal architecture decisions.
Data residency assurances and how customers can control data residency. Most Azure services are deployed regionally and enable the customer to specify the region into which the service will be deployed and control where the customer data will be stored. Certain services and regions have some exceptions and limitations to these rules, which are outlined fully in the white paper.
Data access to telemetry data, including elevated access for support data, and how customers can manage data access. The collection and use of telemetry and support data has raised questions from some of our customers, and the white paper provides detailed answers.
How Microsoft protects customer data from unauthorized access and how Microsoft handles government requests, including implications of the CLOUD Act. Customers have asked us for specific details about when Microsoft engineers may access data and how we respond to government requests for data. The white paper provides clarity.
Tools customers can use to protect from unauthorized and authorized data access. Customers have a wealth of tools available to restrict, protect, and encrypt data at rest, in transit, and in some cases, in use.
Data retention and deletion. The white paper details Microsoft’s policies and practices for the retention and disposal of customer data.

We appreciate all of the feedback and questions we have received from customers regarding data residency and data protection in recent months, and we will continue to strive to provide you the most complete and current answers we can, so expect this white paper to be updated in the future.

Download Enabling Data Residency and Data Protection in Azure Regions, and visit Azure Global Infrastructure and Microsoft Trust Center to learn more.
Source: Azure

Minimize disruption with cost-effective backup and disaster recovery solutions on Azure

A top of mind concern among our customers is keeping their applications and data workloads running and recoverable in the case of unforeseen events or disasters. For example, COVID-19 has presented daunting challenges for IT, which are only compounded by growing threats from ransomware or setbacks related to technical or operational failure. These considerations further highlight the importance of a plan to ensure business continuity. IT admins are looking to cloud-based backup and disaster recovery solutions as part of their business continuity strategy because of the ability to quickly onboard, scale based on storage needs, remotely manage, and save costs by avoiding additional on-premises investments.

Azure provides native cloud solutions for customers to implement simple, secure and cost-effective business continuity and disaster recovery (BCDR) strategies for their applications and data whether they are on-premises or on Azure. Once enabled, customers benefit from minimal maintenance and monitoring overhead, remote management capabilities, enhanced security, and the ability to immutably recover services in a timely and orchestrated manner. Customers can also use their preferred backup and disaster recovery providers from a range of our partner solutions to extend their on-premises BCDR solutions to Azure.

All of this is possible without the need to learn new tools for configuration or management. Simply create an Azure Storage account and you have petabytes of available offsite storage to add to your BCDR solution within a few minutes.

Reduce complexity, cost, and enhance security with Azure solutions

Azure Backup is a service designed to back up and restore data, and Azure Site Recovery is designed to perform seamless application disaster recovery. Together, these services provide a more complete backup and recovery solution that can be implemented and scaled with just a few clicks.

By not having to build on-premises solutions or maintain a costly secondary datacenter, users can reduce the cost of deploying, monitoring, and patching disaster recovery infrastructure. Azure Backup uses flexible policies to automatically allocate and manage storage to optimize cost and meet business objectives. Together, Azure Backup and Azure Site Recovery use the underlying power of Azure’s highly available storage to store customer data. These native capabilities are available through a pay-as-you-use model that only bills for storage consumed.

Azure’s centralized management interface for Azure Backup and Azure Site Recovery makes it simple and easy to define policies to natively protect a wide range of enterprise workloads including Azure Virtual Machines, SQL and SAP databases, Azure File shares and on-premises Windows servers or Linux VMs. Using Azure Site Recovery, users can set up and manage replication, failover, and failback from the Azure portal. Customers can also take advantage of the Windows Admin Center Azure Hybrid Services Hub to protect on-premises virtual machines (VMs) and enable Azure Backup and Site Recovery right from the Windows Admin Center console.

We are committed to providing the best-in-class security capabilities to protect customer resources on Azure. Azure Backup protects backups of on-premises and cloud-resources from ransomware attacks by isolating backup data from source data, combined with multi-factor authentication (MFA) and the ability to recover maliciously or accidentally deleted backup data. With Azure Site Recovery you can fail over VMs to the cloud or between cloud datacenters and secure them with network security groups.

Peace of mind is paramount when it comes to recovering from the unexpected. In the case of a disruption, accidental deletion, or corruption of data, customers can rest assured that they will be able to recover their business services and data in a timely and orchestrated manner. These native capabilities support low recovery-point objective (RPO) and recovery-time objective (RTO) targets for any critical workload. Azure is here to help customers pivot towards a strengthened BCDR strategy.

Extend solutions to Azure with our trusted partner ecosystem

We understand that organizations may be using an on-premises BCDR solution from another technology provider. A number of popular BCDR solutions are integrated with Azure enabling customers to extend their existing solutions into the cloud.

Some examples include:

Commvault supports all tiers of Azure Storage as an offsite backup and data management target and enables backup and recovery from on-premises to Azure and for Azure VMs. Customers can quickly and easily restore applications, workloads and data to Azure as a cost-effective disaster recovery (DR) site and use Commvault Live Sync to achieve low recovery point objectives (RPOs).
Rubrik offers built-for-Azure features like Smart Tiering for easy backup to Azure, cost-effective data storage in the tier of choice, and quick recovery of data and apps to Azure in the event of a disaster or for dev-test scenarios. Rubrik enables backup and recovery from on-premises to Azure and for Azure VMs.
Veeam Backup and Replication integrates with Azure to easily protect and recover on-premises VMs, physical servers, and endpoints into Azure. Veeam Backup for Microsoft Azure leverages native Azure functionality and a built-in cost-calculator to provide an integrated, simple and cost-effective backup for Azure VMs.
Veritas’ NetBackup and Backup Exec offer backup, disaster recovery and migration to Azure. NetBackup CloudCatalyst and CloudPoint enable backup and recovery of on-premises assets to Azure, and protection of Azure VMs respectively. NetBackup Resiliency enables integrated disaster recovery and migration experiences to Azure, between Azure regions and Azure Stack.

Discover the available partner solutions in the Azure Marketplace.

Learn more

Strengthen your BCDR strategy today by taking these next steps:

Sign up for the webinar, Minimize Business Disruption with Azure BCDR Solutions.
Review options to extend your current BCDR solution to Azure with our trusted partners.
Get started with Azure Backup and Azure Site Recovery today.

Source: Azure