BigQuery helps Soundtrack Your Brand hit the high notes without breaking a sweat

Editor’s note: Soundtrack Your Brand is an award-winning streaming service with the world’s largest licensed music catalog built just for businesses, backed by Spotify. Today, we hear how BigQuery has been a foundational component in helping them transform big data into music.

Soundtrack Your Brand is a music company at its heart, but big data is our soul. Playing the right music at the right time has a huge influence on the emotions a brand inspires, the overall customer experience, and sales. We have a catalog of over 58 million songs and their associated metadata from our music providers and a vast amount of user data that helps us deliver personalized recommendations, curate playlists and stations, and even generate listening schedules. As an example, through our Schedules feature our customers can set up what to play during the week. Taking that one step further, we provide suggestions on what to use in different time slots and recommend entire schedules.

Using BigQuery, we built a data lake to empower our employees to access all this content and metadata in a structured way. Ensuring that our data is easily discoverable and accessible allows us to build any type of analytics or machine learning (ML) use case and run queries reliably and consistently across the complete data set. Today, our users are benefiting from these advanced analytics through the personalized recommendations we offer across our core features: Home, Search, Playlists, Stations, and Schedules.

Fine-tuning developer productivity

The biggest business value that comes from BigQuery is how much it speeds up our development capabilities and allows us to ship features faster. In the past 3 years, we have built more than 150 pipelines and more than 30 new APIs within our ML and data teams that total about 10 people. That is an impressive rate of a new pipeline every week and a new API every month.
With everything in BigQuery, it’s easy to simply write SQL and have it be orchestrated within a CI/CD toolchain to automate our data processing pipelines. An in-house tool built as a GitHub template, in many ways very similar to Dataform, helps us build very complex ETL processes in minutes, significantly reducing the time spent on data wrangling.

BigQuery acts as a cornerstone for our entire data ecosystem, a place to anchor all our data and be our single source of truth. This single source of truth has expanded the limits of what we can do with our data. Most of our pipelines start from a data lake, or end at a data lake, increasing re-usability of data and collaboration. For example, one of our interns built an entire churn prediction pipeline in a couple of days on top of existing tables that are produced daily. Nearly a year later, this pipeline is still running without failure, largely due to its simplicity. The pipeline is BigQuery queries chained together into a BigQuery ML model running on a schedule with Kubeflow Pipelines.

Once we made BigQuery the anchor for our data operations, we discovered we could apply it to use cases that you might not expect, such as maintaining our configurations or supporting our content management system. For instance, we created a Google Sheet where our music experts are able to correct genre classification mistakes for songs by simply adding a row. Instead of hours or days to create a bespoke tool, we were able to set everything up in a few minutes. BigQuery’s ability to consume spreadsheets allows business users who play key roles in improving our recommendations engine and curating our music, such as our content managers and DJs, to contribute to the data pipeline.

Another example is our use of BigQuery as an index for some of our large Cloud Storage buckets.
By using Cloud Functions to subscribe to read/write events for a bucket, and writing those events to partitioned tables, our pipelines can quickly and naturally search and access files, such as downloading and processing the audio of new track releases. We also make use of log events when a table is added to a dataset to trigger pipelines that process data on demand, such as JSON/CSV files from some of our data providers that are newly imported into BigQuery. Being the place for all file integration and processing, BigQuery allows new data to be quickly available to our entire data ecosystem in a timely and cost-effective manner while allowing for data retention, ETL, ACLs, and easy introspection.

BigQuery makes everything simple. We can make a quick partitioned table and run queries that use thousands of CPU hours to sift through a massive volume of data in seconds — and only pay a few dollars for the service. The result? Very quick, cost-effective ETL pipelines. In addition, centralizing all of our data in BigQuery makes it possible to easily establish connections between pipelines, providing developers with a clear understanding of what specific type of data a pipeline will produce. If a developer wants a different outcome, she can copy the GitHub template and change some settings to create a new, independent pipeline.

Another benefit is that developers don’t have to coordinate schedules or sync with each other’s pipelines: they just need to know that a table that is updated daily exists and can be relied on as a data source for an application. Each developer can progress their work independently without worrying about interfering with other developers’ use of the platform.

Making iteration our forte

Out of the box, BigQuery met and exceeded our performance expectations, but ML performance was the area that really took us by surprise.
This performance boost ultimately led to us improving our artist clustering workload from more than 24 hours on a job running 100 CPU workers to 10 minutes on a BigQuery pipeline running inference queries in a loop until convergence. This more than 140x performance improvement also came at 3% of the cost.

Currently we have more than 100 neural network ML models being trained and run regularly in batch in BQML. This setup has become our favorite method for both fast prototyping and creating production-ready models. Not only is it fast and easy to hypertune in BQML, but our benchmarks show comparable performance metrics to using our own TensorFlow code. We now use TensorFlow sparingly.

Differences in input data can have an even greater impact on the experience of the end user than individual tweaks to the models. BigQuery’s performance makes it easy to iterate with the domain experts who help shape our recommendations engine or who are concerned about churn, as we are able to show them the outcome on our recommendations from changes to input data in real time. One of our favorite things to do is to build a Data Studio report that has the ML.PREDICT query as part of its data source query. This report shows examples of good/bad predictions along with bias/variance summaries and a series of drop-downs, thresholds, and toggles to control the input features and the output threshold. We give that report to our team of domain experts to help manually tune the models, putting the model tuning right in the hands of the domain experts. Having humans in the loop has become trivial for our team. In addition to fast iteration, the BigQuery ML approach is also very low maintenance.
You don’t need to write a lot of Python or Scala code or maintain and update multiple frameworks—everything can be written as SQL queries run against the data store.

Helping brands to beat the band—and the competition

BigQuery has allowed us to establish a single source of truth for our company that our developers and domain experts can build on to create new and innovative applications that help our customers find the sound that fits their brand. Instead of cobbling together data from arbitrary sources, our developers now always start with a data set from BigQuery and build forward. This guarantees the stability of our data pipeline and makes it possible to build outward into new applications with confidence. Moreover, the performance of BigQuery means domain experts can interact with the analytics and applications that developers create more easily and see the results of their recommended improvements to ML models or data inputs quickly. This rapid iteration drives better business results, keeps our developers and domain experts aligned, and ensures Soundtrack Your Brand keeps delivering sound that stands out from the crowd.
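The artist-clustering approach described earlier, running inference queries in a loop until convergence, follows a classic pattern. Below is a minimal, self-contained Python sketch with made-up one-dimensional data; it is an illustration of the reassign-recompute-repeat loop, not Soundtrack Your Brand’s actual pipeline, which runs these steps as BigQuery queries:

```python
# Toy sketch of the "run inference in a loop until convergence" pattern:
# reassign points to the nearest centroid, recompute centroids, and stop
# once assignments no longer change between iterations.

def nearest(point, centroids):
    # Index of the centroid closest to this 1-D point.
    return min(range(len(centroids)), key=lambda i: abs(point - centroids[i]))

def cluster_until_convergence(points, centroids, max_iters=100):
    assignments = None
    for _ in range(max_iters):
        new_assignments = [nearest(p, centroids) for p in points]
        if new_assignments == assignments:   # converged: nothing moved
            break
        assignments = new_assignments
        # Recompute each centroid as the mean of its assigned points.
        for i in range(len(centroids)):
            members = [p for p, a in zip(points, assignments) if a == i]
            if members:
                centroids[i] = sum(members) / len(members)
    return assignments, centroids

# Two obvious clusters around 1.0 and 8.0, with deliberately bad seeds.
assignments, centroids = cluster_until_convergence(
    [1.0, 1.2, 0.8, 8.0, 8.2, 7.8], centroids=[0.0, 10.0])
```

In a SQL-based version, each pass of the loop would roughly correspond to one inference query (assignment) followed by one aggregation query (centroid update), with convergence reached when the assignment table stops changing.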
Source: Google Cloud Platform

What can you build with the new Google Cloud developer subscription?

To help you grow and build faster – and take advantage of the 123 product announcements from Next ‘22 – last month we launched the Google Cloud Skills Boost annual subscription with new Innovators Plus benefits. We’re already hearing rave reviews from subscribers from England to Indonesia, and want to share what others are learning and doing to help inspire your next wave of Google Cloud learning and creativity.

First, here’s a summary of what the Google Cloud Skills Boost annual subscription¹ with Innovators Plus benefits includes:

Access to 700+ hands-on labs, skill badges, and courses
$500 Google Cloud credits
A Google Cloud certification exam voucher
Bonus $500 Google Cloud credits after the first certification earned each year
Live learning events led by Google Cloud experts
Quarterly technical briefings hosted by Google Cloud executives

Celebrating learning achievements

Subscribers get access to everything needed to prepare for a Google Cloud certification exam; these certifications are among the top-paying IT certifications in 2022². Subscribers also receive a certification exam voucher to redeem when booking the exam.

Jochen Kirstätter, a Google Developer Expert and Innovator Champion, is using the subscription to prepare for his next Google Cloud Professional certification exam, and has found the labs and courses on Google Cloud Skills Boost have helped him feel ready to go get #GoogleCloudCertified: “‘The only frontiers are in your mind’ – with the benefits of #InnovatorsPlus I can explore more services and practice real-life scenarios intensively for another Google Cloud Professional certification.”

Martin Coombes, a web developer from PageHub Design, is a new subscriber and has already become certified as a Cloud Digital Leader. That means he’s been able to unlock the bonus $500 of Google Cloud credit benefit to use on his next project. “For me, purchasing the annual subscription was a no-brainer.
The #InnovatorsPlus benefits more than pay back the investment and I’ve managed to get my first Google Cloud certification within a week using the amazing Google Cloud Skills Boost learning resources. I’m looking forward to further progressing my knowledge of Google Cloud products.”

Experimenting and building with $500 of Google Cloud credits

We know how important it is to learn by doing. And isn’t hands-on more fun? Another great benefit of the annual subscription is $500 of Google Cloud credits every year you are a subscriber. And even better, once you complete a Google Cloud certification, you will unlock a bonus $500 of credits to help build your next project, just like Martin and Jeff did.

Rendy Junior, Head of Data at Ruangguru and a Google Cloud Innovator Champion, has already been able to apply the credits to an interesting data analysis project he’s working on. “I used the Google Cloud credits to explore new features and data technology in Dataplex. I tried features such as governance federation and data governance whilst data is located in multiple places, even in different clouds. I also tried Dataplex data cataloging; I ran a DLP (Data Loss Prevention) inspection and fed the tag where data is sensitive into the Dataplex catalog. The credits enable me to do real-world hands-on testing which is definitely helpful towards preparing for certification too.”

Jeff Zemerick recently discovered the subscription and has been able to achieve his Professional Cloud Database certification, using the voucher and Google Cloud credits to prepare. “I was preparing for the Google Cloud Certified Professional Cloud Database exam and the exam voucher was almost worth it by itself. I used some of the $500 cloud credits to prepare for the exam by learning about some of the Google Cloud services where I felt I might need more hands-on experience.
I will be using the rest of the credits and the additional $500 I received from passing the exam to help further the development of our software to identify and redact sensitive information in the Google Cloud environment. I’m looking forward to using the materials available in Google Cloud Skills Boost to continue growing my Google Cloud skills!”

Grow your cloud skills with live learning events

Subscribers gain access to live learning events, where a Google Cloud trainer teaches popular topics in a virtual classroom environment. Live learning events cover topics like BigQuery, Kubernetes, Cloud Run, Cloud Storage, networking, and security. We’ve set these up to go deep: mini live-learning courses consist of two highly efficient hours of interactive instruction, and gamified live learning events are three hours of challenges and fun. We’ve already had over 400 annual subscribers reserve a spot for upcoming live learning events. Seats are filling up fast for the November and December events, so claim yours before it’s too late.

Shape the future of Google Cloud products through the quarterly technical briefings

As a subscriber, you are invited to join quarterly technical briefings, giving you insight into the latest product developments and new features, with the opportunity to engage and shape future product development for Google Cloud. Coming up this quarter, get face time with Matt Thompson, Google Cloud’s Director of Developer Adoption, who will demonstrate some of the best replicable uses of Google Cloud he’s seen from leading developers.

Start your subscription today

Take charge of your cloud career today by visiting cloudskillsboost.google to get started with your annual subscription. Make sure to activate your Innovators Plus badge once you do and enjoy your new benefits.

1. Subject to eligibility limitations.
2. Based on responses from the Global Knowledge 2022 IT Skills and Salary Survey.
Source: Google Cloud Platform

NBA and Microsoft team up to transform fan experiences with cloud application modernization

There’s nothing quite like watching a basketball game and cheering on your favorite team as they battle it out for points before the buzzer sounds. From the players and employees to the technology, all need to work in lockstep to deliver a truly immersive experience.

As fans, we expect personalized experiences that bring the virtual world and the real world together on and off the court. This means brand new viewing experiences and virtual reality, real-time highlights of our favorite basketball games, and seamless ways to connect with other fans (and rivals!) when we want, how we want.

Having the right technology partner and cloud-based app transformation strategy is necessary to help organizations like the National Basketball Association (NBA) continue to deliver such unforgettable experiences and exceed fan expectations. Successful app modernization requires teamwork, which is why we’re proud to share our latest customer story featuring our partnership with the NBA.

Inside the customer playbook: NBA’s IT Application Development Group

Our latest customer story takes you into the world of the NBA’s IT Application Development Group, a dedicated team responsible for developing and maintaining the NBA's applications for internal and external users. The NBA leveraged Microsoft Azure application platform services for app modernization to accelerate the time to market of apps for multiple use cases that have elevated the NBA experience wherever fans, referees, and employees engage.

This process involved consolidating the apps and data the NBA was running from multiple locations into one place, including those that were on-premises. Modernizing a large app estate requires the NBA’s IT Application Development Group to plan for many tasks, from configuration and security to provisioning, scaling, and optimizing networking and storage. Utilizing cloud technologies such as Azure App Service enabled the NBA to accelerate time to market by offloading these routine but important tasks to a fully managed application platform. They further streamlined the app development process with low-code and no-code capabilities using Azure and Power Apps.

How did this translate for fans, referees, and employees? Here’s a sneak peek of the use cases that you can read in detail in our customer story:

Fans: See how the NBA used virtual simulations and digital in-game experiences to ensure fans felt connected to the game (and one another) when gathering in person was still difficult during the COVID-19 pandemic.

Referees (but really, fans!): Learn about REPS (Referee Engagement and Performance System), an app designed to aid referees and management in evaluation, collaboration, training, and development to ensure game consistency—and no bad calls.

Employees: Discover NBAOne, an internal mobile-first app the NBA created for its 1,800 employees, consolidating more than 50 different applications into a single sign-on experience. This simple-to-use app helped employees do everything from booking game tickets to marking time off, significantly improving their day-to-day employee experience.

Achieving a faster time to market

When it comes to delivering new experiences, we know that faster time to market is what keeps customers coming back. Azure brings not only the technology but also a number of fully managed services to support faster app and data modernization at scale:

Leverage fully managed application and data services such as Azure App Service, Azure Spring Apps, Azure SQL Database Hyperscale, and Azure Cosmos DB.
Quickly deploy line of business apps with low-code application development using Power Apps and Azure.
Build on containers with Azure Kubernetes Service (AKS).
Manage continuous deployment and development workstreams with Azure DevOps.
Get unmatched technical expertise through Microsoft Unified Support.

As a versatile platform with global scale, built-in security, and high availability, Azure is the all-star in your playbook to accelerate time-to-market with modern apps.

Choose your modern apps transformation strategy

Every customer is a potential fan, and when it comes to choosing the right technology partner, accelerating time to market, enabling higher productivity, and scaling globally are factors that deliver memorable customer experiences time and time again. We’re thrilled to have the NBA partner with Azure on this important mission and love the opportunity to share this customer story.

Is your organization exploring app modernization? Learn more about Application and data modernization and how Azure can help you accelerate time to market to deliver incredible experiences.
Source: Azure

AI and the need for purpose-built cloud infrastructure

The progress of AI has been astounding, with solutions pushing the envelope by augmenting human understanding, preferences, intent, and even spoken language. AI is improving our knowledge and understanding by helping us provide faster, more insightful solutions that fuel transformation beyond our imagination. However, with this rapid growth and transformation, AI’s demand for compute power has grown by leaps and bounds, outpacing Moore’s Law’s ability to keep up. With AI powering a wide array of important applications that include natural language processing, robot-powered process automation, and machine learning and deep learning, AI silicon manufacturers are finding new, innovative ways to get more out of each piece of silicon, such as integrating advanced mixed-precision capabilities, to enable AI innovators to do more with less. At Microsoft, our mission is to empower every person and every organization on the planet to achieve more, and with Azure’s purpose-built AI infrastructure we intend to deliver on that promise.

Azure high-performance computing provides scalable solutions

The need for purpose-built infrastructure for AI is evident—one that can not only scale up to take advantage of multiple accelerators within a single server but also scale out to combine many servers (with multi-accelerators) distributed across a high-performance network. High-performance computing (HPC) technologies, including innovations in hardware, software, and communications, along with the modernization and acceleration of applications by exposing parallelism, have significantly advanced multi-disciplinary science and engineering simulations; these same advances now underpin AI infrastructure. Scale-up AI computing infrastructure combines memory from individual graphics processing units (GPUs) into a large, shared pool to tackle larger and more complex models. When combined with the incredible vector-processing capabilities of the GPUs, high-speed memory pools have proven to be extremely effective at processing large multidimensional arrays of data to enhance insights and accelerate innovations.

With the added capability of a high-bandwidth, low-latency interconnect fabric, scale-out AI-first infrastructure can significantly accelerate time to solution via advanced parallel communication methods, interleaving computation and communication across a vast number of compute nodes. Azure scale-up-and-scale-out AI-first infrastructure combines the attributes of both vertical and horizontal system scaling to address the most demanding AI workloads. Azure’s AI-first infrastructure delivers leadership-class price, compute, and energy-efficient performance today.

Cloud infrastructure purpose-built for AI

Microsoft Azure, in partnership with NVIDIA, delivers purpose-built AI supercomputers in the cloud to meet the most demanding real-world workloads at scale while meeting price/performance and time-to-solution requirements. And with available advanced machine learning tools, you can accelerate incorporating AI into your workloads to drive smarter simulations and accelerate intelligent decision-making.

Microsoft Azure is the only global public cloud service provider that offers purpose-built AI supercomputers with massively scalable scale-up-and-scale-out IT infrastructure comprised of NVIDIA InfiniBand-interconnected NVIDIA Ampere A100 Tensor Core GPUs. Optionally, Azure Machine Learning tools facilitate the uptake of Azure’s AI-first infrastructure—from early development stages through enterprise-grade production deployments.

Scale-up-and-scale-out infrastructures powered by NVIDIA GPUs and NVIDIA Quantum InfiniBand networking rank amongst the most powerful supercomputers on the planet. Microsoft Azure placed in the top 15 of the Top500 supercomputers worldwide, and currently five systems in the top 50 use Azure infrastructure with NVIDIA A100 Tensor Core GPUs. Twelve of the top twenty ranked supercomputers in the Green500 list use NVIDIA A100 Tensor Core GPUs.

Source: Top 500 The List: Top500 November 2022, Green500 November 2022.

With a total solution approach that combines the latest GPU architectures, designed for the most compute-intensive AI training and inference workloads, and optimized software to leverage the power of the GPUs, Azure is paving the way to beyond exascale AI supercomputing. And this supercomputer-class AI infrastructure is made broadly accessible to researchers and developers in organizations of any size around the world in support of Microsoft’s stated mission. Organizations that need to augment their existing on-premises HPC or AI infrastructure can take advantage of Azure’s dynamically scalable cloud infrastructure.

In fact, Microsoft Azure works closely with customers across industry segments. Their increasing need for AI technology, research, and applications is fulfilled, augmented, and/or accelerated with Azure’s AI-first infrastructure. Some of these collaborations and applications are explained below:

Retail and AI

AI-first cloud infrastructure and toolchain from Microsoft Azure featuring NVIDIA are having a significant impact in retail. With a GPU-accelerated computing platform, customers can churn through models quickly and determine the best-performing model. Benefits include:

Deliver 50x performance improvements for classical data analytics and machine learning (ML) processes at scale with AI-first cloud infrastructure.
Accelerate the training of machine learning algorithms up to 20x by leveraging RAPIDS with NVIDIA GPUs. This means retailers can use larger data sets and process them faster with more accuracy, allowing them to react in real time to shopping trends and realize inventory cost savings at scale.
Reduce the total cost of ownership (TCO) for large data science operations.
Increase ROI for forecasting, resulting in cost savings from reduced out-of-stock and poorly placed inventory.

With autonomous checkout, retailers can provide customers with frictionless and faster shopping experiences while increasing revenue and margins. Benefits include:

Deliver better and faster customer checkout experience and reduce queue wait time.
Increase revenue and margins.
Reduce shrinkage—the loss of inventory due to theft such as shoplifting or ticket switching at self-checkout lanes, which costs retailers $62 billion annually, according to the National Retail Federation.

In both cases, these data-driven solutions require sophisticated deep learning models—models that are much more sophisticated than those offered by machine learning alone. In turn, this level of sophistication requires AI-first infrastructure and an optimized AI toolchain.

Customer story (video): Everseen and NVIDIA create a seamless shopping experience that benefits the bottom line.

Manufacturing

In manufacturing, compared to routine-based or time-based preventative maintenance, proactive predictive maintenance can get ahead of the problem before it happens and save businesses from costly downtime. Benefits of Azure and NVIDIA cloud infrastructure purpose-built for AI include:

GPU-accelerated compute enables AI at an industrial scale, taking advantage of unprecedented amounts of sensor and operational data to optimize operations, improve time-to-insight, and reduce costs.
Process more data faster with higher accuracy, allowing faster reaction time to potential equipment failures before they even happen.
Achieve a 50 percent reduction in false positives and a 300 percent reduction in false negatives.

Traditional computer vision methods that are typically used in automated optical inspection (AOI) machines in production environments require intensive human and capital investment. Benefits of GPU-accelerated infrastructure include:

Consistent performance with guaranteed quality of service, whether on-premises or in the cloud.
GPU-accelerated compute enables AI at an industrial scale, taking advantage of unprecedented amounts of sensor and operational data to optimize operations, improve quality and time to insight, and reduce costs.
Leveraging RAPIDS with NVIDIA GPUs, manufacturers can accelerate the training of their machine-learning algorithms up to 20x.

Each of these examples requires an AI-first infrastructure and toolchain to significantly reduce false positives and negatives in predictive maintenance and to account for subtle nuances in ensuring overall product quality.

Customer story (video): Microsoft Azure and NVIDIA give BMW the computing power for automated quality control.

As we have seen, AI is everywhere, and its application is growing rapidly. The reason is simple. AI enables organizations of any size to gain greater insights and apply those insights to accelerating innovations and business results. Optimized AI-first infrastructure is critical in the development and deployment of AI applications.

Azure is the only cloud service provider that has a purpose-built, AI-optimized infrastructure composed of NVIDIA InfiniBand-interconnected NVIDIA Ampere A100 Tensor Core GPUs for AI applications of any scale for organizations of any size. Azure’s purpose-built AI-first infrastructure empowers every person and every organization on the planet to achieve more. Come and do more with Azure!

Learn more about purpose-built infrastructure for AI

Watch the Understanding AI and AI Infrastructure webcast.
Read the An AI-First Infrastructure and Toolchain for Any Scale whitepaper.
Read the Accelerating AI and HPC in the Cloud whitepaper.
Learn more about Azure HPC + AI.
Keep up to date on the Azure + NVIDIA partnership and offerings. 

Source: Azure

Announcing new capabilities for Azure Firewall

We are happy to share several key Azure Firewall capabilities that are now generally available as well as updates on recent important releases into general availability (GA) and preview.

New GA regions in Qatar Central, China East, and China North
IDPS Private IP ranges now generally available
Single Click Upgrade/Downgrade now in preview
Enhanced Threat Intelligence now in preview
Key Vault with zero internet exposure now in preview

Azure Firewall is a cloud-native firewall as a service offering that enables customers to centrally govern and log all their traffic flows using a DevOps approach. The service supports both application and network-level filtering rules and is integrated with the Microsoft Threat Intelligence feed to filter known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto-scaling.

New GA regions in Qatar central, China East, and China North

We are happy to announce that Azure Firewall Standard, Azure Firewall Premium, and Azure Firewall Manager are now generally available in three new regions: Qatar Central, China East, and China North.

With these three new regions, Azure Firewall is now available in 38 regions worldwide!

IDPS Private IP ranges now GA

A network intrusion detection and prevention system (IDPS) allows you to monitor network activities for malicious activity, log information about this activity, report it, and optionally attempt to block it.

In Azure Firewall Premium IDPS, private IP address ranges are used to identify traffic direction (inbound, outbound, or internal) to allow accurate matches with IDPS signatures. By default, only ranges defined by the Internet Assigned Numbers Authority (IANA) in RFC 1918 are considered private IP addresses. To modify your private IP address ranges, you can now easily edit, remove, or add ranges as needed.
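Conceptually, the default classification boils down to a membership test against the three RFC 1918 blocks. The sketch below illustrates that logic in Python using the standard `ipaddress` module; the function names are invented for illustration and this is not Azure Firewall’s implementation:

```python
import ipaddress

# The three private IPv4 ranges defined by RFC 1918, which are treated
# as "private" by default when classifying traffic direction.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private(ip: str, ranges=RFC1918) -> bool:
    # True if the address falls inside any configured private range.
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ranges)

def traffic_direction(src: str, dst: str, ranges=RFC1918) -> str:
    # Direction is inferred from which endpoints sit in the private ranges.
    src_priv, dst_priv = is_private(src, ranges), is_private(dst, ranges)
    if src_priv and dst_priv:
        return "internal"
    return "outbound" if src_priv else "inbound"
```

Customizing the ranges, as the new capability allows, would then amount to passing a different `ranges` list (for example, adding a corporate WAN block that is not part of RFC 1918).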

Single Click Upgrade/Downgrade (preview)

With this new capability, customers can easily upgrade their existing Firewall Standard SKU to the Premium SKU, as well as downgrade from Premium to Standard. The process is fully automated and has zero service downtime.

In the upgrade process, users can select the policy to be attached to the upgraded Premium SKU: either an existing Premium policy or their existing Standard policy. Customers can utilize their existing Standard policy and let the system automatically duplicate it, upgrade it to a Premium policy, and attach it to the newly created Premium Firewall.

This new capability is available through the Azure portal, as well as via PowerShell and Terraform.

Enhanced Threat Intelligence (preview)

Threat intelligence is information an organization uses to understand the threats that have targeted, are currently targeting, or will target the organization. This information is used to prepare for, prevent, and identify cyber threats looking to take advantage of valuable resources. Azure Firewall threat intelligence information is sourced from the Microsoft Threat Intelligence feed, which draws on multiple sources, including the Microsoft Cyber Security team.

Threat Intelligence-based filtering can be enabled for your firewall to alert on and deny traffic to or from known malicious IP addresses and FQDNs. With the new enhancement, Azure Firewall Threat Intelligence offers finer granularity by filtering on malicious URLs. This means that customers may be allowed access to a given domain overall, while a specific URL within that domain is denied by Azure Firewall if it is identified as malicious.

For optimal granularity, customers can use the Threat Intelligence allow list to bypass threat intelligence validation on trusted FQDNs, IP addresses, ranges, and subnets.
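A hedged Terraform sketch of this configuration using the azurerm provider might look as follows (resource names and the allow-listed addresses are placeholders):

```hcl
resource "azurerm_firewall_policy" "example" {
  name                = "fw-policy"  # placeholder name
  resource_group_name = "example-rg" # placeholder resource group
  location            = "westeurope"

  # Alert on and deny traffic matching known malicious indicators
  threat_intelligence_mode = "Deny"

  threat_intelligence_allowlist {
    ip_addresses = ["10.1.2.3"]            # trusted addresses that bypass validation
    fqdns        = ["trusted.example.com"] # trusted FQDNs that bypass validation
  }
}
```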

In HTTPS, the URL is encrypted, so customers can use Azure Firewall Premium TLS inspection to apply URL-based Threat Intelligence to their encrypted traffic as well.

With Azure Firewall IDPS, Threat Intelligence, and TLS inspection, customers can improve their security posture to become better protected against future threats.

Key Vault with zero internet exposure (preview)

In Azure Firewall Premium TLS inspection, customers are required to deploy their intermediate CA certificate in Azure Key Vault. Now that Azure Firewall is listed as a trusted Azure Key Vault service, customers can eliminate any internet exposure of their Azure Key Vault.
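Referencing the intermediate CA certificate from Key Vault in the firewall policy can be sketched in Terraform as follows (a minimal illustration with the azurerm provider; the policy name and the referenced Key Vault certificate resource are placeholders):

```hcl
resource "azurerm_firewall_policy" "premium" {
  name                = "fw-policy-premium" # placeholder name
  resource_group_name = "example-rg"        # placeholder resource group
  location            = "westeurope"
  sku                 = "Premium"           # TLS inspection requires the Premium SKU

  tls_certificate {
    name = "intermediate-ca"
    # Key Vault secret holding the intermediate CA certificate
    key_vault_secret_id = azurerm_key_vault_certificate.ca.secret_id # placeholder reference
  }
}
```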

At Microsoft, we are constantly evolving Azure Firewall to meet our customers’ needs and help them strengthen their security and gain efficiencies. Last month, we announced the preview of Policy Analytics for Azure Firewall, which helps improve your security posture by providing critical insights and recommendations for optimizing firewall rules. We also recently announced the preview of Azure Firewall Basic, a new SKU of Azure Firewall designed to meet the needs of SMBs by providing enterprise-grade protection of their cloud environment at an affordable price point. We plan to share further enhancements to Azure Firewall very soon, including new troubleshooting capabilities. Please stay tuned!

Learn more

Get started with Azure Firewall.
Azure Firewall Documentation.
Azure Firewall Preview Features.
Azure Firewall Premium.
Azure network security resources.

Source: Azure

Amazon ElastiCache now supports Internet Protocol version 6 (IPv6)

Amazon ElastiCache clusters now support the IPv6 protocol, allowing clients to connect to ElastiCache clusters over IPv6. You can now configure your cluster to accept either IPv6 connections only, or both IPv4 and IPv6 connections. This helps you meet IPv6 compliance requirements and integrate more efficiently with existing IPv6-based applications.
Source: aws.amazon.com

Announcing the general availability of Amazon Redshift Serverless in the AWS Regions US West (Northern California) and Europe (Paris)

Amazon Redshift Serverless, which lets you run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the additional AWS Regions US West (Northern California) and Europe (Paris). With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to gain insights from data in seconds. Amazon Redshift Serverless automatically provisions data warehouse capacity and intelligently scales it to deliver first-class performance for all your analytics. You pay only for the compute used for the duration of your workloads, billed per second. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.
Source: aws.amazon.com

Amazon SageMaker Canvas announces support for correlation matrices for advanced data analysis

Amazon SageMaker Canvas now supports correlation matrices for advanced data analysis, expanding its capabilities for gaining insights from data before building ML models. Amazon SageMaker Canvas is a visual point-and-click interface that enables business analysts to generate accurate ML predictions on their own, without any machine learning experience and without writing a single line of code.
Source: aws.amazon.com

AWS Private 5G service now supports multiple radio units

AWS Private 5G is a managed service that simplifies the deployment, operation, and scaling of your own private mobile network, with all required hardware and software provided by AWS. To set up a private mobile network and connect devices, AWS delivers and maintains the following required components: a small-cell radio unit, subscriber identity modules (SIM cards), and mobile network software that runs in the AWS Cloud. Today, we are excited to announce that you can now scale your network at any time by ordering additional small-cell radio units to extend coverage, or additional SIM cards to connect more devices.
Source: aws.amazon.com