Defend against DDoS attacks with Azure DDoS IP Protection

Distributed denial of service (DDoS) attacks continue to rise as new threats and attack techniques emerge. With DDoS attacks becoming more frequent, it’s important for organizations of all sizes to be proactive and stay protected year round. Small and medium businesses (SMBs) face the same risks as larger organizations but are often more vulnerable because they lack the resources and specialized expertise.

We are committed to providing security solutions to all our customers. We are announcing the general availability of Azure DDoS IP Protection, a new SKU of Azure DDoS Protection designed to meet the needs of SMBs.

Enterprise-grade DDoS protection at an affordable price point

Azure DDoS IP Protection provides enterprise-grade DDoS protection at an affordable price point. It offers the same essential capabilities as Azure DDoS Network Protection (previously known as Azure DDoS Protection Standard) to protect your resources and applications against evolving DDoS attacks. Customers also have the flexibility to enable protection on individual public IP addresses.

“DDoS protection is a must have today for critical websites. Azure DDoS Protection provides comprehensive protection though the existing DDoS Network Protection SKU did not fit the price point for smaller organizations. We are happy that the DDoS IP Protection SKU provides the same level of protection as the Network Protection SKU at an affordable price point and the flexibility to protect individual public IPs.”—Derk van der Woude, CTO, Nedscaper.

“We are excited that the DDoS IP Protection SKU provides enterprise-grade, cost effective DDoS protection for customers with smaller cloud environments with only a few public IP endpoints in the cloud.”—Markus Lintuala, Senior Technical Consultant, Elisa.

Key features of Azure DDoS IP Protection

Massive mitigation capacity and scale: Defend your workloads against the largest and most sophisticated attacks with cloud-scale DDoS protection backed by Azure’s global network, which has mitigated the largest attacks reported in history and handles thousands of attacks daily.
Protection against attack vectors: DDoS IP Protection mitigates volumetric attacks that flood the network with a substantial amount of seemingly legitimate traffic, including UDP floods, amplification floods, and other spoofed-packet floods. It absorbs and scrubs these potential multi-gigabyte attacks automatically, using the scale of Azure's global network. It also protects against protocol attacks, which render a target inaccessible by exploiting a weakness in the layer 3 and layer 4 protocol stack; these include SYN flood attacks, reflection attacks, and other protocol attacks, which DDoS IP Protection mitigates by interacting with the client to differentiate between malicious and legitimate traffic and blocking the malicious traffic. Resource (application) layer attacks target web applications and include HTTP/S floods and low-and-slow attacks; use Azure Web Application Firewall to defend against these attacks.
Native integration with the Azure portal: DDoS IP Protection is natively integrated into the Azure portal for easy setup and deployment. This level of integration enables DDoS IP Protection to automatically identify your Azure resources and their configuration.
Seamless protection: DDoS IP Protection seamlessly safeguards your resources. There’s no need to deploy anything in your Azure Virtual Network (VNet) or to change your current networking architecture; DDoS IP Protection is deployed as an overlay on top of your current networking services.
Adaptive tuning: Protect your apps and resources while minimizing false negatives with adaptive tuning matched to the scale and actual traffic patterns of your application. Applications running in Azure are inherently protected by the default infrastructure-level DDoS protection. However, that protection has a much higher threshold than most applications have the capacity to handle, so a traffic volume the Azure platform perceives as harmless can be devastating to the application that receives it. Adaptive tuning ensures your applications stay protected even when application-targeted attacks fall below the detection thresholds of the infrastructure-level protection offered to all Azure customers.
Attack analytics, metrics, and logging: Monitor DDoS attacks in near real time and respond quickly with visibility into the attack lifecycle, vectors, and mitigation. With DDoS IP Protection, customers can monitor when an attack is taking place, collect mitigation statistics, and view the detection thresholds assigned by the adaptive tuning engine to make sure they align with expected traffic baselines. Diagnostic logs offer a deep-dive view of attack insights, allowing customers to investigate attack vectors, traffic flows, and mitigations to support their DDoS response strategy.
Integration with Microsoft Sentinel and Microsoft Defender for Cloud: Strengthen your security posture with rich attack analytics and telemetry integrated with Microsoft Sentinel. We offer a Sentinel solution that includes comprehensive analytics and alert rules to support customers in their Security Orchestration, Automation, and Response (SOAR) strategy. Customers can also set up and view security alerts and recommendations provided by Defender for Cloud.
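To make the adaptive tuning idea concrete, here is a minimal sketch (not Azure's actual algorithm) of deriving a per-application detection threshold from observed traffic, so detection tracks the application's real baseline rather than the far higher infrastructure-level limits:

```python
from statistics import mean

def adaptive_threshold(baseline_pps, multiplier=3.0, floor_pps=5000):
    """Derive a detection threshold from a resource's observed traffic
    baseline (packets/sec). Multiplier and floor are illustrative values."""
    return max(floor_pps, multiplier * mean(baseline_pps))

def is_attack(current_pps, baseline_pps):
    """Flag traffic that exceeds the threshold tuned to this resource."""
    return current_pps > adaptive_threshold(baseline_pps)

# A small app averaging ~1,000 packets/sec gets a threshold far below
# what the platform-wide infrastructure protection would ever trigger on.
baseline = [900, 1100, 1000, 950, 1050]
print(adaptive_threshold(baseline))   # 5000 (the floor applies here)
print(is_attack(250_000, baseline))   # True
```

A flood that would look like background noise at the platform scale is still caught at the application's own scale.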

Choosing the right Azure DDoS protection SKU for your needs

Azure DDoS protection is available in two SKUs:

DDoS IP Protection is recommended for SMB customers with a few public IP resources who need a comprehensive, fully managed DDoS protection solution that is easy to deploy and monitor.
DDoS Network Protection is recommended for larger enterprises and organizations looking to protect their entire deployment that spans multiple virtual networks and includes many public IP addresses. It also offers additional features like cost protection, DDoS Rapid Response, and discounts on Azure Web Application Firewall.

For a detailed comparison between these two SKUs, see the Azure DDoS Protection documentation.

Get started

DDoS IP Protection can be enabled from the public IP address resource Overview blade.

The Protection status field in the Properties tab shows whether the resource is DDoS protected and which protection type is applied (Network or IP Protection).

For more information on DDoS IP Protection, see Azure DDoS IP Protection documentation.

Azure DDoS IP Protection pricing

With DDoS IP Protection, you only pay for the public IP resources protected. The cost is a fixed monthly amount for each public IP resource protected with no additional variable costs. For more details on pricing, visit the Azure DDoS Protection pricing page.
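Because billing is a flat monthly charge per protected public IP with no variable component, estimating cost is simple multiplication. The rate below is a placeholder, not the actual Azure price; check the pricing page for the real figure in your region and currency:

```python
def monthly_ddos_ip_cost(protected_ips, rate_per_ip=199.0):
    """Fixed monthly charge per protected public IP; no variable costs.
    rate_per_ip is a hypothetical placeholder, not the published price."""
    return protected_ips * rate_per_ip

print(monthly_ddos_ip_cost(3))  # 597.0 with the placeholder rate
```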

Next Steps

Azure portal
Configure DDoS telemetry
Configure DDoS diagnostic logging
Monitoring Azure DDoS Protection
Test with simulation partners

Source: Azure

Discover an Azure learning community with Microsoft Learn rooms

Microsoft Learn has many options for remote learning—and now we’re expanding our catalogue with even more. We’re excited to announce our newest offering to help connect you with our Azure learning community: Microsoft Learn rooms. Learning rooms are a core part of the Microsoft Learn community, and they’re designed to connect you with other learners and technical experts.

Whether you’re a tenured techie, looking to jumpstart your career, or beginning a new learning pathway, learning rooms open the door to a world of opportunities. The Learn community can help you grow your network, meet others in the field, explore topic-specific technologies in the real world, and sharpen your Microsoft Azure cloud skills. With learning rooms, you can join peers and experts on your pathway to skill up on Azure at your own pace in a safe and supportive environment, so you can strengthen your knowledge and propel your cloud computing career.

What are learning rooms?

Learning rooms are free and open to anyone seeking a connected, supportive, and engaging community learning experience. Designed for cohort learning and guided by a Learn expert through asynchronous conversations and office hours, many learning rooms focus on Microsoft Azure. They bring together individuals with a common learning interest—such as the Azure cloud—and unite them with experts in the community who support and guide learners on their journey and foster an engaging, supportive educational environment. They’re part of the Microsoft Learn Azure community, a broader space where learners from all over the world can engage directly with technology experts and others who share common Azure interests.

Learning rooms are also connected to Microsoft Tech Community, which is a network of resources that supports the Azure, Windows Server, and SQL Server interest groups. Within the Tech Community are smaller tech communities for specific topics, like Azure infrastructure, and you can visit these smaller forums and browse all learning rooms that connect to it. Once you’ve found a room that you like, joining is easy—you simply request access using a registration form and accept a learner agreement. From there, you’ll be able to bring all your Azure questions, at any time of the day, to your cohort who will guide you through them—giving you the classroom experience right at home. Discover more about learning rooms here.

What will I learn?

Learning rooms span several technology areas, including Microsoft Cloud and Azure subjects such as Azure Infrastructure, Data and AI, and Digital and Application Innovation, and their small size ensures that you get exactly the support you need. Each room is led by Microsoft Learn experts: validated technical subject matter experts, present throughout our community resources, with experience in technical skilling, community support, and deep knowledge of the room’s specific topic area. Not just anyone can be an expert—they’re proven community leaders selected by invitation only, such as Microsoft Most Valuable Professionals (MVPs), Microsoft Certified Trainers (MCTs), and Microsoft Technical Trainers (MTTs).

Besides offering one-on-one support, Microsoft Learn experts are also knowledgeable resources who can direct you to other programs that fit your skillset, such as our Microsoft Azure Connected Learning Experience (CLX), 30 Days to Learn It, and Azure Skills Navigator Guides. They can also give you studying tips, advise you on best learning practices, and they can invite other Microsoft leaders to a room to discuss even the most difficult technical topic areas on your learning path. Finally, experts can help you prep for many Microsoft Certification Exams, ensuring you get the knowledge you need to exceed your professional goals, wherever you are in your learning journey.

What happens in a room?

Once you’ve joined a room, you’ll be immersed in a lively discussion. You can post questions on complex or more straightforward topics, like how to sign up for a Virtual Training Day. If you’re preparing for a Microsoft Certification Exam, you can even use the room for study prep by crowdsourcing study guides and practice tests, learning about which questions are most likely to appear on your upcoming exam, or figuring out if you qualify for an exam discount. Whatever your need may be, you’ll get answers from a peer, pro, or invited Azure authority, giving you a myriad of thoughtful and diverse perspectives.

When you’re not posting something of your own, you can explore past threads from your peers to discover questions you may have never thought to ask. The most recent and popular threads appear at the top of the room to ensure you’re always in the loop. You can also vote on questions and answers, boosting the most helpful responses directly to the top of the forum. If you’re feeling like a pro yourself, you can even answer questions on your own—experts and other Azure authorities, recognizable by their blue and green nametag icons, will be there to validate your answers and support you every step of the way.

How do I sign up?

If you’re interested in joining a room, check out our Microsoft Learn community. Here, you can explore rooms like Azure infrastructure, data & AI, and digital & application innovation, and you can tap into other Microsoft Learn community resources.

Don’t forget to also explore our infrastructure skilling resources, read about the myriad of other Azure skilling content we’ve launched recently, sign up for a Virtual Training Day, or simply explore our Microsoft Learn Azure community for even more helpful resources.

If you’re interested in other Azure programs, explore our resources below:

Microsoft Azure CLX
Microsoft Cloud Skills Challenge: 30 Days to Learn It
Azure Skills Navigator Guides

With Microsoft Learn, the knowledge is out there—it’s up to you to harness it.
Source: Azure

Announcing Azure Firewall enhancements for troubleshooting network performance and traffic visibility

IT security administrators are often called on to troubleshoot network issues. For instance, a critical application may exhibit latency or disconnections, frustrating end users. These issues may be caused by a recent routing update or changes in security. In some cases, the cause may be due to a sudden burst in network traffic—overwhelming the network resources.

Microsoft Azure Firewall now offers new logging and metric enhancements designed to increase visibility and provide more insights into traffic processed by the firewall. IT security administrators may use a combination of the following to root cause application performance issues:

Latency Probe metric is now in preview.
Flow Trace Log is now in preview.
Fat Flows Log is now in preview.

Azure Firewall is a cloud-native firewall as a service offering that enables customers to centrally govern and log all their traffic flows using a DevOps approach. The service supports both application and network-level filtering rules and is integrated with the Microsoft Defender Threat Intelligence feed to filter known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto-scaling.

Latency Probe metric—now in preview

In a network infrastructure, latency can increase for various reasons. The ability to monitor the latency of the firewall is essential for proactively addressing any potential issues with traffic or services in the infrastructure.

The Latency Probe metric is designed to measure the overall latency of Azure Firewall and provide insight into the health of the service. IT administrators can use the metric for monitoring and alerting if there is observable latency and diagnosing if the Azure Firewall is the cause of latency in a network.

If Azure Firewall is experiencing latency, it can be due to various factors, such as high CPU utilization, high traffic throughput, or networking issues. Note that this tool is powered by Pingmesh technology, which means the metric measures the average latency of the firewall itself; it does not measure end-to-end latency or the latency of individual packets.
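As an illustration of how such a metric might feed monitoring and alerting, the sketch below (hypothetical thresholds, not an Azure API) flags when the measured average latency drifts well above a healthy baseline:

```python
def latency_alert(samples_ms, baseline_ms, tolerance=2.0):
    """Alert when the average measured latency exceeds `tolerance`
    times the healthy baseline for this firewall."""
    avg = sum(samples_ms) / len(samples_ms)
    return avg > tolerance * baseline_ms, avg

# Healthy: samples hover around the 1 ms baseline.
alert, avg = latency_alert([1.1, 0.9, 1.0, 1.2], baseline_ms=1.0)
print(alert)  # False

# Degraded: a sustained ~5 ms average is well past 2x the baseline.
alert, avg = latency_alert([4.8, 5.3, 5.1], baseline_ms=1.0)
print(alert)  # True
```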

Figure 1: Dashboard view of healthy firewall latency measured by the Latency Probe (Preview) metric.

Flow Trace logs—now in preview

Azure Firewall logging provides logs for various traffic types, such as network, application, and threat intelligence traffic. Today, these logs record traffic through the firewall only on the first attempt at a Transmission Control Protocol (TCP) connection, also known as the SYN packet, so they do not show the full journey of the packet through the TCP handshake. The ability to monitor and track every packet through the firewall is paramount for identifying packet drops or asymmetric routes.

To dive further into an asymmetric routing example: Azure Firewall, as a stateful firewall, maintains connection state and automatically and dynamically allows return traffic back through the firewall. However, asymmetric routing occurs when a packet takes one path to the destination through the firewall but a different path when returning to the source. This can be caused by user misconfiguration, such as adding an unnecessary route in the path of the firewall.

As a result, one can verify if a packet has successfully flowed through the firewall or if there is asymmetric routing by viewing the additional TCP handshake logs in Flow Trace.

To do so, you can monitor the network logs to view the first SYN packet, then click "enable Flow Trace" to see the additional flags for verification:

SYN-ACK
FIN
FIN-ACK
RST
INVALID

By adding these additional flags in Flow Trace logs, IT administrators can now see the return packet, if there was a failed connection, or an unrecognized packet. To enable these logs, please read the documentation linked below.
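The kind of analysis these extra flags enable can be sketched as follows. The log format here is simplified and hypothetical, but the idea carries over: a SYN with no corresponding SYN-ACK seen through the firewall suggests a failed connection or an asymmetric return route:

```python
# Each log entry: (flow_id, tcp_flag), a simplified stand-in for a
# Flow Trace record. A healthy connection shows a SYN followed by a
# SYN-ACK returning through the same firewall.
def find_flows_without_synack(entries):
    """Return flow IDs whose SYN was logged but whose SYN-ACK never
    came back through the firewall (drop or asymmetric route)."""
    syns = {fid for fid, flag in entries if flag == "SYN"}
    syn_acks = {fid for fid, flag in entries if flag == "SYN-ACK"}
    return sorted(syns - syn_acks)

log = [
    ("flow-1", "SYN"), ("flow-1", "SYN-ACK"), ("flow-1", "FIN"),
    ("flow-2", "SYN"),                     # return path bypassed the firewall
    ("flow-3", "SYN"), ("flow-3", "RST"),  # peer reset the connection
]
print(find_flows_without_synack(log))  # ['flow-2', 'flow-3']
```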

Figure 2: Flow Trace logs displaying SYN-ACK and FIN packets.

Top Flows—now in preview

Today, Azure Firewall Standard can process up to 30 Gbps of traffic and Azure Firewall Premium up to 100 Gbps. However, some traffic flows can be unintentionally or intentionally "heavy," depending on packet size, flow duration, and other factors. Because these flows can impact other flows and the firewall's processing, it's important to monitor them to ensure the firewall performs optimally.

The Top Flows log, known in the industry as the Fat Flows log, shows the top connections contributing the highest bandwidth through the firewall in a given time frame.

This visibility provides the following benefits for IT administrators:

Identifying the top traffic flows traversing the firewall.
Identifying any unexpected or anomalous traffic.
Deciding which traffic should be allowed or denied, based on results and goals.
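Conceptually, Top Flows is a bandwidth aggregation like the sketch below (simplified flow keys and fabricated byte counts, not the actual log schema):

```python
from collections import Counter

def top_flows(records, n=3):
    """Aggregate bytes per flow key and return the n heaviest flows."""
    totals = Counter()
    for flow_key, num_bytes in records:
        totals[flow_key] += num_bytes
    return totals.most_common(n)

records = [
    ("10.0.0.4->203.0.113.7:443/TCP", 9_000_000),
    ("10.0.0.5->198.51.100.2:53/UDP", 12_000),
    ("10.0.0.4->203.0.113.7:443/TCP", 11_000_000),
    ("10.0.0.6->192.0.2.9:443/TCP", 4_500_000),
]
print(top_flows(records, n=2))
# [('10.0.0.4->203.0.113.7:443/TCP', 20000000),
#  ('10.0.0.6->192.0.2.9:443/TCP', 4500000)]
```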

To enable these logs, please read the documentation linked below.

Figure 3: Top Flow logs displaying traffic with the top flow rates.

Next steps

For more information on Azure Firewall and everything we covered in this blog post, see the following resources:

Azure Firewall documentation
Azure Firewall Manager documentation
Deploy and configure Azure Firewall logs and metrics
Enable Flow Trace and Top Flows Logs Tutorial
Source: Azure

The Net Zero journey: Why digital twins are a powerful ally

Azure Digital Twins leverages IoT for powerful modeling that can ease the transition to greater sustainability.

Climate impacts raise stakes for Net Zero transition

Following weeks of vital discussions at COP27 in Egypt, the urgency to bring the world to a more sustainable path has never been greater. Scientists have warned that the world needs to cut global emissions by 5 percent to 7 percent per year to limit the damage caused by climate change. At present, however, emissions are rising by 1 percent to 2 percent per year. Discovering new routes to a Net Zero economy is critical if we are to limit the economic and social damage of a rapidly changing climate. And that means we all have a part to play in ensuring we strike the optimal balance between greenhouse gas production and the amount of greenhouse gas that gets removed from the atmosphere.

A Microsoft and PwC blueprint for the transition to Net Zero highlights the importance of innovation and the harnessing of new technologies that enable organizations to deliver on their Net Zero ambitions, at pace. A key innovation that aims to accelerate organizations’ journey to Net Zero is digital twin technology supported by AI infrastructure capabilities. A digital twin can be considered a virtual working representation of assets, products, and production plants. Powered by Microsoft Azure AI-optimized infrastructure that leverages NVIDIA accelerated computing and networking technologies, digital twins allow organizations to visualize, simulate, and predict operations, whether at a manufacturing plant, a wind farm, a mining operation, or any other type of operation.

Digital twin technology offers early adopters the potential for accelerated and differentiated business value. Innovative companies can leverage this potent toolset to accelerate their innovation journeys and drive strategic business outcomes at scale. A recent study by Microsoft and Intel found that, globally, only 28 percent of manufacturers have started rolling out a digital twin solution, and of those, only one in seven have fully deployed it at their manufacturing plants. The study also found that when digital twins are used effectively, they can deliver major efficiency, optimization, and cost-saving gains while unlocking mission-critical insights that drive innovation and improve decision-making.

Maximizing wind energy production with digital twins

Digital twins have emerged as a powerful tool for renewable energy producers seeking optimization gains in their production processes too. Take South Korea's Doosan Heavy Industries & Construction as an example. As a leader in engineering, procurement, heavy manufacturing, power generation and desalination services, Doosan Heavy Industries & Construction was appointed by the South Korean government to help it meet the goals of its Green New Deal plan, which includes a target of generating 20 percent of the country's electricity needs through renewables by 2030.

Seeking improvements in the efficiency of its wind turbines, Doosan Heavy Industries & Construction partnered with Microsoft and Bentley Systems to develop a digital twin of its wind farms that helps it maximize energy production and reduce maintenance costs. The company currently has 16 South Korean wind farms in operation, which generate enough electricity to power as many as 35,000 homes per year. Its innovative digital controls and operations enable Doosan to remotely monitor wind farm operations, predict maintenance needs before failures occur, and limit the need for maintenance teams to physically inspect the turbines.

Leveraging Azure Digital Twins and Azure IoT Hub powered by NVIDIA-accelerated Azure AI Infrastructure capabilities, Doosan can simulate, visualize, and optimize every aspect of its infrastructure planning, deployment, and ongoing monitoring. This has led to greater energy efficiency, boosted employee safety, and improved asset resilience. And with Bentley seeing their Azure-powered digital twin technology reduce operational and maintenance costs by 15 percent at other facilities, Doosan is well-positioned to continue benefiting from their digital twin solution and unlocking new efficiency gains by leveraging the power of cloud-based AI infrastructure capabilities.

Leveraging digital twins to power Net Zero transition

In the oil and gas sector, digital twin technology is helping one of the world's leading carbon-emitting industries to identify opportunities for optimization and carbon reduction. A noteworthy showcase comes from Tata Consultancy Services, which delivered a Clever Energy solution to a global consumer goods giant. Using digital twins, real-time data, and cognitive intelligence to improve energy savings at the customer's production plants, the solution helped reduce energy use by up to 15 percent, with an equivalent reduction in CO2 emissions. Considering that buildings consume nearly 40 percent of the world's energy and emit one third of greenhouse gases, the solution also helps the customer alleviate some of the pressure of significant energy cost increases in Europe.

In another example, a large multinational supplier that aims to achieve Net Zero carbon status by no later than 2050 is today leveraging the power of digital twins to support its sustainability goals.

From the vast global network of complex assets this company manages, a digital twin of one of its facilities was developed to calculate real-time carbon intensity and energy efficiency. Microsoft Azure provided the platform: Azure IoT Hub receives more than 250 billion data signals per month from the company's global operating assets, AI provides key insights into how the company could become a safer and more efficient business, and Azure AI Infrastructure and high-performance computing enable seamless processing of huge volumes of data.
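The two metrics named here reduce to simple ratios. A minimal sketch with illustrative numbers (not the company's actual model or data):

```python
def carbon_intensity(co2_kg, energy_mwh):
    """Carbon intensity: kg of CO2 emitted per MWh of energy produced."""
    return co2_kg / energy_mwh

def energy_efficiency(useful_output_mwh, energy_input_mwh):
    """Fraction of input energy converted to useful output."""
    return useful_output_mwh / energy_input_mwh

print(carbon_intensity(co2_kg=120_000, energy_mwh=300))  # 400.0 kg CO2/MWh
print(energy_efficiency(useful_output_mwh=240, energy_input_mwh=300))  # 0.8
```

The value of the digital twin lies in computing these ratios continuously from live telemetry rather than from periodic reports.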

With long-term plans in place to scale the digital twin solution to all of the company’s global facilities, Microsoft Azure's security, scalability, and powerful high-performance computing capabilities will be key supporting factors in how successfully the company transitions to more carbon-aware operations.

Powering the next era of industrial digitalization

At NVIDIA GTC, a global AI conference, NVIDIA and Microsoft announced a collaboration to connect the NVIDIA Omniverse platform for developing and operating industrial metaverse applications with Azure Cloud Services. Enterprises of every scale will soon be able to use the Omniverse Cloud platform-as-a-service on Microsoft Azure to fast-track development and deployment of physically accurate, connected, secure, AI-enabled digital twin simulations.

Key takeaways about a Net Zero economy and digital twins

Shifting to a Net Zero economy is one of the defining challenges of our time. As the devastating impact of climate change continues to disrupt global economies, businesses will need novel ways of reducing their carbon footprint and help bring the world to a more sustainable path.

Considering the vast complexity of modern businesses—especially resource-intensive industries such as oil and gas, and manufacturing—finding ways to optimize processes, reduce waste, and accelerate time to value can be extremely cumbersome unless novel technology solutions are found to help provide differentiated strategic capabilities.

Digital twin technology offers organizations a powerful option to run detailed simulations generating vast amounts of data. By integrating that data with the power and scalability of Azure high-performance computing (HPC) and leveraging the visualization power of NVIDIA's GPU-accelerated computing capabilities, organizations can discover new opportunities for greater efficiency, optimization, and carbon-neutrality gains.

Read more about how companies are using IoT spatial intelligence to create detailed digital twins of physical assets by downloading the latest IoT Signals Report.

Learn more

To learn more about Azure HPC and AI, read about Azure HPC solutions at https://www.azure.com/hpc, or request a demo by contacting HPCdemo@microsoft.com.
Source: Azure

What’s new in Azure Data & AI: Azure is built for generative AI apps

OpenAI launched ChatGPT in December 2022, immediately inspiring people and companies to pioneer novel use cases for large language models. It’s no wonder that ChatGPT reached 1 million users within a week of launch and 100 million users within two months, making it the fastest-growing consumer application in history.1 It’s likely several use cases could transform industries across the globe.

As you may know, ChatGPT and similar generative AI capabilities found in Microsoft products like Microsoft 365, Microsoft Bing, and Microsoft Power Platform are powered by Azure. Now, with the recent addition of ChatGPT to Azure OpenAI Service as well as the preview of GPT-4, developers can build their own enterprise-grade conversational apps with state-of-the-art generative AI to solve pressing business problems in new ways. For example, The ODP Corporation is building a ChatGPT-powered chatbot to support internal processes and communications, while Icertis is building an intelligent assistant to unlock insights throughout the contract lifecycle for one of the largest curated repositories of contract data in the world. Public sector customers like Singapore's Smart Nation Digital Government Office are also looking to ChatGPT and large language models more generally to build better services for constituents and employees. You can read more about their use cases here.

Broadly speaking, generative AI represents a significant advancement in the field of AI and has the potential to revolutionize many aspects of our lives. This is not hype. These early customer examples demonstrate how much farther we can go to make information more accessible and relevant for people around the planet to save finite time and attention—all while using natural language. Forward-looking organizations are taking advantage of Azure OpenAI to understand and harness generative AI for real-world solutions today and in the future.

A question we often hear is, “How do I build something like ChatGPT that uses my own data as the basis for its responses?” Azure Cognitive Search and Azure OpenAI Service are a perfect pair for this scenario. Organizations can now combine the enterprise-grade characteristics of Azure, the ability of Cognitive Search to index, understand, and retrieve the right pieces of your own data across large knowledge bases, and ChatGPT’s impressive capability for interacting in natural language to answer questions or take turns in a conversation. Distinguished engineer Pablo Castro published a great walk-through of this approach on TechCommunity. We encourage you to take a look.
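The pattern is: retrieve the most relevant passages from your own data, then ground the model's prompt in them. The sketch below uses a toy keyword-overlap retriever in place of Azure Cognitive Search and stops short of the actual chat-completion call; the function names, documents, and prompt wording are all hypothetical:

```python
def retrieve(query, documents, k=2):
    """Toy keyword-overlap retriever standing in for Azure Cognitive Search."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Ground the chat model in retrieved sources so answers cite your data."""
    sources = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the sources below; say you don't know otherwise.\n"
        f"Sources:\n{sources}\n"
        f"Question: {query}"
    )

docs = [
    "Employees accrue 25 vacation days per year.",
    "The cafeteria opens at 8am on weekdays.",
    "Vacation days roll over for one year at most.",
]
prompt = build_prompt("How many vacation days do employees get?", docs)
# `prompt` would then be sent as the message to the chat model,
# which answers from the retrieved sources rather than from memory.
```

The real services add ranking quality, security trimming, and scale, but the retrieve-then-ground flow is the same.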

What if you’re ready to make AI real for your organization? Don’t miss these upcoming events:

Uncover Predictive Insights with Analytics and AI: Watch this webcast to learn how data, analytics, and machine learning can lay the foundation for a new wave of innovation. You’ll hear from leaders at Amadeus, a travel technology company, on why they chose the Microsoft Intelligent Data Platform, how they migrated to innovate, and their ongoing data-driven transformation. Register here.

HIMSS 2023: The Healthcare Information and Management Systems Society will host its annual conference in Chicago on April 17 to 21, 2023. The opening keynote on the topic of responsible AI will be presented by Microsoft Corporate Vice President, Peter Lee. Drop by the Microsoft booth (#1201) for product demos of AI, health information management, privacy and security, and supply chain management solutions. Register here.

Microsoft AI Webinar featuring Forrester Research: Join us for a conversation with guest speaker Mike Gualtieri, Vice President, Principal Analyst of Forrester Research on April 20, 2023, to learn about a variety of enterprise use cases for intelligent apps and ways to make AI actionable within your organization. This is a great event for business leaders and technologists looking to build machine learning and AI practices within their companies. Register here.

March 2023 was a banner month in terms of expanding the reasons why Azure is built for generative AI applications. These new capabilities highlight the critical interplay between data, AI, and infrastructure to increase developer productivity and optimize costs in the cloud.

Accelerate data migration and modernization with new support for MongoDB data in Azure Cosmos DB

At Azure Cosmos DB Conf 2023, we announced the public preview of Azure Cosmos DB for MongoDB vCore, providing a familiar architecture for MongoDB developers in a fully managed, integrated, native Azure service. Now, developers familiar with MongoDB can take advantage of the scalability and flexibility of Azure Cosmos DB for their workloads with two database architecture options: the vCore service for modernizing existing MongoDB workloads and the request unit-based service for cloud-native app development.

Startups and growing businesses build with Azure Cosmos DB to achieve predictable performance, pivot fast, and scale while keeping costs in check. For example, The Postage, a cloud-first startup recently featured in WIRED magazine, built their estate-planning platform using Azure Cosmos DB. Despite tall barriers to entry for regulated industries, the startup secured deals with financial services companies by leaning on the enterprise-grade security, stability, and data-handling capabilities of Microsoft. Similarly, analyst firm Enterprise Strategy Group (ESG) recently interviewed three cloud-first startups that chose Azure Cosmos DB to achieve cost-effective scale, high performance, security, and fast deployments. The startup founders highlighted serverless and auto-scale, free tiers, and flexible schema as features helping them do more with less. Any company looking to be more agile and get the most out of Azure Cosmos DB will find some good takeaways.

Save time and increase developer productivity with new Azure database capabilities

In March 2023, we announced Data API builder, enabling modern developers to create full-stack or backend solutions in a fraction of the time. Previously, developers had to manually build the backend APIs required to let applications access data in database objects like collections, tables, views, or stored procedures. Now, those objects can easily and automatically be exposed via a REST or GraphQL API, increasing developer velocity. Data API builder supports all Azure Database services.
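To make that concrete, here is a minimal sketch of how a client might address one of those auto-generated REST endpoints. The base URL, the `Book` entity, and the `id` key-column name are hypothetical; the `/api/<entity>` route and the OData-style `$select`/`$filter` options follow Data API builder's documented defaults, but treat the exact shapes as assumptions to verify against your own configuration.

```python
from urllib.parse import urlencode

def dab_rest_url(base, entity, key=None, select=None, flt=None):
    """Build a URL for a Data API builder REST endpoint.

    Assumes DAB's default /api prefix and a primary-key column named
    'id' (hypothetical here); $select and $filter are the OData-style
    options DAB uses to shape results.
    """
    url = f"{base}/api/{entity}"
    if key is not None:
        # Primary-key reads use /<entity>/<pk-name>/<pk-value>
        url = f"{url}/id/{key}"
    params = {}
    if select:
        params["$select"] = ",".join(select)
    if flt:
        params["$filter"] = flt
    return f"{url}?{urlencode(params)}" if params else url
```

For example, `dab_rest_url("https://localhost:5001", "Book", select=["title"])` yields a collection read that returns only the `title` column, with no hand-written controller code behind it.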

We also announced the Azure PostgreSQL migration extension for Azure Data Studio, powered by Azure Database Migration Service. It helps customers evaluate migration readiness to Azure Database for PostgreSQL-Flexible Server, identify the right-sized Azure target, calculate the total cost of ownership (TCO), and create a business case for migration from PostgreSQL. At Azure Open Source Day, we also shared new Microsoft Power Platform integrations that automate business processes more efficiently in Azure Database for MySQL as well as new observability and enterprise security features in Azure Database for PostgreSQL. You can register to watch Azure Open Source Day presentations on demand.

One recent “migrate to innovate” story I love comes from Peapod Digital Labs (PDL), the digital and commercial engine for the retail grocery group Ahold Delhaize USA. PDL is modernizing to become a cloud-first operation, with development, operations, and a collection of on-premises databases migrated to Azure Database for PostgreSQL. By moving away from a monolithic data setup towards a modular data and analytics architecture with the Microsoft Intelligent Data Platform, PDL developers are building and scaling solutions for in-store associates faster, resulting in fewer service errors and higher associate productivity.

Announcing a renaissance in computer vision AI with the Microsoft Florence foundation model

Earlier this month, we announced the public preview of the Microsoft Florence foundation model, now in preview in Azure Cognitive Service for Vision. With Florence, state-of-the-art computer vision capabilities translate visual data into insights for downstream applications. Capabilities such as automatic captioning, smart cropping, classifying, and searching for images can help organizations improve content discoverability, accessibility, and moderation. Reddit has added automatic captioning to every image. LinkedIn uses Vision Services to deliver automatic captioning and alt-text descriptions, enabling more people to access content and join the conversation. Because Microsoft Research trained Florence on billions of text-image pairs, developers can customize the model at high precision with just a handful of images.

Microsoft was recently named a Leader in the IDC MarketScape for Vision, even before the release of Florence. Our comprehensive Cognitive Services for Vision offer a collection of prebuilt and custom APIs for image and video analysis, text recognition, facial recognition, image captioning, model customization, and more that developers can easily integrate into their applications. These capabilities are useful across industries. For example, USA Surfing uses computer vision to improve the performance and safety of surfers by analyzing surfing videos to quantify and compare variables like speed, power, and flow. H&R Block uses computer vision to make data entry and retrieval more efficient, saving customers and employees valuable time. Uber uses computer vision to quickly verify drivers’ identities against photos on file to safeguard against fraud and provide drivers and riders with peace of mind. Now, Florence makes these vision capabilities even easier to deploy in apps, with no machine learning experience required.

Build and operationalize open-source large AI models in Azure Machine Learning

At Azure Open Source Day in March 2023, we announced the upcoming public preview of foundation models in Azure Machine Learning. Azure Machine Learning will offer native capabilities so customers can build and operationalize open-source foundation models at scale. With these new capabilities, organizations will get access to curated environments and Azure AI Infrastructure without having to manually manage and optimize dependencies. Azure Machine Learning professionals can easily start their data science tasks to fine-tune and deploy foundation models from multiple open-source repositories, including Hugging Face, using Azure Machine Learning components and pipelines. Watch the on-demand demo session from Azure Open Source Day to learn more and see the feature in action.

Microsoft AI at NVIDIA GTC 2023

In February 2023, I shared how Azure’s purpose-built AI infrastructure supports the successful deployment and scalability of AI systems for large models like ChatGPT. These systems require infrastructure that can rapidly expand with enough parallel processing power, low latency, and interconnected graphics processing units (GPUs) to train and inference complex AI models—something Microsoft has been working on for years. Microsoft and our partners continue to advance this infrastructure to keep up with increasing demand for exponentially more complex and larger models.

At NVIDIA GTC in March 2023, we announced the preview of the ND H100 v5 Series AI Optimized Virtual Machines (VMs) to power large AI workloads and high-performance compute GPUs. The ND H100 v5 is our most performant and purpose-built AI virtual machine yet, combining NVIDIA H100 GPUs with Mellanox InfiniBand networking for lightning-fast throughput. This means industries that rely on large AI models, such as healthcare, manufacturing, entertainment, and financial services, will have easy access to enough computing power to run large AI models and workloads without requiring the capital for massive physical hardware or software investments. We are excited to bring this capability to customers, along with access from Azure Machine Learning, over the coming weeks with general availability later this year.

Additionally, we are excited to announce Azure Confidential Virtual Machines for GPU workloads. These VMs offer hardware-based security enhancements to better protect GPU data-in-use. We are happy to bring this capability to the latest NVIDIA GPUs—Hopper. In healthcare, confidential computing is used in multi-party computing scenarios to accelerate the discovery of new therapies while protecting personal health information.2 In financial services and multi-bank environments, confidential computing is used to analyze financial transactions across multiple financial institutions to detect and prevent fraud. Azure confidential computing helps accelerate innovation while providing security, governance, and compliance safeguards to protect sensitive data and code, in use and in memory.

What’s next

The energy I feel at Microsoft and in conversations with customers and partners is simply electric. We all have huge opportunities ahead to help improve global productivity securely and responsibly, harnessing the power of data and AI for the benefit of all. I look forward to sharing more news and opportunities in April 2023.

1ChatGPT sets record for fastest-growing user base—analyst note, Reuters, February 2, 2023.

2Azure Confidential VMs are not designed, intended or made available as a medical device(s), and are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used to replace or as a substitute for professional medical advice, diagnosis, treatment, or judgment.
Source: Azure

Enhanced Azure Arc integration with Datadog simplifies hybrid and multicloud observability

Businesses today are managing complex, distributed environments and need a ubiquitous computing platform for all workloads that can meet them where they are. We’ve seen an increasing need for customers to not only deploy, manage, and operate across on-premises and one or more clouds, but also to have better visibility and insights across all IT investments spanning cloud to edge.

Today, we’re delivering improved observability and management with the general availability of our enhanced Microsoft Azure Arc integration with Datadog. Building on our established collaboration, we are natively integrating Datadog with Azure Arc to meet customers where they are and provide rich insights from Azure Arc–enabled resources directly into Datadog dashboards. Customers can monitor real-time data during cloud migrations and performance of applications running both in the public cloud and in hybrid or multicloud environments.

Benefits of Azure Arc integration with Datadog

With the Azure Arc integration with Datadog, customers can:

Monitor the connection status and agent version of Azure Arc–enabled servers, wherever they are running.
Automatically add Azure tags to associated hosts in Datadog for additional context.
Identify which Azure Arc–enabled servers have the Datadog Agent installed.
Deploy the Datadog Agent onto your Azure Arc–enabled servers as an extension.
Get unified billing for the Datadog service through Azure subscription invoicing.

Datadog is a cloud-scale monitoring and security platform for large-scale applications that aggregates data across your entire stack with more than 600 integrations for centralized visibility and faster troubleshooting on dynamic architectures. This gives developers and operations teams observability into every layer of their applications on Azure, so they can diagnose performance issues quickly.

When Datadog first became an Azure Native ISV Service, it allowed customers to streamline their experience for purchasing, configuring, and managing Datadog directly inside the Azure portal. It reduced the learning curve for using Datadog to monitor the health and performance of applications in Azure and set customers up for a successful cloud migration or modernization.

For many customers, hybrid deployments are a durable and long-term strategy due to factors such as latency and compliance requirements, and we are committed to meeting customers wherever they are. With Azure Arc, we provide a consistent set of tools and services for customers to extend cloud technology across their distributed infrastructure. More than 12,000 customers are using Azure Arc, double the number a year ago. By partnering with organizations like Datadog, we are unlocking even more innovation and bringing Azure services into the tools our customers are already using.

Enhanced Azure Arc integration features

Features available with today’s general availability include:

Monitor the Arc connection status and agent version

Customers can easily identify any Azure Arc–enabled resources that are not in a connected state. You can also set up Datadog monitors to alert you immediately if the connection is unhealthy. Before this new integration, Azure Arc resources would look like any other virtual machine on-premises or in Azure. Now, you can access critical metadata to ensure your Azure Arc–enabled Windows and Linux servers, SQL servers, and Kubernetes clusters are secured and connected. IT operators will be able to troubleshoot much faster if a resource is disconnected and can quickly restore the connectivity to Azure Arc.

Datadog can also show which hosts are running an older version of the Azure Arc agent. It then becomes easy to update the agent using Azure Update Management, or to use Azure Automation to keep the Azure Arc agent current whenever a new version is released.
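As a rough illustration of the checks described above, here is a short sketch that flags Arc-enabled servers that are disconnected or running an outdated Connected Machine agent. The inventory shape (`name`, `status`, `agentVersion`) is hypothetical; in practice Datadog surfaces this metadata for you.

```python
def parse_version(v):
    """'1.28.02260' -> (1, 28, 2260), so versions compare numerically."""
    return tuple(int(p) for p in v.split("."))

def needs_attention(arc_servers, latest_version):
    """Split a (hypothetical) Arc server inventory into disconnected
    hosts and hosts running an outdated Connected Machine agent."""
    disconnected = [s["name"] for s in arc_servers if s["status"] != "Connected"]
    outdated = [
        s["name"]
        for s in arc_servers
        if s["status"] == "Connected"
        and parse_version(s["agentVersion"]) < parse_version(latest_version)
    ]
    return disconnected, outdated
```

A monitor built on the first list alerts you the moment a connection turns unhealthy; the second list is your Update Management worklist.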


Automatically add Azure tags for easy management and compliance tracking

A popular benefit of Azure Arc is using tags in Azure Resource Manager. Many organizations tag on-premises resources by cost center or datacenter server groups that are subject to specific regulations or requirements. Tags also create an audit trail to help trace the history of a particular resource and identify potential security issues when performing audits.

With the Azure Arc integration, Datadog can build rich visualizations and actionable alerts using the tags you have already created for Azure Arc–enabled resources. Now, when you perform patching or updates for Azure Arc–enabled servers, you get much richer insights to help validate software patches and troubleshoot application issues.

Easily identify which Azure Arc–enabled servers have the Datadog Agent

Azure Arc brings your hybrid and multicloud servers, Kubernetes clusters, and data services into a single dashboard for seamless management between environments. Aside from grouping resources with Azure Resource Manager, Azure Arc–enabled resources benefit from Azure role-based access control (RBAC), so different IT and developer teams can easily delegate access to their applications. For a centralized IT monitoring team, you can now ensure your Azure Arc–enabled resources have the Datadog Agent by cross-referencing these servers with agent data to get a real-time view of which Arc resources have Datadog Agent reporting.
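The cross-referencing described above boils down to a set difference between your Arc inventory and the hosts reporting to Datadog. A minimal sketch, assuming plain hostname lists and case-insensitive matching (a simplification; real host identity can be messier):

```python
def servers_missing_agent(arc_servers, datadog_hosts):
    """Return Arc-enabled servers with no Datadog Agent reporting,
    by cross-referencing against the hosts Datadog knows about.
    Hostname comparison is case-insensitive, a simplifying assumption."""
    reporting = {h.lower() for h in datadog_hosts}
    return sorted(s for s in arc_servers if s.lower() not in reporting)
```

Whatever this returns is your gap list: Arc resources that are managed but not yet observed.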

Learn more about integrating Datadog and Azure Arc

Read more about the integration between Datadog and Azure Arc and access Datadog's Azure Native ISV Service on Azure Marketplace.

Azure Marketplace offers thousands of industry-leading apps and services—all certified and optimized to run on Azure—so you can find, try, buy, and deploy the solutions you need quickly and confidently.

Microsoft Cost Management updates—March 2023

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where it’s being spent, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Scheduled alerts for built-in views in Cost analysis.
Details about included costs in the Cost analysis preview.
Enable preview features and share your feedback.
What's new in Cost Management Labs.
New ways to save money with Microsoft Cloud.
New videos and learning opportunities.
Documentation updates.

Let's dig into the details.

Scheduled alerts for built-in views in Cost analysis

Cost Management offers numerous ways to stay on top of your costs and catch unexpected charges, like defining budgets to get notified as costs approach or exceed predefined thresholds, or configuring anomaly alerts to get notified when we detect atypical spending patterns in your subscription costs. But sometimes you’re looking for something a little simpler. Wouldn’t it be nice to just get a quick email letting you know how things have been going over the last week? Maybe you want to see how you’re trending against your budget or what you’re forecasted to spend for the month or maybe you want to see what your daily run rate has been over the last 30 days. Perhaps you simply want to check in once a month to see how your usage trends have changed compared to the previous months. These are exactly the types of reasons why you might want to use scheduled alerts in Cost analysis. You’ve been able to save a custom view and schedule alerts for a while. Now, you can also schedule alerts using the built-in chart views available in Cost analysis:

Accumulated costs

Daily costs

Cost by service

To get started, open Cost analysis, choose one of the built-in (or saved) chart views from the view menu, and select the Subscribe command at the top of the page.

To learn more, see Save and share customized views, and stay tuned for even more opportunities to monitor your costs.

Details about included costs in the Cost analysis preview

Knowing what’s included in your costs is a critical part of understanding what you’re spending and where. While this is covered in documentation, there’s nothing better than surfacing these details directly in the experiences you use. To that end, you can now view additional details about your cost in the Cost analysis preview, including:

The total (non-abbreviated) cost.
Dates the change in cost is referring to.
What costs are included or not included.
Additional notes about usage processing.

Let us know what you’d like to see next. There are a lot of great things on the horizon in the Cost analysis space. If you haven’t tried the latest changes, check out the Cost analysis preview today.

Enable preview features and share your feedback

Getting feedback has always been a critical part of the Cost Management experience. We introduced Cost Management Labs for that exact purpose—to get your early feedback on the latest features and enhancements that are in development. The earlier we get your feedback, the more we can improve the experience for you. This is your chance to drive the direction and impact the future of Cost Management.

Participating in Cost Management Labs is as easy as selecting Try preview from the Cost Management overview. You’ll see a list of preview features with links to share ideas or report any bugs that may pop up. Reporting a bug is a direct line back to the Cost Management engineering team, where we'll work with you to understand and resolve the issue. Of course, you may have seen all this before. Try preview isn’t new. What is new this month is the fact that your preview features are remembered across portal sessions. When you enable a feature, we’ll keep that enabled when you come back to the portal, making it easier than ever to get the most out of each preview.

We hope you find this update useful. Let us know what you’d like to see next and don’t forget to share your feedback about each preview. To learn more, see Enable preview features in Cost Management Labs.

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

Update: Remember preview features across sessions—Now available in the public portal.
Select the preview features you're interested in from the Try preview menu and you'll see them enabled by default the next time you visit the portal. No need to enable this option—preview features will be remembered automatically in the preview portal.
Update: Total KPI tooltip—Now available in the public portal.
View additional details about what costs are included in the Cost analysis preview.
Merge cost analysis menu items.
Only show one cost analysis item in the Cost Management menu. All classic and saved views are one-click away, making them easier than ever to find and access. You can enable this option from the Try preview menu.
Customers view for Cloud Solution Provider partners.
View a breakdown of costs by customer and subscription in the Cost analysis preview. Note this view is only available for CSP billing accounts and billing profiles. You can enable this option from the Try preview menu.
Recommendations view.
View a summary of cost recommendations that help you optimize your Azure resources in the cost analysis preview. You can opt in using the Try preview menu.
Forecast in the cost analysis preview.
Show your forecast cost for the period at the top of the cost analysis preview. You can opt in using Try preview.
Group related resources in the cost analysis preview.
Group related resources, like disks under VMs or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID.
Charts in the cost analysis preview.
View your daily or monthly cost over time in the cost analysis preview. You can opt in using Try Preview.
View cost for your resources.
The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that resource.
Change scope from the menu.
Change scope from the menu for quicker navigation. You can opt-in using Try Preview.
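The "group related resources" preview in the list above is driven purely by a tag convention: tag each child with `cm-resource-parent` set to the parent's resource ID. A minimal sketch of that grouping logic, assuming a simplified cost-row shape:

```python
def group_by_parent(resources):
    """Group resource IDs under the parent named by their
    'cm-resource-parent' tag; untagged resources form their own group.
    The dict shape here is a hypothetical stand-in for real cost rows."""
    groups = {}
    for r in resources:
        parent = r.get("tags", {}).get("cm-resource-parent", r["id"])
        groups.setdefault(parent, []).append(r["id"])
    return groups
```

Tag a VM's disks with the VM's resource ID, for instance, and their costs roll up under the VM instead of cluttering the view as separate line items.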

Of course, that's not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it's in the full Azure portal or Microsoft 365 admin center. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

New ways to save money in the Microsoft Cloud

Lots of updates over the last month! Here are new and updated offers you might be interested in:

General availability: Spot Priority Mix for Virtual Machine Scale Sets.
General availability: Azure Firewall Basic.
General availability: More transactions at no additional cost for Azure Standard SSD.
General availability: Larger SKUs for App Service Environment v3.
General availability: Leading price-performance for SQL Server.
Preview: Incremental snapshots for Premium SSD v2 Disk Storage.
Preview: Azure NetApp Files support for 2TiB capacity pools.
Preview: Azure Managed Lustre, a file system designed for HPC and AI workloads.
Preview: Announcing a renaissance in computer vision AI with Microsoft's Florence foundation model.

New videos and learning opportunities

Here’s a new video about cost optimization for web apps you might be interested in:

The reliable web app pattern for .NET part 4: Cost Optimization (twelve minutes).

Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.

Documentation updates

Here are a few documentation updates you might be interested in:

New: Download your savings plan price sheet.
New: Optimize costs in Azure Monitor.
Updated: Start using Cost analysis—Now covers the Cost analysis preview.
Updated: Save and share customized views—Added FAQs for scheduled alerts.
Updated: Group and filter options in Cost analysis and budgets—Expanded to include budgets.
Updated: Self-service trade-in for Azure savings plans—Updated note about reservation exchanges.
Updated: View Azure savings plan cost and usage—Added details about calculating savings.
Updated: Choose an Azure saving plan commitment amount—Added details about management group recommendations.
Updated: View your Azure usage summary details and download reports for EA enrollments—Added refunded credits section.
Seven updates based on your feedback.

Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions.

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Microsoft Cost Management updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Microsoft Cost Management.

We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.

Modernize your apps and accelerate business growth with AI

AI has exploded in popularity in recent years, to the point where it’s no longer considered a luxury in the business world, but a necessity. A PricewaterhouseCoopers (PwC) study revealed that the adoption of AI will fuel a 14 percent increase in the global GDP by 2030, representing an additional $15.7 trillion surge to the global economy.1

Businesses using AI solutions are discovering new ways to tap into vast amounts of data to get clear insights and accelerate innovation. Thanks to advancements in graphics processing unit (GPU) computational power and the availability of tech services through cloud marketplaces, AI is now more accessible than ever.

As companies look to do more with less, AI will play an increasingly critical role—particularly generative AI, a category of AI algorithms that generate new outputs based on data. Unlike traditional AI systems that are designed to recognize patterns and make predictions, generative AI can analyze large data sets and create entirely new content in a variety of media formats—including text, images, audio, and data—based on what’s described in the input. Generative AI is used in systems like ChatGPT, the powerful natural language model developed by OpenAI—a global leader in AI research and development.

At Microsoft, we’re committed to democratizing AI and giving you access to advanced generative AI models. As part of that commitment, in 2019, we started a long-term partnership with OpenAI. In January 2023, we announced the third phase of this partnership, including the general availability of Azure OpenAI Service.

With Azure OpenAI Service, businesses can access cutting-edge AI models, including GPT-3.5 and DALL•E 2. This service is backed by built-in responsible AI and enterprise-grade security. Azure OpenAI Service customers also will have access to ChatGPT—a fine-tuned version of GPT-3.5 that has been trained and runs inference on Azure AI infrastructure.

Add the power of generative AI to your apps—now available through the ISV Success program

We’re excited to announce that ISV Success program members will now be eligible to apply for access to Azure OpenAI Service. In today’s highly competitive market, ISVs are under intense pressure to differentiate and elevate their app offerings. To gain an edge, many software vendors are tapping into generative AI to modernize their applications.

With AI-optimized infrastructure and tools, Azure OpenAI Service empowers developers to build and modernize apps through direct access to OpenAI models. These generative AI models offer a deep understanding of language and code to enable apps with new reasoning and comprehension capabilities, which can be applied to a variety of use cases, such as code generation, content summarization, semantic search, and natural language-to-code translation.

As a participant in the program, you will also get access to advanced AI services like custom neural voice, speaker recognition, and content filters.

Drive application modernization with the ISV Success program

Learn more about the ISV Success program.

Join the ISV Success program to get access to best-in-class developer tools, cloud credits, one-to-one technical consultations, training resources, and now Azure OpenAI Service.

Apply to use Azure OpenAI Service for your solutions.


1 PwC, Global Artificial Intelligence Study: Sizing the prize.

Connect, secure, and simplify your network resources with Azure Virtual Network Manager

Enterprise-scale management and configuration of your network resources in Azure are key to keeping costs down, reducing operational overhead, and properly connecting and securing your network presence in the cloud. We are happy to announce that Azure Virtual Network Manager (AVNM), your one-stop shop for managing the connectivity and security of your network resources at scale, is now generally available.

What is Azure Virtual Network Manager?

AVNM works through a main process of group, configure, and deploy. You’ll group your network resources across subscriptions, regions, and even tenants; configure the kind of connectivity and security you want among your grouped network resources; and finally, deploy those configurations onto those network groups in whichever and however many regions you’d like.

Common use cases

Common use cases for AVNM include the following, each addressed by deploying AVNM’s connectivity and security admin configurations onto your defined network groups:

Interconnected virtual networks (VNets) that communicate directly with each other.
Central infrastructure services in a hub VNet that are shared by other VNets.

Establishing direct connectivity between spoke VNets to reduce latency.

Automatic maintenance of connectivity at scale, even with the addition of new network resources.
Enforced standard security rules on all existing and new VNets without risk of change.

Keeping flexibility for VNet owners to configure network security groups (NSGs) as needed for more specific traffic dictation.

Application of default security rules across an entire organization to mitigate the risk of misconfiguration and security holes.
Force-allowance of services’ traffic, such as monitoring services and program updates, to prevent accidental blocking through security rules.

Connectivity configuration

Hub and spoke topology

When you have some services in a hub VNet, such as an Azure Firewall or ExpressRoute, and you need to connect several other VNets to that hub to share those services, that means you’ll have to establish connectivity between each of those spoke VNets and the hub. In the future, if you provision new VNets, you’ll also need to make sure those new VNets are correctly connected to the hub VNet.

With AVNM, you can create groups of VNets and select those groups to be connected to your desired hub VNet, and AVNM will establish all the necessary connectivity between your hub VNet and each VNet in your selected groups behind the scenes. On top of the simplicity of creating a hub and spoke topology, new VNets that match your desired conditions can be automatically added to this topology, reducing manual interference from your part.

For the time being, establishing direct connectivity between the VNets within a spoke network group is still in preview and will become generally available (GA) at a later date.

Mesh

If you want all of your VNets to be able to communicate with each other regionally or globally, you can build a mesh topology with AVNM’s connectivity configuration. You’ll select your desired network groups and AVNM will establish connectivity between every VNet that is a part of your selected network groups. The mesh connectivity configuration feature is still in preview and will become generally available at a later date.

How to implement connectivity configurations with existing environments

Let’s say you have a cross-region hub and spoke topology in Azure that you’ve set up through manual peerings. Your hub VNet has an ExpressRoute gateway and your dozens of spoke VNets are owned by various application teams.

Here are the steps you would take to implement and automate this topology using AVNM:

Create your network manager.
Create a network group for each application team’s respective VNets using Azure Policy definitions that can be conditionally based on parameters including (but not limited to) subscription, VNet tag, and VNet name.
Create a connectivity configuration with hub and spoke selected. Select your desired hub VNet and your network groups as the spokes.
By default, all connectivity established with AVNM is additive after the connectivity configuration’s deployment. If you’d like AVNM to clean up existing peerings for you, this is an option you can select; otherwise, existing connectivity can be manually cleaned up later if desired.
Deploy your hub and spoke connectivity configuration to your desired regions.

In just a few clicks, you’ve set up a hub and spoke topology among dozens of VNets from all application teams globally through AVNM. By defining the conditions of VNet membership for your network groups representing each application team, you’ve ensured that any newly created VNet matching those conditions will automatically be added to the corresponding network group and receive the same connectivity configuration applied onto it. Whether you choose to have AVNM delete existing peerings or not, there is no downtime to connectivity between your spoke VNets and hub VNet.
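The automatic membership behavior in the steps above can be sketched as a simple condition check over your VNet inventory. In reality AVNM evaluates Azure Policy definitions; the condition shapes below (subscription, name prefix, tag pair) are a hypothetical simplification of the parameters mentioned in step 2.

```python
def network_group_members(vnets, conditions):
    """Return the VNets matching a network group's membership conditions.

    A simplified stand-in for the Azure Policy definitions AVNM actually
    uses; supported conditions here: subscription, namePrefix, tag pair.
    """
    members = []
    for v in vnets:
        if "subscription" in conditions and v["subscription"] != conditions["subscription"]:
            continue
        if "namePrefix" in conditions and not v["name"].startswith(conditions["namePrefix"]):
            continue
        if "tag" in conditions and conditions["tag"] not in v.get("tags", {}).items():
            continue
        members.append(v["name"])
    return members
```

The point of the model: re-running the same conditions after an application team provisions a new VNet picks it up with no manual peering work, which is exactly the automation the hub and spoke configuration delivers.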

Security feature

AVNM currently provides you with the ability to protect your VNets at scale with security admin configurations. This type of configuration consists of security admin rules: high-priority security rules defined similarly to NSG rules, but evaluated before them.

The security admin configuration feature is still in preview and will become generally available at a later date.

Enforcement and flexibility

With NSGs alone, widespread enforcement on VNets across several applications, teams, or even entire organizations can be tricky. There’s often a balancing act between centralized enforcement across an organization and handing granular, flexible control to individual teams. The cost of hard enforcement is higher operational overhead, as admins need to manage an increasing number of NSGs. The cost of letting individual teams tailor their own security rules is the risk of vulnerability through misconfigurations or unsafe open ports. Security admin rules aim to eliminate this trade-off between enforcement and flexibility: central governance teams can establish guardrails, while individual teams can still fine-tune security as needed through NSG rules.

Difference from NSGs

Security admin rules are similar to NSG rules in structure and input parameters, but they are not the exact same construct. Let’s boil down these differences and similarities:

Security admin rules: target audience is network admins and the central governance team; applied on virtual networks; evaluated first (higher priority); action types are Allow, Deny, and Always Allow; parameters include priority, protocol, action, source, and destination.

NSG rules: target audience is individual teams; applied on subnets and NICs; evaluated after security admin rules (lower priority); action types are Allow and Deny; parameters include priority, protocol, action, source, and destination.

One key difference is the security admin rule’s Allow type. Unlike its other action types of Deny and Always Allow, if you create a security admin rule to Allow a certain type of traffic, then that traffic will be further evaluated by NSG rules matching that traffic. However, Deny and Always Allow security admin rules will stop the evaluation of traffic, meaning NSGs down the line will not see or handle this traffic. As a result, regardless of NSG presence, administrators can use security admin rules to protect an organization by default.
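This evaluation order can be made concrete with a small sketch. The rule format below is hypothetical (real security admin rules match on protocol, source, and destination as well), but the control flow mirrors the semantics just described: admin rules run first, Deny and Always Allow are terminal, and only an admin Allow lets NSG rules see the traffic:

```python
def evaluate(port, admin_rules, nsg_rules):
    """Sketch of traffic evaluation: security admin rules first, NSGs second."""
    for rule in sorted(admin_rules, key=lambda r: r["priority"]):
        if rule["port"] == port:
            if rule["action"] in ("Deny", "Always Allow"):
                # Terminal: NSGs never see this traffic.
                return "Allow" if rule["action"] == "Always Allow" else "Deny"
            break  # "Allow": traffic is further evaluated by NSG rules.
    for rule in sorted(nsg_rules, key=lambda r: r["priority"]):
        if rule["port"] == port:
            return rule["action"]
    return "Deny"  # default assumed for this sketch

admin = [{"port": 3389, "action": "Deny", "priority": 100},
         {"port": 443, "action": "Allow", "priority": 110}]
nsg = [{"port": 443, "action": "Deny", "priority": 200}]

print(evaluate(3389, admin, nsg))  # Deny (terminal admin rule)
print(evaluate(443, admin, nsg))   # Deny (admin Allow defers to the NSG)
```

The second call shows the key difference: the admin Allow on port 443 doesn’t guarantee delivery, it merely hands the decision to the NSG layer.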

Key scenarios

Providing exceptions

Being able to enforce security rules throughout an organization is useful, to say the least. But one of the benefits of security admin rules that we’ve mentioned is the flexibility they leave for teams within the organization to handle traffic differently as needed. Let’s say you’re a network administrator and you’ve enforced security admin rules to block all high-risk ports across your entire organization, but application team 1 needs SSH traffic for a few of their resources and has requested an exception for their VNets. You’d create a network group specifically for application team 1’s VNets and create a security admin rule collection targeting only that network group. Inside that rule collection, you’d create a security admin rule of action type Allow for inbound SSH traffic (port 22). This rule would need a higher priority (a lower priority number) than the original rule that blocks this port across all of your organization’s resources. Effectively, you’ve now established an exception to the blocking of SSH traffic just for application team 1’s VNets, while still protecting your organization from that traffic by default.
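The exception mechanism boils down to priority ordering across rule collections scoped to different network groups. A sketch with hypothetical group and rule structures (in this simplified model the returned "Allow" stands for "defer to NSGs", since an admin Allow is not terminal):

```python
def admin_verdict(vnet, port, rule_collections):
    """First matching security admin rule wins, by ascending priority number
    (lower number = higher priority). Rules apply to a VNet only if it is in
    a network group targeted by the rule collection."""
    applicable = [r for c in rule_collections if vnet in c["target_group"]
                  for r in c["rules"]]
    for rule in sorted(applicable, key=lambda r: r["priority"]):
        if rule["port"] == port:
            return rule["action"]
    return None  # no admin rule matched; NSGs decide

org_block = {"target_group": {"vnet-team1", "vnet-team2"},
             "rules": [{"port": 22, "action": "Deny", "priority": 200}]}
team1_exception = {"target_group": {"vnet-team1"},
                   "rules": [{"port": 22, "action": "Allow", "priority": 100}]}

print(admin_verdict("vnet-team1", 22, [org_block, team1_exception]))  # Allow
print(admin_verdict("vnet-team2", 22, [org_block, team1_exception]))  # Deny
```

Only team 1’s VNets pick up the higher-priority exception; every other VNet still hits the organization-wide block.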

Force-allowing traffic to and from monitoring services or domain controllers

Security admin rules are handy for blocking risky traffic across your organization, but they’re also useful for force-allowing traffic needed for certain services to continue running as expected. If you know that your application teams need software updates for their virtual machines, then you can create a rule collection targeting the appropriate network groups consisting of Always Allow security admin rules for the ports where the updates come through. This way, even if an application team misconfigures an NSG to deny traffic on a port necessary for updates, the security admin rule will ensure the traffic is delivered and doesn’t hit that conflicting NSG.
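The interaction with a conflicting NSG reduces to a simple precedence function; this sketch (hypothetical helper name, with `None` modeling "no matching admin rule") shows why an Always Allow rule survives an NSG misconfiguration:

```python
def effective_action(admin_action, nsg_action):
    """"Always Allow" and "Deny" admin rules are terminal; an admin "Allow"
    (or no matching admin rule, modeled as None) lets the NSG decide."""
    if admin_action == "Always Allow":
        return "Allow"
    if admin_action == "Deny":
        return "Deny"
    return nsg_action

# An NSG misconfigured to deny the update port is overridden:
print(effective_action("Always Allow", "Deny"))  # Allow
```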

How to implement security admin configurations with existing environments

Let’s say you have an NSG-based security model consisting of hundreds of NSGs that are modifiable by both the central governance team and individual application teams. Your organization implemented this model originally to allow for flexibility, but there have been security vulnerabilities due to missing security rules and constant NSG modification.

Here are the steps you would take to implement and enforce organization-wide security using AVNM:

1. Create your network manager.
2. Create a network group for each application team’s respective VNets using Azure Policy definitions, which can be conditioned on parameters including (but not limited to) subscription, VNet tag, and VNet name.
3. Create a security admin configuration with a rule collection targeting all network groups. This rule collection represents the standard security rules that you’re enforcing across your entire organization.
4. Create security admin rules blocking high-risk ports. These rules take precedence over NSG rules, so Deny security admin rules cannot conflict with existing NSGs. Redundant or now-circumvented NSGs can be manually cleaned up later if desired.
5. Deploy your security admin configuration to your desired regions.
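Step 4 can be sketched as building one Deny rule per high-risk port. The port list and the configuration shape below are illustrative only, not the actual AVNM resource schema:

```python
HIGH_RISK_PORTS = [22, 3389, 5985, 5986]  # illustrative: SSH, RDP, WinRM

def block_high_risk(network_groups, start_priority=100):
    """Build one inbound Deny security admin rule per high-risk port,
    targeting all of the given network groups."""
    return {
        "target_groups": list(network_groups),
        "rules": [{"name": f"deny-{port}", "port": port, "direction": "Inbound",
                   "action": "Deny", "priority": start_priority + i}
                  for i, port in enumerate(HIGH_RISK_PORTS)],
    }

cfg = block_high_risk(["team-app1", "team-app2"])
print(len(cfg["rules"]))  # 4
```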

You’ve now set up an organization-wide set of security guardrails among all of your application teams’ VNets globally through AVNM. You’ve established enforcement without sacrificing flexibility, as you can create exceptions for any application team’s set of VNets. Your old NSGs still exist, but all traffic hits your security admin rules first. You can clean up redundant or circumvented NSGs at your own pace; because your network resources remain protected by the security admin rules, there is no downtime from a security standpoint.

Learn more about Azure Virtual Network Manager

Check out the AVNM overview, read more about AVNM in our public documentation set, and deep-dive into AVNM’s security offering through our security blog.
Source: Azure

Introducing GPT-4 in Azure OpenAI Service

At Microsoft, we are constantly discovering new ways to unleash creativity, unlock productivity, and uplevel skills so that more people can benefit from using AI. This allows our customers to build the future faster and more responsibly by powering their apps with large-scale AI models. Our collaboration with OpenAI, along with the power of Azure, has been core to this journey.

Today, we are excited to announce that GPT-4 is available in preview in Azure OpenAI Service. Customers and partners already using Azure OpenAI Service can apply for access to GPT-4 and start building with OpenAI’s most advanced model yet. With this milestone, we are proud to bring the world’s most advanced AI models—including GPT-3.5, ChatGPT, and DALL•E 2—to Azure customers, backed by Azure AI-optimized infrastructure, enterprise-readiness, compliance, data security, and privacy controls, along with many integrations with other Azure services.

Customers can begin applying for access to GPT-4 today. Billing for all GPT-4 usage begins April 1, 2023, at the following prices:

GPT-4 pricing per 1,000 tokens:

8K context: $0.03 (prompt), $0.06 (completion)
32K context: $0.06 (prompt), $0.12 (completion)
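Given those rates, estimating per-request cost is straightforward arithmetic. A small helper (hypothetical function name, rates taken from the table above):

```python
# GPT-4 preview pricing per 1,000 tokens, in USD.
PRICING = {
    "8k":  {"prompt": 0.03, "completion": 0.06},
    "32k": {"prompt": 0.06, "completion": 0.12},
}

def request_cost(context, prompt_tokens, completion_tokens):
    """Estimated USD cost of one request for the given context size."""
    rates = PRICING[context]
    return (prompt_tokens / 1000) * rates["prompt"] \
         + (completion_tokens / 1000) * rates["completion"]

print(request_cost("8k", 1500, 500))  # ~0.075: $0.045 prompt + $0.03 completion
```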

GPT-4 for every business

While the recently announced new Bing and Microsoft 365 Copilot products are already powered by GPT-4, today’s announcement allows businesses to take advantage of the same underlying advanced models to build their own applications leveraging Azure OpenAI Service.

With generative AI technologies, we are unlocking new efficiencies for businesses in every industry. For instance, see how Azure OpenAI Service can allow bot developers to create virtual assistants in minutes using natural language with Copilot in Power Virtual Agents.

GPT-4 has the potential to take this experience to a whole new level using its broader knowledge, problem-solving abilities, and domain expertise. With GPT-4 in Azure OpenAI Service, businesses can streamline communications internally as well as with their customers, using a model with additional safety investments to reduce harmful outputs.

Companies of all sizes are putting Azure AI to work for them, many deploying language models into production using Azure OpenAI Service, and knowing that the service is backed by the unique supercomputing and enterprise capabilities of Azure. Solutions include improving customer experiences end-to-end, summarizing long-form content, helping write software, and even reducing risk by predicting the right tax data.

Customers are accelerating the adoption of language models

We are just scratching the surface with generative AI technologies and are working to enable our customers to responsibly adopt Azure OpenAI Service to bring real impact. With GPT-4, Epic Healthcare, Coursera, and Coca-Cola plan to use this advancement in unique ways:

"Our investigation of GPT-4 has shown tremendous potential for its use in healthcare. We'll use it to help physicians and nurses spend less time at the keyboard and to help them investigate data in more conversational, easy-to-use ways."—Seth Hain, Senior Vice President of Research and Development at Epic

"Coursera is using Azure OpenAI Service to create a new AI-powered learning experience on its platform, enabling learners to get high-quality and personalized support throughout their learning journeys. Together, Azure OpenAI Service and the new GPT-4 model will help millions around the world learn even more effectively on Coursera."—Mustafa Furniturewala, Senior Vice President of Engineering at Coursera

"Words cannot express the excitement and gratitude we feel as a consumer package goods company for the boundless opportunities that Azure OpenAI has presented us. With Azure Cognitive Services at the heart of our digital services framework, we have harnessed the transformative power of OpenAI's text and image generation models to solve business problems and build a knowledge hub. But it is the sheer potential of OpenAI's upcoming GPT-4 multimodal capabilities that truly fills us with awe and wonder. The possibilities for marketing, advertising, public relations, and customer relations are endless, and we cannot wait to be at the forefront of this revolutionary technology. We know that our success is not just about technology but also about having the right enterprise features in place. That's why we're proud to have a long-standing partnership with Microsoft Azure, ensuring that we have all the tools we need to deliver exceptional experiences to our customers. Azure OpenAI is more than just cutting-edge technology—it's a true game-changer, and we're honored to be a part of this incredible journey."—Lokesh Reddy Vangala, Senior Director of Engineering, Data and AI, The Coca-Cola Company

Our commitment to responsible AI

As we described in a previous blog, Microsoft has a layered approach for generative models, guided by Microsoft’s Responsible AI Principles. In Azure OpenAI, an integrated safety system provides protection from undesirable inputs and outputs and monitors for misuse. On top of that, we provide guidance and best practices for customers to responsibly build applications using these models, and we expect customers to comply with the Azure OpenAI Code of Conduct. With GPT-4, new research advances from OpenAI have enabled an additional layer of protection. Guided by human feedback, safety is built directly into the GPT-4 model, which enables the model to be more effective at handling harmful inputs, thereby reducing the likelihood that the model will generate a harmful response.

Getting started with GPT-4 in Azure OpenAI Service

Apply for access to GPT-4 by completing this form.
Learn more about Azure OpenAI Service and more about all the latest enhancements.
Get started with GPT-4 in Azure OpenAI Service in Microsoft Learn.
Read our Partner announcement blog, Empowering partners to develop AI-powered apps and experiences with ChatGPT in Azure OpenAI Service.
Learn how to use the new Chat Completions API (preview) and model versions for ChatGPT and GPT-4 models in Azure OpenAI Service.
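As a rough sketch of what a Chat Completions request looks like, the helper below builds the message payload only; the endpoint, deployment name, and api-version in the comment are placeholders, so check the service documentation for the current API shape before relying on it:

```python
import json

def build_chat_request(system_prompt, user_message, max_tokens=400):
    """Assemble a Chat Completions request body (messages + basic options)."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

body = build_chat_request("You are a helpful assistant.",
                          "Summarize Azure OpenAI Service in one sentence.")
# The body would be POSTed to an Azure OpenAI deployment, roughly:
#   https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=<version>
print(json.dumps(body)[:40])
```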

Source: Azure