Understanding the Docker USER Instruction

In the world of containerization, security and proper user management are crucial aspects that can significantly affect the stability and security of your applications. The USER instruction in a Dockerfile is a fundamental tool that determines which user will execute commands both during the image build process and when running the container. By default, if no USER is specified, Docker will run commands as the root user, which can pose significant security risks. 

In this blog post, we will delve into the best practices and common pitfalls associated with the USER instruction. Additionally, we will provide a hands-on demo to illustrate the importance of these practices. Understanding and correctly implementing the USER instruction is vital for maintaining secure and efficient Docker environments. Let’s explore how to manage user permissions effectively, ensuring that your Docker containers run securely and as intended.

Docker Desktop 

The commands and examples provided are intended for use with Docker Desktop, which includes Docker Engine as an integrated component. Running these commands on Docker Community Edition (standalone Docker Engine) is possible, but your output may not match that shown in this post. The blog post How to Check Your Docker Installation: Docker Desktop vs. Docker Engine explains the differences and how to determine what you are using.

UID/GID: A refresher

Before we discuss best practices, let’s review UID/GID concepts and why they are important when using Docker. This relationship factors heavily into the security aspects of these best practices.

Linux and other Unix-like operating systems use a numeric identifier to identify each discrete user called a UID (user ID). Groups are identified by a GID (group ID), which is another numeric identifier. These numeric identifiers are mapped to the text strings used for username and groupname, but the numeric identifiers are used by the system internally.

The operating system uses these identifiers to manage permissions and access to system resources, files, and directories. A file or directory has ownership settings including a UID and a GID, which determine which user and group have access rights to it. Users can be members of multiple groups, which can complicate permissions management but offers flexible access control.
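These numeric identifiers are easy to inspect directly. The id command prints the current user's UID and GID, and ls -n lists file ownership numerically rather than by name (illustrative commands; the exact output will vary by system):

```shell
# Print the current user's numeric identifiers
id -u          # UID
id -g          # primary GID

# Create a file and show its ownership as raw UID/GID, not names
touch /tmp/uid-demo
ls -n /tmp/uid-demo
```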

In Docker, these concepts of UID and GID are preserved within containers. When a Docker container is run, it can be configured to run as a specific user with a designated UID and GID. Additionally, when mounting volumes, Docker respects the UID and GID of the files and directories on the host machine, which can affect how files are accessed or modified from within the container. This adherence to Unix-like UID/GID management helps maintain consistent security and access controls across both the host and containerized environments. 

Groups

Unlike USER, there is no GROUP instruction in the Dockerfile. To set the group, you append a group name or GID after the username or UID, separated by a colon. For example, to run a command as the automation user in the ci group, you would write USER automation:ci in your Dockerfile.

If you do not specify a group, the groups that the user account is configured to belong to are used. If you do specify a group, however, only that group is used.
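A minimal sketch of the two behaviors (the automation user and ci group are hypothetical names, created here for illustration):

```dockerfile
FROM ubuntu:20.04

# Hypothetical user and group, created for this example
RUN groupadd ci && useradd -m -G ci automation

# Option 1: user only -- commands run with automation's primary
# group plus all of its supplementary groups (including ci)
# USER automation

# Option 2: user and group -- commands run with ONLY the ci group
USER automation:ci

CMD ["id"]
```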

Current user

Because Docker Desktop uses a virtual machine (VM), the UID/GID of your user account on the host (Linux, macOS, Windows Hyper-V/WSL 2) will almost certainly not have a match inside the Docker VM.

You can always check your UID/GID by using the id command. For example, on my desktop, I am UID 503 with a primary GID of 20:

$ id
uid=503(jschmidt) gid=20(staff) groups=20(staff),<--SNIP-->

Best practices

Use a non-root user to limit root access

As noted above, by default Docker containers run as UID 0, or root. This means that if the container is compromised, the attacker has root access to all the resources allocated to that container. By running as a non-root user instead, even an attacker who manages to break out of the application running in the container is left with only that user's limited permissions.

Remember, if you don’t set a USER in your Dockerfile, the user will default to root. Always explicitly set a user, even if it’s just to make it clear who the container will run as.

Specify user by UID and GID

Usernames and groupnames can easily be changed, and different Linux distributions can assign different default values to system users and groups. By using a UID/GID you can ensure that the user is consistently identified, even if the container’s /etc/passwd file changes or is different across distributions. For example:

USER 1001:1001

Create a specific user for the application

If your application requires specific permissions, consider creating a dedicated user for your application in the Dockerfile. This can be done using the RUN command to add the user. 

Note that when we are creating a user and then switching to that user within our Dockerfile, we do not need to use the UID/GID because they are being set within the context of the image via the useradd command. Similarly, you can add a user to a group (and create a group if necessary) via the RUN command.

Ensure that the user you set has the necessary privileges to run the commands in the container. For instance, a non-root user might not have the necessary permissions to bind to ports below 1024. For example:

RUN useradd -ms /bin/bash myuser
USER myuser

Switch back to root for privileged operations

If you need to perform privileged operations in the Dockerfile after setting a non-root user, you can switch to the root user and then switch back to the non-root user once those operations are complete. This approach adheres to the principle of least privilege; only tasks that require administrator privileges are run as an administrator. Note that it is not recommended to use sudo for privilege elevation in a Dockerfile. For example:

USER root
RUN apt-get update && apt-get install -y some-package
USER myuser

Combine USER with WORKDIR

As noted above, the UID/GID used within a container applies both inside the container and on the host system. This leads to two common problems:

Switching to a non-root user and not having permissions to read or write to the directories you wish to use (for example, trying to create a directory under / or trying to write in /root).

Mounting a directory from the host system and switching to a user who does not have permission to read/write to the directory or files in the mount.

USER root
RUN mkdir /app && chown ubuntu:ubuntu /app
USER ubuntu
WORKDIR /app
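When the files in question come from the build context, the ownership fix can often be folded into the copy step itself: COPY (and ADD) accept a --chown flag, which avoids switching to root for a separate chown layer. A sketch, assuming a myuser account was created earlier in the Dockerfile:

```dockerfile
# Assumes myuser already exists in the image
WORKDIR /app
COPY --chown=myuser:myuser . /app
USER myuser
```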

Example

The following example shows you how the UID and GID behave in different scenarios depending on how you write your Dockerfile. Both examples provide output that shows the UID/GID of the running Docker container. If you are following along, you need to have a running Docker Desktop installation and a basic familiarity with the docker command.

Standard Dockerfile

Most people take this approach when they first begin using Docker; they go with the defaults and do not specify a USER.

# Use the official Ubuntu image as the base
FROM ubuntu:20.04

# Print the UID and GID
CMD sh -c "echo 'Inside Container:' && echo 'User: $(whoami) UID: $(id -u) GID: $(id -g)'"

Dockerfile with USER

This example shows how to create a user with a RUN command inside a Dockerfile and then switch to that USER.

# Use the official Ubuntu image as the base
FROM ubuntu:20.04

# Create a custom user with UID 1234 and GID 1234
RUN groupadd -g 1234 customgroup && \
    useradd -m -u 1234 -g customgroup customuser

# Switch to the custom user
USER customuser

# Set the workdir
WORKDIR /home/customuser

# Print the UID and GID
CMD sh -c "echo 'Inside Container:' && echo 'User: $(whoami) UID: $(id -u) GID: $(id -g)'"

Build the two images with:

$ docker build -t default-user-image -f Dockerfile1 .
$ docker build -t custom-user-image -f Dockerfile2 .

Default Docker image

Let’s run our first image, the one that does not provide a USER instruction. As you can see, the UID and GID are 0/0: the container is running as root. Two things are at work here. First, we did not define a user in the Dockerfile, so Docker defaults to the superuser. But how does the container run as the superuser if my account is not a superuser account? Because the Docker Engine itself runs with root permissions, containers built to run as root inherit those permissions from the Engine.

$ docker run --rm default-user-image
Inside Container:
User: root UID: 0 GID: 0

Custom Docker image

Let’s try to fix this — we really don’t want Docker containers running as root. So, in this version, we explicitly set the UID and GID for the user and group. Running this container, we see that our user is set appropriately.

$ docker run --rm custom-user-image
Inside Container:
User: customuser UID: 1234 GID: 1234

Enforcing best practices

Enforcing best practices in any environment can be challenging, and the best practices outlined in this post are no exception. Docker understands that organizations are continually balancing security and compliance against innovation and agility and is continually working on ways to help with that effort. Our Enhanced Container Isolation (ECI) offering, part of our Hardened Docker Desktop, was designed to address the problematic aspects of having containers running as root.

Enhanced Container Isolation mechanisms, such as user namespaces, help segregate and manage privileges more effectively. User namespaces isolate security-related identifiers and attributes, such as user IDs and group IDs, so that a root user inside a container does not map to the root user outside the container. This feature significantly reduces the risk of privileged escalations by ensuring that even if an attacker compromises the container, the potential damage and access scope remain confined to the containerized environment, dramatically enhancing overall security.

Additionally, Docker Scout can be leveraged on the user desktop to enforce policies not only around CVEs, but around best practices — for example, by ensuring that images run as a non-root user and contain mandatory LABELs.

Staying secure

Through this demonstration, we’ve seen the practical implications and benefits of configuring Docker containers to run as a non-root user, which is crucial for enhancing security by minimizing potential attack surfaces. As demonstrated, Docker inherently runs containers with root privileges unless specified otherwise. This default behavior can lead to significant security risks, particularly if a container becomes compromised, granting attackers potentially wide-ranging access to the host or Docker Engine.

Use custom user and group IDs

The use of custom user and group IDs showcases a more secure practice. By explicitly setting UID and GID, we limit the permissions and capabilities of the process running inside the Docker container, reducing the risks associated with privileged user access. The UID/GID defined inside the Docker container does not need to correspond to any actual user on the host system, which provides additional isolation and security.

User namespaces

Although this post extensively covers the USER instruction in Docker, another approach to secure Docker environments involves the use of namespaces, particularly user namespaces. User namespaces isolate security-related identifiers and attributes, such as user IDs and group IDs, between the host and the containers. 

With user namespaces enabled, Docker can map the user and group IDs inside a container to non-privileged IDs on the host system. This mapping ensures that even if a container’s processes break out and gain root privileges within the Docker container, they do not have root privileges on the host machine. This additional layer of security helps to prevent the escalation of privileges and mitigate potential damage, making it an essential consideration for those looking to bolster their Docker security framework further. Docker’s ECI offering leverages user namespaces as part of its security framework.

Conclusion

When deploying containers, especially in development environments or on Docker Desktop, consider the aspects of container configuration and isolation outlined in this post. Implementing the enhanced security features available in Docker Business, such as Hardened Docker Desktop with Enhanced Container Isolation, can further mitigate risks and ensure a secure, robust operational environment for your applications.

Learn more

Read the Dockerfile reference guide.

Get the latest release of Docker Desktop.

Explore Docker Guides.

New to Docker? Get started.

Subscribe to the Docker Newsletter.

Source: https://blog.docker.com/feed/

How hollow core fiber is accelerating AI  

This blog is part of the ‘Infrastructure for the era of AI’ series that focuses on emerging technology and trends in large-scale computing. This piece dives deeper into one of our newest technologies, hollow core fiber (HCF). 

AI is at the forefront of people’s minds, and innovations are happening at lightning speed. But to continue the pace of AI innovation, companies need the right infrastructure for the compute-intensive AI workloads they are trying to run. This is what we call ‘purpose-built infrastructure’ for AI, and it’s a commitment Microsoft has made to its customers. This commitment doesn’t just mean taking hardware that was developed by partners and placing it in its datacenters; Microsoft is dedicated to working with partners, and occasionally on its own, to develop the newest and greatest technology to power scientific breakthroughs and AI solutions.

Infrastructure for the era of AI

Explore how you can integrate into the world of AI

Learn more

One of these technologies that was highlighted at Microsoft Ignite in November was hollow core fiber (HCF), an innovative optical fiber that is set to optimize Microsoft Azure’s global cloud infrastructure, offering superior network quality, improved latency and secure data transmission. 

Transmission by air 

HCF technology was developed to meet the heavy demands of workloads like AI and improve global latency and connectivity. It uses a proprietary design where light propagates in an air core, which has significant advantages over traditional fiber built with a solid core of glass. An interesting piece here is that the HCF structure has nested tubes which help reduce any unwanted light leakage and keep the light going in a straight path through the core.  

As light travels faster through air than glass, HCF is 47% faster than standard silica glass, delivering increased overall speed and lower latency. It also has a higher bandwidth per fiber. But what is the difference between speed, latency, and bandwidth? Speed is how quickly data travels over the fiber medium; network latency is the amount of time it takes for data to travel between two end points across the network (the lower the latency, the faster the response time); and bandwidth is the amount of data that is sent and received in the network.

Imagine there are two vehicles travelling from point A to point B, setting off at the same time. The first vehicle is a car (representing single mode fiber (SMF)) and the second is a van (HCF). Both vehicles are carrying passengers (which is the data); the car can take four passengers, whereas the van can take 16. The vehicles can reach different speeds, with the van travelling faster than the car. This means it will take the van less time to travel to point B, therefore arriving at its destination first (demonstrating lower latency).
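The speed difference can be made concrete with a back-of-the-envelope calculation: light in solid silica travels at roughly c divided by the refractive index of glass (about 1.46), while in an air core it travels at nearly c, which is where the roughly 47% figure comes from. A quick sketch of the one-way propagation delay over a 1,000 km link (illustrative numbers, not Azure measurements):

```python
# Back-of-the-envelope propagation delay: solid-core silica fiber vs. air core
C = 299_792.458          # speed of light in vacuum, km/s
N_SILICA = 1.46          # approximate refractive index of silica glass
N_AIR = 1.0              # an air core is effectively n ~= 1

distance_km = 1_000

delay_silica_ms = distance_km / (C / N_SILICA) * 1_000
delay_air_ms = distance_km / (C / N_AIR) * 1_000

print(f"Silica fiber: {delay_silica_ms:.2f} ms one way")
print(f"Air core:     {delay_air_ms:.2f} ms one way")
print(f"Speedup:      {delay_silica_ms / delay_air_ms - 1:.0%}")
```

Over 1,000 km the air core saves roughly 1.5 ms each way, and the ratio between the two delays is simply the refractive index, about a 46% speedup.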

For over half a century, the industry has been dedicated to making steady, yet small, advancements in silica fiber technology. Despite the progress, the gains have been modest due to the limitations of silica loss. A significant milestone with HCF technology was reached in early 2024, attaining the lowest optical fiber loss (attenuation) ever recorded at a 1550 nm wavelength, even lower than pure silica core single mode fiber (SMF).1 Along with low attenuation, HCF offers higher launch power handling, broader spectral bandwidth, and improved signal integrity and data security compared to SMF.

The need for speed 

Imagine you’re playing an online video game. The game requires quick reactions and split-second decisions. If you have a high-speed connection with low latency, your actions in the game will be transmitted quickly to the game server and to your friends, allowing you to react in real time and enjoy a smooth gaming experience. On the other hand, if you have a slow connection with high latency, there will be a delay between your actions and what happens in the game, making it difficult to keep up with the fast-paced gameplay. Whether you’re missing key action times or lagging behind others, lagging is highly annoying and can seriously disrupt gameplay. Similarly, in AI models, having lower latency and high-speed connections can help the models process data and make decisions faster, improving their performance. 

Reducing latency for AI workloads

So how can HCF help the performance of AI infrastructure? AI workloads are tasks that involve processing large amounts of data using machine learning algorithms and neural networks. These tasks can range from image recognition, natural language processing, computer vision, speech synthesis, and more. AI workloads require fast networking and low latency because they often involve multiple steps of data processing, such as data ingestion, preprocessing, training, inference, and evaluation. Each step can involve sending and receiving data from different sources, such as cloud servers, edge devices, or other nodes in a distributed system. The speed and quality of the network connection affect how quickly and accurately the data can be transferred and processed. If the network is slow or unreliable, it can cause delays, errors, or failures in the AI workflow. This can result in poor performance, wasted resources, or inaccurate outcomes.

These models often need huge amounts of processing power and ultra-fast networking and storage to handle increasingly sophisticated workloads with billions of parameters, so ultimately low latency and high-speed networking can help speed up model training and inference, improve performance and accuracy, and foster AI innovation.

Helping AI workloads everywhere

Fast networking and low latency are especially important for AI workloads that require real-time or near-real-time responses, such as autonomous vehicles, video streaming, online gaming, or smart devices. These workloads need to process data and make decisions in milliseconds or seconds, which means they cannot afford any lag or interruption in the network. Low latency and high-speed connections help ensure that the data is delivered and processed in time, allowing the AI models to provide timely and accurate results. Autonomous vehicles exemplify AI’s real-world application, relying on AI models to swiftly identify objects, predict movements, and plan routes amid unpredictable surroundings. Rapid data processing and transmission, facilitated by low latency and high-speed connections, enable near real-time decision-making, enhancing safety and performance. HCF technology can accelerate AI performance, providing faster, more reliable, and more secure networking for AI models and applications. 

Regional implications 

Beyond the direct hardware that runs your AI models, there are more implications. Datacenter regions are expensive, and both the distance between regions, and between regions and the customer, make a world of difference to both the customer and Azure as it decides where to build these datacenters. When a region is located too far from a customer, it results in higher latency because the model is waiting for the data to go to and from a center that is further away.

Returning to the car versus van example: with the combination of higher bandwidth and faster transmission speed, more data can be transmitted between two points in a network in two-thirds of the time. Alternatively, HCF offers longer reach, extending the transmission distance in an existing network by up to 1.5x with no impact on network performance. Ultimately, you can go a further distance within the same latency envelope as traditional SMF and with more data. This has huge implications for Azure customers, minimizing the need for datacenter proximity without increasing latency and reducing performance.

The infrastructure for the era of AI 

HCF technology was developed to improve Azure’s global connectivity and meet the demands of AI and future workloads. It offers several benefits to end users, including higher bandwidth, improved signal integrity, and increased security. In the context of AI infrastructure, HCF technology can enable fast, reliable, and secure networking, helping to improve the performance of AI workloads. 

As AI continues to evolve, infrastructure technology remains a critical piece of the puzzle, ensuring efficient and secure connectivity for the digital era. As AI advancements continue to place additional strain on existing infrastructure, AI users are increasingly seeking to benefit from new technologies like HCF, virtual machines like the recently announced ND H100 v5, and silicon like Azure’s first in-house AI accelerator, Azure Maia 100. These advancements collectively enable more efficient processing, faster data transfer, and ultimately, more powerful and responsive AI applications.

Keep up on our “Infrastructure for the Era of AI” series to get a better understanding of these new technologies, why we are investing where we are, what these advancements mean for you, and how they enable AI workloads.   

More from the series

Navigating AI: Insights and best practices 

New infrastructure for the era of AI: Emerging technology and trends in 2024 

A year in review for AI Infrastructure 

Tech Pulse: What the rise of AI means for IT Professionals 

Sources

1 Hollow Core DNANF Optical Fiber with <0.11 dB/km Loss
The post How hollow core fiber is accelerating AI appeared first on Azure Blog.
Source: Azure

Microsoft is a Leader in the 2024 Gartner® Magic Quadrant™ for Data Science and Machine Learning Platforms 

Microsoft is a Leader in this year’s Gartner® Magic Quadrant™ for Data Science and Machine Learning Platforms. Azure AI provides a powerful, flexible end-to-end platform for accelerating data science and machine learning innovation while providing the enterprise governance that every organization needs in the era of AI. 

In May 2024, Microsoft was also named a Leader for the fifth year in a row in the Gartner® Magic Quadrant™ for Cloud AI Developer Services, where we placed furthest for our Completeness of Vision. We’re pleased by these recognitions from Gartner as we continue helping customers, from large enterprises to agile startups, bring their AI and machine learning models and applications into production securely and at scale. 

Azure AI is at the forefront of purpose-built AI infrastructure and responsible AI tooling, and it helps cross-functional teams collaborate effectively using Machine Learning Operations (MLOps) for generative AI and traditional machine learning projects. Azure Machine Learning provides access to a broad selection of foundation models in the Azure AI model catalog—including the recent releases of Phi-3, JAIS, and GPT-4o—and tools to fine-tune or build your own machine learning models. Additionally, the platform supports a rich library of open-source frameworks, tools, and algorithms so that data science and machine learning teams can innovate in their own way, all on a trusted foundation.

Azure AI

Microsoft is named a Leader in the 2024 Gartner® Magic Quadrant™ for Data Science and Machine Learning Platforms 

Read the report

Accelerate time to value with Azure AI infrastructure 

“We’re now able to get a functioning model with relevant insights up and running in just a couple of weeks thanks to Azure Machine Learning. We’ve even managed to produce verified models in just four to six weeks.”
—Dr. Nico Wintergerst, Staff AI Research Engineer at relayr GmbH 

Azure Machine Learning helps organizations build, deploy, and manage high-quality AI solutions quickly and efficiently, whether building large models from scratch, running inference on pre-trained models, consuming models as a service, or fine-tuning models for specific domains. Azure Machine Learning runs on the same powerful AI infrastructure that powers some of the world’s most popular AI services, such as ChatGPT, Bing, and Azure OpenAI Service. Additionally, Azure Machine Learning’s compatibility with ONNX Runtime and DeepSpeed can help customers further optimize training and inference time for performance, scalability, and power efficiency.

Whether your organization is training a deep learning model from scratch using open source frameworks or bringing an existing model into the cloud, Azure Machine Learning enables data science teams to scale out training jobs using elastic cloud compute resources and seamlessly transition from training to deployment. With managed online endpoints, customers can deploy models across powerful CPU and graphics processing unit (GPU) machines without needing to manage the underlying infrastructure—saving time and effort. Similarly, customers do not need to provision or manage infrastructure when deploying foundation models as a service from the Azure AI model catalog. This means customers can easily deploy and manage thousands of models across production environments—from on-premises to the edge—for batch and real-time predictions.  

Streamline operations with flexible MLOps and LLMOps 

“Prompt flow helped streamline our development and testing cycles, which established the groundedness we required for making sure the customer and the solution were interacting in a realistic way.”   
—Fabon Dzogang, Senior Machine Learning Scientist at ASOS

Machine learning operations (MLOps) and large language model operations (LLMOps) sit at the intersection of people, processes, and platforms. As data science projects scale and applications become more complex, effective automation and collaboration tools become essential for achieving high-quality, repeatable outcomes.  

Azure Machine Learning is a flexible MLOps platform, built to support data science teams of any size. The platform makes it easy for teams to share and govern machine learning assets, build repeatable pipelines using built-in interoperability with Azure DevOps and GitHub Actions, and continuously monitor model performance in production. Data connectors with Microsoft sources such as Microsoft Fabric and external sources such as Snowflake and Amazon S3, further simplify MLOps. Interoperability with MLflow also makes it seamless for data scientists to scale existing workloads from local execution to the cloud and edge, while storing all MLflow experiments, run metrics, parameters, and model artifacts in a centralized workspace. 

Azure Machine Learning prompt flow helps streamline the entire development cycle for generative AI applications with its LLMOps capabilities, orchestrating executable flows composed of models, prompts, APIs, Python code, and tools for vector database lookup and content filtering. Azure AI prompt flow can be used together with popular open-source frameworks like LangChain and Semantic Kernel, enabling developers to bring experimental flows into prompt flow to scale those experiments and run comprehensive evaluations. Developers can debug, share, and iterate on applications collaboratively, integrating built-in testing, tracing, and evaluation tools into their CI/CD system to continually reassess the quality and safety of their application. Then, developers can deploy applications when ready with one click and monitor flows for key metrics such as latency, token usage, and generation quality in production. The result is end-to-end observability and continuous improvement.

Develop more trustworthy models and apps 

“The responsible AI dashboard provides valuable insights into the performance and behavior of computer vision models, providing a better level of understanding into why some models perform differently than others, and insights into how various underlying algorithms or parameters influence performance. The benefit is better-performing models, enabled and optimized with less time and effort.” 
—Teague Maxfield, Senior Manager at Constellation Clearsight 

AI principles such as fairness, safety, and transparency are not self-executing. That’s why Azure Machine Learning provides data scientists and developers with practical tools to operationalize responsible AI right in their flow of work, whether they need to assess and debug a traditional machine learning model for bias, protect a foundation model from prompt injection attacks, or monitor model accuracy, quality, and safety in production. 

The Responsible AI dashboard helps data scientists assess and debug traditional machine learning models for fairness, accuracy, and explainability throughout the machine learning lifecycle. Users can also generate a Responsible AI scorecard to document and share model performance details with business stakeholders, for more informed decision-making. Similarly, developers in Azure Machine Learning can review model cards and benchmarks and perform their own evaluations to select the best foundation model for their use case from the Azure AI model catalog. Then they can apply a defense-in-depth approach to mitigating AI risks using built-in capabilities for content filtering, grounding on fresh data, and prompt engineering with safety system messages. Evaluation tools in prompt flow enable developers to iteratively measure, improve, and document the impact of their mitigations at scale, using built-in metrics and custom metrics. That way, data science teams can deploy solutions with confidence while providing transparency for business stakeholders. 

Read more on Responsible AI with Azure.

Deliver enterprise security, privacy, and compliance 

“We needed to choose a platform that provided best-in-class security and compliance due to the sensitive data we require and one that also offered best-in-class services as we didn’t want to be an infrastructure hosting company. We chose Azure because of its scalability, security, and the immense support it offers in terms of infrastructure management.”
—Michael Calvin, Chief Technical Officer at Kinectify

In today’s data-driven world, effective data security, governance, and privacy require every organization to have a comprehensive understanding of their data and AI and machine learning systems. AI governance also requires effective collaboration between diverse stakeholders, such as IT administrators, AI and machine learning engineers, data scientists, and risk and compliance roles. In addition to enabling enterprise observability through MLOps and LLMOps, Azure Machine Learning helps organizations ensure that data and models are protected and compliant with the highest standards of security and privacy.  

With Azure Machine Learning, IT administrators can restrict access to resources and operations by user account or groups, control incoming and outgoing network communications, encrypt data both in transit and at rest, scan for vulnerabilities, and centrally manage and audit configuration policies through Azure Policy. Data governance teams can also connect Azure Machine Learning to Microsoft Purview, so that metadata on AI assets—including models, datasets, and jobs—is automatically published to the Microsoft Purview Data Map. This enables data scientists and data engineers to observe how components are shared and reused and examine the lineage and transformations of training data to understand the impact of any issues in dependencies. Likewise, risk and compliance professionals can track what data is used to train models, how base models are fine-tuned or extended, and where models are employed across different production applications, and use this as evidence in compliance reports and audits. 

Lastly, with the Azure Machine Learning Kubernetes extension enabled by Azure Arc, organizations can run machine learning workloads on any Kubernetes clusters, ensuring data residency, security, and privacy compliance across hybrid public clouds and on-premises environments. This allows organizations to process data where it resides, meeting stringent regulatory requirements while maintaining flexibility and control over their MLOps. Customers using federated learning techniques along with Azure Machine Learning and Azure confidential computing can also train powerful models on disparate data sources, all without copying or moving data from secure locations. 

Get started with Azure Machine Learning 

Machine learning continues to transform the way businesses operate and compete in the digital era—whether you want to optimize your business operations, enhance customer experiences, or innovate. Azure Machine Learning provides a powerful, flexible machine learning and data science platform to operationalize AI innovation responsibly.  

Read the 2024 Gartner® Magic Quadrant™ for Data Science and Machine Learning Platforms report.

Learn more about Microsoft’s placement in the blog post “Gartner® Magic Quadrant™ for Cloud AI Developer Services.”

Explore more on the Microsoft Customer Stories blog. 

*Gartner, Magic Quadrant for Data Science and Machine Learning Platforms, By Afraz Jaffri, Aura Popa, Peter Krensky, Jim Hare, Raghvender Bhati, Maryam Hassanlou, Tong Zhang, 17 June 2024. 

Gartner, Magic Quadrant for Cloud AI Developer Services, Jim Scheibmeir, Arun Batchu, Mike Fang, Published 29 April 2024. 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved. 

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from this link. 
The post Microsoft is a Leader in the 2024 Gartner® Magic Quadrant™ for Data Science and Machine Learning Platforms  appeared first on Azure Blog.
Source: Azure

Build exciting career opportunities with new Azure skilling options 

Microsoft Build is more than just a tech conference—it’s a celebration of innovation, a catalyst for growth, and a gateway to unlocking your professional potential through skilling opportunities on Microsoft Learn. In this blog, we’ll look back at some of the most exciting Microsoft Azure tools that were featured at Build 2024 and put you on the path to attain proficiency.  

Start your skilling journey today on Microsoft Learn

Jump to a section: 

Unleash the power of AI by mastering intelligent app development 

Empower your developers to achieve improved productivity 

Accelerate your cloud journey with seamless Azure migration 

Master cloud-scale data analysis for insightful decision making 

Unlock maximum cloud efficiency and savings with Azure 

Unleash the power of AI by mastering intelligent app development 

Azure provides a comprehensive ecosystem of services, tools, and infrastructure tailored for the entire AI lifecycle. At Build we highlighted how your team can efficiently develop, scale, and optimize intelligent solutions that use cutting-edge technologies. 

This year at Build, Microsoft announced the general availability of Microsoft Azure AI Studio, where developers can build and customize models. We recently dropped an Azure Enablement Show episode that guides viewers through building their own Copilot using AI Studio. Watch a demonstration of how to use prompt flow to create a custom Copilot, chat with the AI model, and then deploy it as an endpoint. 

Another episode focuses on new Microsoft Azure Cosmos DB developer guides for Node.js and Python, as well as a learning path for building AI chatbots using Azure Cosmos DB and Microsoft Azure OpenAI Service. You’ll learn how to set up, migrate, manage, and utilize vCore-based Azure Cosmos DB for MongoDB to create generative AI apps, culminating in a live demo of an AI chatbot. 

If that Azure Enablement Show episode piques your interest in Azure Cosmos DB, check out the Microsoft Developers AI Learning Hackathon, where you’ll further explore the world of AI, learn how to build innovative apps using Azure Cosmos DB, and get the chance to win prizes! To help you prepare for the hackathon, we have a two-part series to guide you through building AI apps with Azure Cosmos DB, including deep dives into AI fundamentals, the Azure OpenAI API, vector search, and more.  

You can also review our official collection of Azure Cosmos DB learning resources, which includes lessons, technical documentation, and sample code.  

Looking for a more structured lesson plan? Our newly launched Plans on Microsoft Learn now provides guided learning for top Azure tools and solutions, including Azure Cosmos DB. Think of it as a structured roadmap for you or your team to acquire new skills, offering focused content, clear milestones, and support to speed up the learning process. Watch for more official Plans on Microsoft Learn over the coming months! 

There’s even more to learn about building intelligent AI apps with other exciting Azure tools: two official collections on Azure Kubernetes Service, Build Intelligent Apps with AI and cloud-native technologies and Taking Azure Kubernetes Service out of the Cloud and into your World, plus Build AI Apps with Azure Database for PostgreSQL.  

Empower your developers to achieve improved productivity 

Accelerating developer productivity isn’t just about coding faster; it’s about unlocking innovation, reducing costs, and delivering high-quality software that drives business growth. Azure developer tools and services empower you to streamline processes, automate workflows, and use advanced technologies like AI and machine learning. 

Join another fun episode of the Azure Enablement Show to discover Microsoft’s skilling resources and tools to help make Python coding more efficient. Learn how to build intelligent apps with Azure’s cloud, AI, and data capabilities and follow along with hands-on modules covering Python web app deployment and machine learning model building on Azure. 

We also have three official collections of learning resources that tackle different aspects of developer productivity:  

Microsoft Developer Tools @ Build 2024: With cutting-edge developer tools and insights, we’ll show you how to create the next generation of modern, intelligent apps. Learn how you can build, test, and deploy apps from the cloud with Microsoft Dev Box and Microsoft Visual Studio, and how Microsoft Azure Load Testing and Microsoft Playwright Testing make it easy to test modern apps.  

Accelerate Developer Productivity with GitHub and Azure for Developers: Continue unlocking the full coding potential in the cloud with GitHub Copilot. Through a series of videos, articles, and activities, you’ll see how GitHub Copilot can assist you and speed up your productivity across a variety of programming languages and projects.  

Secure Developer Platforms with GitHub and Azure: Learn how to elevate your code security with GitHub Advanced Security, an add-on to GitHub Enterprise. Safeguard your private repositories at every development stage with advanced features like secret scanning, code scanning, and dependency management. 

Accelerate your cloud journey with seamless Azure migration

Migrating to Azure empowers organizations to unlock a world of opportunities. At Build we demonstrated how, by using the robust and scalable Azure cloud platform, businesses can modernize their legacy systems, enhance security and compliance, and integrate with AI.  

Looking to get more hands-on with Azure migration tools? Check out our lineup of Microsoft Azure Virtual Training Days. These free two-day events (about four hours of training in total) are packed with practical knowledge and hands-on exercises for in-demand skills.  

Data Fundamentals: In this foundational-level course, you’ll learn core data concepts and skills in Azure cloud data services. Find out the difference between relational and non-relational databases, explore Azure offerings like Azure Cosmos DB, Microsoft Azure Storage, and gain insights into large-scale analytics solutions such as Microsoft Azure Synapse Analytics and Microsoft Azure Databricks.  

Migrate and Secure Windows Server and SQL Server Workloads: This comprehensive look at migrating and securing on-premises Windows Server and SQL Server workloads to Azure offers insights into assessing workloads, selecting appropriate migration options, and using Azure flexibility, scalability, and cost-saving features.  

Microsoft Azure SQL is an intelligent, scalable, and secure cloud database service that simplifies your operations and unlocks valuable insights for your business. The curated learning paths in our official Azure SQL collection will enable you to focus on the domain-specific database administration and optimization activities that are critical for your business. 

For an even more structured learning experience, there’s our official Plans on Microsoft Learn offering, Migrate and Modernize with Azure Cloud-Scale Database to Enable AI.  Designed to equip you with the expertise needed to harness the full potential of Azure SQL, Microsoft Azure Database for MySQL, Microsoft Azure Database for PostgreSQL, and Microsoft SQL Server enabled by Microsoft Azure Arc for hybrid and multi-cloud environments, this plan will immerse you in the latest capabilities and best practices.  

Master cloud-scale data analysis for insightful decision making 

Cloud-scale analytics help businesses gain valuable insights and make data-driven decisions at an unprecedented speed. Our unified analytics platform, Microsoft Fabric, simplifies data integration, enables seamless collaboration, and democratizes access to AI-powered insights, all within a single, integrated environment. 

Looking to take the Fabric Analytics Engineer Associate certification exam? Get ready with Microsoft Fabric Learn Together, a series of live, expert-led sessions designed to help you build proficiency in tools such as Apache Spark and Data Factory and understand concepts from medallion architecture design to lakehouses.   

There’s still time to register for our Virtual Training Day session, Implementing a Data Lakehouse with Microsoft Fabric, which aims to give data pros hands-on technical experience unifying data analytics with AI and extracting critical insights. Key objectives include identifying Fabric core workloads to deliver insights faster and setting up a data lakehouse foundation for ingestion, transformation, modeling, and visualization.  

And of course, don’t miss out on our official collection of learning resources for Microsoft Fabric and Azure Databricks, featuring modules on implementing a data lakehouse and using Copilot in Fabric, plus workshops on building retrieval augmented generation (RAG) applications and on Azure Cosmos DB for MongoDB vCore. For a more curated experience, our Plans on Microsoft Learn collection will get you started on ingesting data with shortcuts, pipelines, or dataflows; transforming data with dataflows, procedures, and notebooks; and storing data in the lakehouse and data warehouse.  

Unlock maximum cloud efficiency and savings with Azure 

Cost optimization on Azure is a strategic approach to managing your cloud resources efficiently, ensuring optimal performance while minimizing spend. By right-sizing virtual machines (VMs), utilizing reserved instances or savings plans, and taking advantage of tools like Microsoft Azure Advisor, you can maximize the value of your Azure investment. 

On another fun episode of our Azure Enablement Show, we explore the Learn Live resources available to help you optimize your cloud adoption journey. Confident cloud operations require an understanding of how to manage cost efficiency, reliability, security, and sustainability. Whether you’re an IT pro or just testing the waters, this two-part episode will point you to the learning resources you need.  

There’s always more to explore at Microsoft Learn 

Like every year, Microsoft Build delivered exciting new products and advancements in Azure technology. Don’t get left behind! Start your skilling journey today at Microsoft Learn.  
The post Build exciting career opportunities with new Azure skilling options  appeared first on Azure Blog.
Source: Azure