New features and insights in Azure Monitor

Customers rely on Azure Monitor for full stack observability across their apps and infrastructure in Azure and hybrid environments, helping ensure their workloads are always up and running. Over the past few months, we have released many new capabilities aimed at improving native integration with Azure, enabling easier onboarding at scale, supporting enterprise security and compliance needs, providing rich full stack distributed tracing, and much more. In this blog, we're sharing the newest enhancements from Azure Monitor announced at Microsoft Build, including:

Preview of Azure Monitor Application Insights logs available directly in Log Analytics Workspaces.  
General availability of Azure Monitor for Storage and Azure Cosmos DB, with previews for Key Vault and Azure Cache for Redis.

Be sure to read to the end of the post for the full list of announcements.

Application Insights on Log Analytics Workspaces

Logs from Application Insights could previously only be stored separately for each monitored application, and you had to resort to cross-workspace queries to correlate them with logs in Log Analytics Workspaces. Continuing the integration of Application Insights with Log Analytics, today we are announcing a preview of a major milestone in our Application Performance Management (APM) story. You can now choose to send your Application Insights logs to a common Log Analytics Workspace, keeping application, infrastructure, and platform logs together.

This lets you apply common role-based access control across your resources and removes the need for cross-application or cross-workspace queries (the Application Insights log schema is now integrated and consistent with the other data tables in Log Analytics).
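For example, once an application is connected to a workspace, its telemetry can be queried right next to the other data in that workspace. The sketch below uses the Azure CLI (it may require the log-analytics extension); the workspace GUID is a placeholder, and the AppRequests table and ResultCode column follow the workspace-based Application Insights schema as documented at the time of writing, so adjust them to your own data.

# Placeholders: <workspace-guid> is the workspace customer ID.
# AppRequests/ResultCode follow the workspace-based schema (assumption based on public docs).
$ az monitor log-analytics query \
    --workspace <workspace-guid> \
    --analytics-query "AppRequests | where TimeGenerated > ago(1h) | summarize count() by ResultCode"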


You will now be able to export both your metrics and log data to firewalled Azure Storage accounts or stream them to Azure Event Hubs via Diagnostic Settings. Note that exporting to Storage or Event Hubs is a premium feature that will be billed once it reaches general availability. You can also start optimizing cost through reserved capacity pricing at the workspace level with Log Analytics.
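As a rough sketch of the Diagnostic Settings flow (the resource IDs are placeholders, and the set of exportable log categories may differ from what is shown), a setting that sends platform metrics from an Application Insights resource to a storage account could be created with the Azure CLI as follows; supplying --event-hub and --event-hub-rule instead of --storage-account streams the data to Event Hubs.

# Illustrative only: replace the resource IDs with your own; "AllMetrics" is the
# standard metrics category, while log categories vary by resource type.
$ az monitor diagnostic-settings create \
    --name export-app-telemetry \
    --resource <application-insights-resource-id> \
    --storage-account <storage-account-resource-id> \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'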

One of the biggest benefits of this upgrade is that you will now be able to easily drive enterprise readiness for your application logs, with all the new enhancements coming soon to Azure Monitor Logs (including Customer Managed Key Encryption, Network Isolation with Private Link support, Business Continuity and Disaster Recovery with Globally Distributed Workspaces, High Availability with Availability Zones, Global Availability with Satellite Regions, and much more).

Out-of-the-box insights for Azure Resources

Customers have asked us to provide more out-of-the-box insights like the ones we provide for virtual machines, containers, and networks. We are now happy to provide out-of-the-box insights on more Azure resources using the platform metrics that Azure Monitor already collects. These insights are built on workbooks, a platform for creating and sharing rich interactive reports. You can access any of these insights out of the box and customize them even further. They can be accessed either directly from an individual resource blade, or at scale from the Azure Monitor blade in the Azure portal.

Azure Monitor for Storage is now generally available, offering comprehensive monitoring of your Azure Storage accounts with insights across health and capacity, the ability to focus on hotspots, and help diagnosing latency, throttling, and availability issues.
  
Azure Monitor for Azure Cosmos DB is also now generally available, giving you insights on usage, failures, capacity, throughput, and operations for your Azure Cosmos DB resources across subscriptions.
  
We also announced previews of Azure Monitor for Key Vault and Azure Monitor for Azure Cache for Redis which will provide similar out-of-the-box insights for these resources, helping you use them optimally.

More enhancements

In addition to the two highlights above, here are other exciting announcements from Microsoft Build:

Data Encryption at Rest with Customer Managed Keys (CMK) in Azure Key Vault, providing complete control over log data access with key revocation. Available only when using dedicated clusters with a capacity reservation of more than 1 TB/day.
Out-of-the-box support for Distributed Tracing in Java Azure Functions, providing richer data pertaining to requests, dependencies, logs, and metrics.
Application Insights Codeless Attach for Node.JS Apps on Azure App Services (Linux) with automatic dependency collection.
Notifications with enhanced visibility into all Azure resource changes across subscriptions, using Application Change Analysis.

Next steps with Azure Monitor

To learn more about Azure Monitor and monitoring best practices, check out the documentation and recorded sessions from our recent virtual series. If you have any questions or suggestions, reach out to us through our Tech Community forum.
Source: Azure

General availability of Azure Files on-premises Active Directory Domain Services authentication

Today we're announcing the general availability of Azure Files support for authentication with on-premises Active Directory Domain Services (AD DS).

Since the preview in February 2020, we've received great feedback and growing interest from our customers, especially because of increased work-from-home scenarios. With file shares migrated to the cloud, maintaining access using Active Directory credentials greatly simplifies the IT management experience and provides better mobility for remote work. Most importantly, you do not need to reconfigure your clients. As long as your on-premises servers or user laptops are domain-joined to AD DS, you can sync Active Directory to Azure AD, enable AD DS authentication on the storage account, and mount the file share directly. This makes the migration from on-premises to the cloud extremely simple, as existing Windows ACLs can be seamlessly carried over to Azure Files and continue to be enforced for authorization. Along with private endpoint support for Azure Files, you can access data in Azure Files just like you would on an on-premises file server within the secure network boundary.
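As a minimal sketch, assuming AD DS authentication is enabled on the storage account and the signed-in user has been granted share-level and NTFS permissions, a domain-joined Windows client can mount the share with its existing Kerberos identity and no storage account key (the account and share names are placeholders):

REM Placeholders: <storage-account> and <share-name>; the signed-in Active Directory
REM identity is used for authentication, so no /user option or storage key is passed.
net use Z: \\<storage-account>.file.core.windows.net\<share-name>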

On-premises AD DS integration also simplifies the setup experience of using Azure Files as the user profile storage for virtual desktop scenarios. Leveraging Azure Files for Virtual Desktop Infrastructure (VDI) environments eliminates the need for self-hosting file servers, and AD DS integration extends the same authentication and authorization as traditional file servers to Azure. User profiles are loaded from the file share into the desktop session, supporting a single sign-on experience. You can continue to use the existing AD DS setup and carry over Windows access control lists (ACLs) if needed. Beyond that, Azure Files as a cloud-native file service provides dynamic scaling to better accommodate changes in capacity and traffic patterns. For example, your VDI farm may have started out supporting 500 users, but with more people working remotely you need to scale up to 5,000 users (a 10x increase). The Azure Files premium tier allows you to scale up capacity along with performance on the fly to handle the increase. This also reduces the management overhead of deploying additional file servers and managing the reconfigurations.

To help with your setup, we have collaborated with first- and third-party VDI providers to provide detailed guidance. You can follow this step-by-step walkthrough to configure Windows Virtual Desktop FSLogix profile containers with Azure Files. Citrix has partnered with Microsoft to provide day-one support for Azure Files as a certified storage solution for both User Profile Management and User Personalization Layer technologies. Leveraging Azure Files provides a simple and cost-effective persistent storage solution for user data in your VDI environment. Detailed configuration information for integrating Azure Files with Citrix technologies is available in Citrix Tech Zone.

In addition, we want to share with you the recent updates on Azure Files:

Enhanced data protection with soft delete. To protect your Azure file shares from accidental deletion, we released the preview of soft delete for Azure file shares. Think of soft delete as a recycle bin for your file shares. When a file share is deleted, it transitions to a soft deleted state in the form of a soft deleted snapshot. You can configure how long soft deleted data remains recoverable before it is permanently erased (see the sketch after this list).
Better scaling with max file size increased to 4 TiB. We have increased the maximum size supported for a single file from 1 TiB to 4 TiB. If you are using file shares to store engineering files or virtual hard disks (VHDs), this addresses concerns about the size limit. As you grow your data footprint, you can also scale up the share size at runtime. Larger file sizes are supported over the Server Message Block (SMB) protocol and will be enabled for REST access in the upcoming weeks.
Private endpoint support for Azure File Sync. Starting with version 10.1 of the Azure File Sync agent, you can create private endpoints for your Storage Sync Services. Private endpoints enable you to securely connect to your Azure resources from on-premises using ExpressRoute with private peering or a Site-to-Site VPN connection.
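As referenced above, here is a rough sketch of enabling share soft delete with the Azure CLI. Treat the exact command surface as an assumption: while the feature is in preview it may require a recent CLI version or the storage-preview extension, and the 14-day retention is just an example.

# Illustrative only: names are placeholders, and this command may require a recent
# Azure CLI or the storage-preview extension while soft delete is in preview.
$ az storage account file-service-properties update \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --enable-delete-retention true \
    --delete-retention-days 14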

Getting started

You can deploy a file share and mount it for your data storage within 5 minutes. Here are some materials to help you get started:

What is Azure Files?
Quickstart: Create and manage Azure Files share.
Planning for an Azure Files deployment.
Planning for an Azure File Sync deployment.
Enable Active Directory Domain Services authentication on Azure Files.

You can share your feedback via Azure Storage forum or just send us an email at AzureFiles@microsoft.com.
Source: Azure

Introducing live video analytics from Azure Media Services—now in preview

Azure Media Services is pleased to announce the preview of a new platform capability called Live Video Analytics, or LVA for short. LVA provides a platform for you to build hybrid applications with video analytics capabilities. The platform can capture, record, and analyze live video and publish the results (which could be video and/or video analytics) to Azure services in the cloud and/or on the edge.

With this announcement, the LVA platform is now available as an Azure IoT Edge module via the Azure Marketplace. The module, referred to as “Live Video Analytics on IoT Edge,” is built to run on a Linux x86-64 edge device in your business location. This enables you to build IoT solutions with video analytics capabilities without worrying about the complexity of designing, building, and operating a live video pipeline.

LVA is designed to be a “pluggable” platform, so you can integrate video analysis modules, whether they are custom edge modules you build with open source machine learning models, custom models trained with your own data (using Azure Machine Learning or equivalent services), or Microsoft Cognitive Services containers. You can combine LVA with other Azure edge modules such as Stream Analytics on IoT Edge to process video analytics in real time and drive business actions (for example, generating an alert when a certain type of object is detected with a probability above a threshold).

You can also choose to integrate LVA with Azure services such as Event Hubs (to route video analytics messages to appropriate destinations), Cognitive Services Anomaly Detector (to detect anomalies in time-series data), Azure Time Series Insights (to visualize video analytics data), and so on. This enables you to build powerful hybrid (edge plus cloud) applications.

With LVA on IoT Edge, you can continue to use your CCTV cameras with your existing video management systems (VMS) and build video analytics apps independently. It can also be used in conjunction with existing computer vision SDKs (for example, to extract text from video frames) to build cutting-edge, hardware-accelerated IoT solutions with live video analytics. The diagram below illustrates this process:


Use cases for LVA

With LVA, you can bring the AI of your choice, whether first-party Microsoft AI models, open source models, or third-party models, and integrate it for different use cases.

Retail

Retailers can use LVA to analyze video from cameras in their parking lots to detect and match incoming cars to registered consumers to enable curb-side pickup of items ordered by the consumer via their online store. This enables consumers and employees to maintain a safe physical distance from each other, which is particularly important in the current pandemic environment.

In addition, retailers can use video analytics to understand how consumers view and interact with products and displays in their stores to make decisions about product placement. They can also use real-time video analytics to build interactive displays that respond to consumer behavior.

Transportation

When it comes to transportation and traffic, video analytics can be used to monitor parking spots, track usage to display automated “No parking available” signs, and reroute drivers trying to park. It can also be used in public transportation to monitor queues and crowds and identify capacity needs, enabling organizations to add capacity or open new entrances or exits. By feeding in business data, organizations can adjust pricing in real time based on demand and capacity.

Manufacturing

Manufacturers can use LVA to monitor lines for quality assurance or to ensure safety equipment is being used and procedures are being followed, for example, by monitoring personnel to see that they are wearing helmets where required or checking that face shields are lowered when needed.

Platform capabilities

The LVA on IoT Edge platform offers the following capabilities for you to develop video analytics functionality in IoT solutions.

Process video in your own environment

Live Video Analytics on IoT Edge can be deployed on your own appliance in your business environment. Depending on your business needs, you can process video on your device and send only analytics data to cloud services such as Power BI. This avoids the cost of moving video from the edge to the cloud and helps address any privacy or compliance concerns.

Analyze video with your own AI

Live Video Analytics on IoT Edge enables you to plug in your own AI and stay in control of how your video is analyzed to meet your business needs. You can use your own custom-built AI, open source AI, or AI built by companies specializing in your business domain.

Flexible live video workflows

You can define a variety of live video workflows using the concept of Media Graph. Media Graph lets you define where video should be captured from, how it should be processed, and where the results should be delivered. You accomplish this by connecting components, or nodes, in the desired manner. The diagram below provides a graphical representation of a Media Graph. You can learn more about it on the Media Graph concept page.

Integrate with other Azure services

LVA on IoT Edge can be combined with other Azure services on the edge and in the cloud to build powerful business applications with relative ease. As an example, you can use LVA on IoT Edge to capture video from cameras and sample frames at a frequency of your choice, use an open source AI model such as YOLO to detect objects, use Azure Stream Analytics on IoT Edge to count or filter the detected objects, and use Azure Time Series Insights to visualize the analytics data in the cloud, all while using Azure Media Services to record the video and make it available for consumption by video players in browsers and mobile apps.
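To make the edge workflow concrete, here is a hedged sketch of pushing a media graph topology to the module by invoking its direct methods from the Azure CLI (azure-iot extension). The module name lvaEdge and the GraphTopologySet method follow the LVA quickstarts at the time of writing; the hub, device, and topology file names are placeholders.

# Assumes the azure-iot CLI extension; lvaEdge, GraphTopologySet, and topology.json
# follow the LVA quickstarts and are placeholders for your own deployment.
$ az iot hub invoke-module-method \
    --hub-name <iot-hub-name> \
    --device-id <edge-device-id> \
    --module-id lvaEdge \
    --method-name GraphTopologySet \
    --method-payload "$(cat topology.json)"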

Next steps

To learn more, visit LVA on IoT Edge, watch this demo, and see the LVA on IoT Edge documentation.

Microsoft is committed to designing responsible AI and has published a set of Responsible AI principles. Please review the Transparency Note: Live Video Analytics (LVA) to learn more about designing responsible AI integrations.
Source: Azure

Containerize Your Go Developer Environment – Part 1

When joining a development team, it takes some time to become productive. This is usually a combination of learning the code base and getting your environment set up. Often there will be an onboarding document of some sort for setting up your environment, but in my experience it is never up to date and you always have to ask someone for help with which tools are needed.

This problem continues as you spend more time on the team. You’ll find issues because the version of the tool you’re using is different from the one used by someone on your team, or, worse, by the CI. I’ve been on more than one team where “works on my machine” has been exclaimed or written in all caps on Slack, and I’ve spent a lot of time debugging things on the CI, which is incredibly painful.

Many people use Docker as a way to run application dependencies, like databases, while they’re developing locally and for containerizing their production applications. Docker is also a great tool for defining your development environment in code to ensure that your team members and the CI are all using the same set of tools.

We do a lot of Go development at Docker. The Go toolchain is great, providing fast compile times, built-in dependency management, easy cross compiling, and strong opinions on things like code formatting. Even with this toolchain we often run into issues like mismatched versions of Go, missing dependencies, and slightly different configurations. A good example of this is that we use gRPC for many projects and so require a specific version of protoc that works with our code base.

This is the first of a series of blog posts that will show you how to use Docker for Go development. It will cover building, testing, CI, and optimization to make your builds quicker.

Start simple

Let’s start with a simple Go program:

package main

import "fmt"

func main() {
	fmt.Println("Hello world!")
}

You can easily build this into a binary using the following command:

$ go build -o bin/example .

The same can be achieved using the following Dockerfile:

FROM golang:1.14.3-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /out/example .

FROM scratch AS bin
COPY --from=build /out/example /

This Dockerfile is broken into two stages identified with the AS keyword. The first stage, build, starts from the Go Alpine image. Alpine uses the musl C library and is a minimalist alternative to the regular Debian based Golang image. Note that we can define which version of Go we want to use. It then sets the working directory in the container, copies the source from the host into the container, and runs the go build command from before. The second stage, bin, uses a scratch (i.e.: empty) base image. It then simply copies the resulting binary from the first stage to its filesystem. Note that if your binary needs other resources, like CA certificates, then these would also need to be included in the final image.
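If your binary does need CA certificates, one approach (a sketch, not the only way) is to install them in the build stage and copy the certificate bundle into the scratch stage alongside the binary:

FROM golang:1.14.3-alpine AS build
# ca-certificates provides /etc/ssl/certs/ca-certificates.crt on Alpine
RUN apk add --no-cache ca-certificates
WORKDIR /src
COPY . .
RUN go build -o /out/example .

FROM scratch AS bin
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /out/example /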

As we are leveraging BuildKit in this blog post, you will need to make sure that you enable it by using Docker 19.03 or later and setting DOCKER_BUILDKIT=1 in your environment. On Linux, macOS, or using WSL 2 you can do this using the following command:

$ export DOCKER_BUILDKIT=1

On Windows for PowerShell you can use:

$env:DOCKER_BUILDKIT=1

Or for command prompt:

set DOCKER_BUILDKIT=1

To run the build, we will use the docker build command with the output option to say that we want the result to be written to the host’s filesystem:

$ docker build --target bin --output bin/ .

You will then see that we have the example binary inside our bin directory:

$ ls bin
example

As this binary was built in a Linux container, it will be built for Linux and not necessarily your local OS:

$ file bin/example
bin/example: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=c6gqNQAtPiSgADfwKGvk/Gwjkkghl0lqappMisBRv/A5Lxynxz0n8wTt25vbme/LH_2ZCsnmpIAW_cpaQW7, not stripped

Note that this is only a problem if your host operating system is not Linux and if you are developing software that needs to run on your host.

Simple cross compiling

As Docker supports building images for different platforms, the build command has a platform flag. This allows setting the target OS (i.e.: Linux or Windows) and architecture (i.e.: amd64, arm64, etc.). To build natively for other platforms, your builder will need to be set up for these platforms. As Golang’s toolchain includes cross compilation, we do not need to do any builder setup but can still leverage the platform flag.

We can cross compile the binary for the host operating system by adding arguments to our Dockerfile and filling them from the platform flag of the docker build command. The updated Dockerfile is as follows:

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
COPY . .
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .

FROM scratch AS bin
COPY --from=build /out/example /

Notice that we now have the BUILDPLATFORM variable set as the platform for our base image. This will pin the image to the platform that the builder is running on. In the compilation step, we consume the TARGETOS and TARGETARCH variables, both filled by the build platform flag, to tell Go which platform to build for. To simplify things, and because this is a simple application, we statically compile the binary by setting CGO_ENABLED=0. This means that the resulting binary will not be linked to any C libraries. If your application uses any system libraries (like the system’s cryptography library) then you will not be able to statically compile the binary like this.

To build for your host operating system, you can specify the local platform:

$ docker build --target bin --output bin/ --platform local .

As the docker build command is getting quite long, we can put it into a Makefile (or a scripting language of your choice):

all: bin/example

.PHONY: bin/example
bin/example:
	@docker build . --target bin \
		--output bin/ \
		--platform local

This will allow you to run your build as follows:

$ make bin/example
$ make

We can go a step further and add cross compiling targets to the Dockerfile:

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
COPY . .
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .

FROM scratch AS bin-unix
COPY --from=build /out/example /

FROM bin-unix AS bin-linux
FROM bin-unix AS bin-darwin

FROM scratch AS bin-windows
COPY --from=build /out/example /example.exe

FROM bin-${TARGETOS} AS bin

Above we have two different stages for Unix-like OSes (bin-unix) and for Windows (bin-windows). We then add aliases for Linux (bin-linux) and macOS (bin-darwin). This allows us to make a dynamic target (bin) that depends on the TARGETOS variable, which is automatically set by the docker build platform flag.

This allows us to build for a specific platform:

$ docker build --target bin --output bin/ --platform windows/amd64 .
$ file bin/example.exe
bin/example.exe: PE32+ executable (console) x86-64 (stripped to external PDB), for MS Windows

Our updated Makefile has a PLATFORM variable that you can set:

all: bin/example

PLATFORM=local

.PHONY: bin/example
bin/example:
	@docker build . --target bin \
		--output bin/ \
		--platform ${PLATFORM}

This means that you can build for a specific platform by setting PLATFORM:

$ make PLATFORM=windows/amd64

You can find the list of valid operating system and architecture combinations here: https://golang.org/doc/install/source#environment.

Shrinking our build context

By default, the docker build command will take everything in the path passed to it and send it to the builder. In our case, that includes the contents of the bin/ directory which we don’t use in our build. We can tell the builder not to do this, using a .dockerignore file:

bin/*

Since the bin/ directory contains several megabytes of data, adding the .dockerignore file reduces the time it takes to build by a little bit.

Similarly, if you are using Git for code management but do not use git commands as part of your build process, you can exclude the .git/ directory too.
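For example, assuming the Git metadata is not needed at build time, a .dockerignore covering both exclusions would look like this:

bin/*
.git/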

What’s next?

This post showed how to start a containerized development environment for local Go development, how to build an example CLI tool for different platforms, and how to start speeding up builds by shrinking the build context. In the next post of this series, we will add dependencies to make our example project more realistic, look at caching to make builds faster, and add unit tests.

You can find the finalized source for this example on my GitHub: https://github.com/chris-crone/containerized-go-dev

If you’re interested in builds at Docker, take a look at the Buildx repository: https://github.com/docker/buildx

Read the whole blog post series here.
Source: https://blog.docker.com/feed/