Azure Log Analytics – Container Monitoring Solution general availability, CNCF Landscape

Docker containers are an emerging technology that helps developers and DevOps teams with easy provisioning and continuous delivery in modern infrastructure. Because containers can be ubiquitous in an environment, monitoring them is essential. We've developed a monitoring solution that provides deep insights into containers, supporting the Kubernetes, Docker Swarm, Mesos DC/OS, and Service Fabric container orchestrators on multiple OS platforms. We are excited to announce the general availability of the Container Monitoring management solution on Azure Log Analytics, available in the Azure Marketplace today.

"Every community contribution helps DC/OS become a better platform for running modern applications, and the addition of Azure Log Analytics Container Monitoring Solution into DC/OS Universe is a meaningful contribution, indeed," said Ravi Yadav, Technical Partnership Lead at Mesosphere. "DC/OS users are running a lot of Docker containers, and having the option to manage them with a tool like Azure Log Analytics Container Monitoring Solution will result in a richer user experience."

Microsoft recently joined the Cloud Native Computing Foundation (CNCF), and we continue to invest in open source projects. Azure Log Analytics is now part of the CNCF Landscape under the Monitoring category.

With the Container Monitoring solution, you can:

See information about all container hosts in a single location
Know which containers are running, what image they’re running, and where they’re running
See an audit trail for actions on containers
Troubleshoot by viewing and searching centralized logs without remote login to the Docker hosts
Find containers that may be “noisy neighbors” and consuming excess resources on a host
View centralized CPU, memory, storage, and network usage and performance information for containers
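The "noisy neighbor" check above boils down to comparing each container's share of host resources against a threshold. As a rough illustration of the idea (hypothetical data and threshold, not the solution's actual query logic), in Python:

```python
def find_noisy_neighbors(cpu_percent_by_container, threshold=50.0):
    """Return names of containers whose CPU share on a host exceeds the threshold."""
    return sorted(
        name for name, cpu in cpu_percent_by_container.items()
        if cpu > threshold
    )

# Hypothetical per-host CPU readings an agent might collect.
host_stats = {"web-1": 12.5, "batch-7": 81.0, "cache-2": 9.3}
noisy = find_noisy_neighbors(host_stats)
```

In the real solution, the equivalent filtering happens in Log Analytics search over the collected performance records.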


New features available as part of the general availability include:

We've added new features to provide better insights into your Kubernetes cluster. With the new features, you can more easily narrow down container issues within a Kubernetes cluster. Now you can use search filters on your own custom pod labels and Kubernetes cluster hierarchies. With container process information, you can quickly see container process status for deeper health analysis. These features are only for Linux—additional Windows features are coming soon.

Kubernetes cluster awareness with at-a-glance hierarchy inventory, from Kubernetes cluster down to pods
New Kubernetes events
Capture of custom pod labels, with complex custom search filters
Container process information
Container node inventory, including storage, network, orchestration type, and Docker version
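Filtering on custom pod labels works like tag matching: every container inherits its pod's labels, and a search filter selects the pods whose labels match. A minimal sketch of the idea (hypothetical pod data; the real solution exposes this through Log Analytics search):

```python
pods = [
    {"pod": "frontend-abc", "labels": {"app": "frontend", "tier": "web"}},
    {"pod": "orders-xyz", "labels": {"app": "orders", "tier": "backend"}},
]

def filter_pods(pods, **selector):
    """Return names of pods whose labels contain every key/value in the selector."""
    return [p["pod"] for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

web_pods = filter_pods(pods, tier="web")
```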

For more information about how to use Container Monitoring solution, as well as the insights you can gather, see Containers solution in Log Analytics.

Learn more by reading previous blogs on Azure Log Analytics Container Monitoring.

How do I try this?

You can get a free subscription for Microsoft Azure so that you can test the Container Monitoring solution features.

How can I give feedback?

There are a few different routes to give feedback:

UserVoice: Post ideas for new Azure Log Analytics features to work on. Visit the UserVoice page.
Forums: Visit the Azure Log Analytics Forums.
Email: Tell us whatever is on your mind by emailing us at OMScontainers@microsoft.com.

We plan on enhancing monitoring capabilities for containers. If you have feedback or questions, please feel free to contact us!
Source: Azure

Azure Monitor: Enhanced capabilities for routing logs and metrics

Today we are pleased to announce a new set of capabilities within Azure Monitor for routing your Azure resource diagnostic logs and metrics to storage accounts, Event Hubs namespaces, or Log Analytics workspaces. You can now create multiple resource diagnostic settings per resource, enabling you to route different permutations of log categories and metrics to different destinations (in public preview) and route your metrics and logs to a destination in a different subscription. In this post, we’ll walk through these new capabilities and how you can start using them today.

Creating multiple diagnostic settings

A resource diagnostic setting is a rule on an individual Azure resource that determines what logs and metrics among those available for that resource type are to be collected and to where that data will be sent. Resource diagnostic settings have three destinations, or ‘data sinks’ for monitoring data:

Storage accounts for archival
Log Analytics workspaces for search and analytics
Event Hubs namespaces for integration with 3rd party tools or custom solutions

Previously, only one diagnostic setting could be set per resource. We heard feedback that this was too restrictive in two ways:

It limited you to sending monitoring data to only one instance of each destination. For example, you could only send the logs and metrics for a particular Application Gateway to one storage account. If you have two independent teams for security and monitoring that each wanted to consume this data, this limited your ability to offer that data separately to both teams.
It required that you route the same permutation of log categories and metrics to all destinations. For example, it was impossible to route a particular Batch Account’s service logs into Log Analytics while sending that same account’s metrics into a storage account.

Today, we are introducing the public preview of the ability to create multiple diagnostic settings on a single resource, removing both restrictions above. Let’s take a quick look at how you can set this up in the Azure Portal. Navigate to the Monitor blade, and click on “Diagnostic Settings.”

You’ll notice we’ve renamed this section “Diagnostic Settings” from “Diagnostic Logs” to better reflect the ability to route both log and metric data from a resource. In this blade you’ll see a list of resources that support diagnostic settings. Clicking on one will show you a list of all settings on that resource.

If none exist, you will be prompted to create one.

Clicking “turn on diagnostics” will present the familiar blade for setting a diagnostic setting, but now you will see that a field for “name” has been added. Give your setting a name to differentiate between multiple settings on the same resource.

Click “save.” Returning to the previous blade, you will see the created setting, and you can add an additional setting.

Adding more diagnostic settings will add them to this list. Note that you can have a maximum of three diagnostic settings per resource.

You can also do this using the REST API or in an ARM template. PowerShell support is coming soon. Note that routing data to an additional destination of the same type will incur a service fee per our billing information.
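Conceptually, each diagnostic setting is a named rule pairing a subset of log categories and metrics with one destination, with at most three settings per resource. A sketch of that model (plain Python, not the actual Azure Monitor API):

```python
MAX_SETTINGS_PER_RESOURCE = 3

class Resource:
    """Toy model of an Azure resource with named diagnostic settings."""

    def __init__(self, name):
        self.name = name
        self.diagnostic_settings = {}

    def add_diagnostic_setting(self, setting_name, categories, destination):
        """Route the chosen log categories/metrics to one destination."""
        if len(self.diagnostic_settings) >= MAX_SETTINGS_PER_RESOURCE:
            raise ValueError("maximum of three diagnostic settings per resource")
        self.diagnostic_settings[setting_name] = {
            "categories": set(categories),
            "destination": destination,
        }

# Two teams consume different permutations of the same gateway's data.
gw = Resource("app-gateway-1")
gw.add_diagnostic_setting("security-team", ["AccessLog"], "storage://sec-archive")
gw.add_diagnostic_setting("ops-team", ["AccessLog", "Metrics"], "workspace://ops")
```

This is exactly the flexibility the preview removes restrictions for: the same resource can feed multiple instances of the same destination type, each with its own category/metric mix.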

Writing monitoring data across subscriptions

Previously, you could only route metrics and log categories for a resource to a storage account, Event Hubs namespace, or Log Analytics workspace within the same subscription as the resource emitting data. For companies with centralized monitoring teams responsible for keeping track of many subscriptions, we heard that maintaining a destination resource per subscription was tedious, requiring knowledge of the unique storage account (or Event Hubs namespace or workspace) for each subscription. Now you can configure a diagnostic setting to send monitoring data to a destination in a different subscription, provided that your user account has appropriate write access to that destination resource.

Note that authentication is done within a particular Azure Active Directory tenant, so monitoring data can only be routed to a destination within the same tenant as the resource emitting data.

These new capabilities are rolling out to all public Azure regions beginning today. Try them out and let us know your feedback through our UserVoice channel or in the comments below.
Source: Azure

Azure Analysis Services web designer adds visual model editing to the preview

Last month we released a preview of the Azure Analysis Services web designer. This new browser-based experience will allow developers to start creating and managing Azure Analysis Services (AAS) semantic models quickly and easily. While SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) are still the primary tools for development, this new experience is intended to make modeling fast and easy. It is great for getting started on a new model or to do things such as adding a new measure to an existing model.

With this round of updates, we are adding the most significant modeling feature yet, the ability to edit your model visually with the new diagram editor.

This new diagram editor was designed with models containing a large number of tables in mind. To make the best use of screen space, you are not required to make all the tables visible on the diagram at once. Tables can be dragged into the diagram from the table list and can be removed from view by clicking the three dots at the top right of the table.

New measures can also be added to a table by clicking those same three dots and then clicking measures to bring up the measure editor. When you want to change the properties of a table, measure, or column, you no longer need to do so one object at a time. With multi-select, you can select as many objects as you want and update the properties for all of them in one batch.

By dragging a column from one table to another, a relationship will be created between those tables. You can edit the relationship by clicking on the relationship line which will bring up the relationship editor.

While the new diagram editor is a great way to easily understand and make bulk changes to your model, you still have all of the power of the existing JSON editor. You can switch between the editors by changing the view at the top of the screen.

As you enhance your model, you can continuously test it out by switching to the query view at the top of the screen.

The query view will give you a preview of what your model will look like when used in tools like the Power BI desktop, and it will also let you run sample queries against your model so that you can check your data.

You can try the Azure Analysis Services web designer today by launching it from a server in the Azure portal.

Submit your own ideas for features on our feedback forum. Learn more about Azure Analysis Services and the Azure Analysis Services web designer.
Source: Azure

Announcing the public preview of Azure Archive Blob Storage and Blob-Level Tiering

From startups to large organizations, our customers in every industry have experienced exponential growth of their data. A significant amount of this data is rarely accessed but must be stored for a long period of time to meet business continuity and compliance requirements. Examples include employee data, medical records, customer information, financial records, backups, etc. Additionally, recent and coming advances in artificial intelligence and data analytics are unlocking value from data that might have previously been discarded. Customers want to keep more of these data sets for a longer period but need a scalable and cost-effective solution to do so.

Last year, we launched Cool Blob Storage to help customers reduce storage costs by tiering their infrequently accessed data to the Cool tier. Today we’re announcing the public preview of Archive Blob Storage designed to help organizations reduce their storage costs even further by storing rarely accessed data in our lowest-priced tier yet. Furthermore, we’re excited to introduce the public preview of Blob-Level Tiering enabling you to optimize storage costs by easily managing the lifecycle of your data across these tiers at the object level.

The CEO of HubStor, a leading enterprise backup and archiving company, stated: “We are jumping for joy to see the amazing design Microsoft successfully implemented. Azure Archive Blob Storage is indeed an excellent example of Microsoft leapfrogging the competition.”

Azure Archive Blob Storage

Azure Archive Blob storage is designed to provide organizations with a low-cost means of delivering durable, highly available, secure cloud storage for rarely accessed data with flexible latency requirements (on the order of hours). See Azure Blob Storage: Hot, cool, and archive tiers to learn more.

The Archive tier, in addition to Hot and Cool access tiers, is now available in Blob Storage accounts. Archive Storage characteristics include:

Cost-effectiveness: The Archive access tier is our lowest-priced storage offering. Customers with long-term storage that is rarely accessed can take advantage of this. For more details on regional preview pricing, see Azure Storage Pricing.
Seamless Integration: Customers use the same familiar operations on blobs in the Archive tier as on blobs in the Hot and Cool access tiers. This will enable customers to easily integrate the new access tier into their applications.
Availability: The Archive access tier will provide the same 99% availability SLA (at General Availability (GA)) offered by the Cool access tier.
Durability: All access tiers including Archive are designed to offer the same high durability that you have come to expect from Azure Storage with the same data replication options available today.
Security: All data in the Archive access tier is automatically encrypted at rest.

Blob-Level Tiering:  easily optimize storage costs without moving your data

To simplify data lifecycle management, we now allow customers to tier their data at the blob level.  Customers can easily change the access tier of a blob among the Hot, Cool, or Archive tiers as usage patterns change, without having to move data between accounts. Blobs in all three access tiers can co-exist within the same account.
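Blob-level tiering can be pictured as a per-blob attribute rather than an account-level property. The sketch below is a conceptual model only (the real operation is a Set Blob Tier call against the storage service, and Archive reads additionally require rehydration):

```python
TIERS = {"Hot", "Cool", "Archive"}

class Blob:
    """Toy model of a blob whose access tier changes in place."""

    def __init__(self, name, tier="Hot"):
        self.name = name
        self.tier = tier

    def set_tier(self, new_tier):
        """Change the access tier; no data moves between accounts."""
        if new_tier not in TIERS:
            raise ValueError(f"unknown tier: {new_tier}")
        self.tier = new_tier

    def read(self):
        # Archived data must be rehydrated to Hot or Cool before reads.
        if self.tier == "Archive":
            raise RuntimeError("blob is archived; rehydrate to Hot or Cool first")
        return b"...blob bytes..."

records = Blob("medical-records-2016.csv")
records.set_tier("Archive")  # cheapest tier for rarely accessed data
```

Because the tier is per blob, Hot, Cool, and Archive objects co-exist in the same account, which is what makes lifecycle management possible without copying data.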

Flexible management

Archive Storage and Blob-level Tiering will be available on all Blob Storage accounts. For customers with large volumes of data in General Purpose accounts, we will allow upgrading your account to get access to Cool, Archive, and Blob-level Tiering at GA.

Initially, you can access the feature using the .NET (see Figure 1), Python (preview), or Node.js client libraries, or the REST APIs. Support for the Java client library and the portal (see Figure 2) will roll out over the next week. Other SDKs and tools will be supported in the next few months.

Figure 1: Set blob access tier using .NET client library

Figure 2: Set blob access tier in portal

Pricing

Pricing for Azure Archive Blob Storage during preview will be reduced. Please refer to the Azure Blob Storage pricing page for more details.

How to get started

To enroll in the public preview, you will need to submit a request to register this feature to your subscription. After your request is approved (within 1-2 days), any new LRS Blob Storage account you create in US East 2 will have the Archive access tier enabled, and all new accounts in all public regions will have blob-level tiering enabled. During preview, only LRS accounts will be supported but we plan to extend support to GRS and RA-GRS accounts (new and existing) as well at GA. Blob-level tiering will not be supported for any blob with snapshots. As with most previews, this should not be used for production workloads until the feature reaches GA.

To submit a request, run the following PowerShell or CLI commands.

PowerShell

Register-AzureRmProviderFeature -FeatureName AllowArchive -ProviderNamespace Microsoft.Storage

This will return the following response:

FeatureName  ProviderName      RegistrationState
-----------  ------------      -----------------
AllowArchive Microsoft.Storage Pending

It may take 1-2 days to receive approval.  To verify successful registration approval, run the following command:

Get-AzureRmProviderFeature -FeatureName AllowArchive -ProviderNamespace Microsoft.Storage

If the feature was approved and properly registered, you should receive the following output:

FeatureName  ProviderName      RegistrationState
-----------  ------------      -----------------
AllowArchive Microsoft.Storage Registered

CLI 2.0

az feature register --namespace Microsoft.Storage --name AllowArchive

This will return the following response:

{
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.Features/providers/Microsoft.Storage/features/AllowArchive",
"name": "Microsoft.Storage/AllowArchive",
"properties": {
"state": "Pending"
},
"type": "Microsoft.Features/providers/features"
}

It may take 1-2 days to receive approval.  To verify successful registration approval, run the following command:

az feature show --namespace Microsoft.Storage --name AllowArchive

If the feature was approved and properly registered, you should receive the following output:

{
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.Features/providers/Microsoft.Storage/features/AllowArchive",
"name": "Microsoft.Storage/AllowArchive",
"properties": {
"state": "Registered"
},
"type": "Microsoft.Features/providers/features"
}

Get it, use it, and tell us about it

We’re confident that Azure Archive Blob Storage will provide another critical element for optimizing your organization’s cloud data storage strategy. As this is a preview, we look forward to hearing your feedback on these features, which you can send by email to us at archivefeedback@microsoft.com.
Source: Azure

Azure Network Watcher introduces Connectivity Check (Preview)

Diagnosing network connectivity and performance issues in the cloud can be a challenge as your network grows in complexity. We are pleased to announce the preview of a new feature to check network connectivity in a variety of scenarios involving VMs.

The Azure Network Watcher Connectivity Check feature helps to drastically reduce the amount of time needed to find and detect connectivity issues in the infrastructure. The results returned can provide valuable insights to whether a connectivity issue is due to a platform or a potential user configuration. Network Watcher Connectivity Check can be used from the Azure portal, using PowerShell, Azure CLI, and REST API.

Connectivity Check is supported in a variety of scenarios: VM to VM, VM to an external endpoint, and VM to an on-premises endpoint. Using a common and typical network topology, the example below illustrates how Connectivity Check can help resolve network reachability issues from the Azure portal. A VNet hosts a multi-tier web application across four subnets, among them an application subnet and a database subnet.

Figure 1 – Multi-tier web application

On the Azure portal, navigate to Azure Network Watcher and under Network Diagnostic Tools click on Connectivity Check. Once there, you can specify the Source and Destination VM and click the “Check” button to begin the connectivity check.

A status indicating reachable or unreachable is returned once the connectivity check completes, along with the number of hops and the minimum, average, and maximum latency to reach the destination.
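Under the hood, the latency figures amount to probing the destination repeatedly and summarizing the round-trip samples. A rough sketch of that summary step (made-up probe values; not the actual Network Watcher implementation):

```python
def summarize_latency(samples_ms):
    """Return (min, avg, max) latency from round-trip samples in milliseconds."""
    return min(samples_ms), sum(samples_ms) / len(samples_ms), max(samples_ms)

# Hypothetical round-trip times from three probes to the destination.
probes = [2.0, 3.0, 4.0]
lo, avg, hi = summarize_latency(probes)
```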

Figure 2 – Connectivity Check – access from portal

In this example, a connectivity check was done from the VM running the application tier to the VM running the database tier. The status is returned as unreachable and, importantly, one of the hops shows a red status. Clicking on the hop reveals an NSG rule that is blocking all traffic, thereby blocking end-to-end connectivity.

Figure 3 – Unreachable status

The NSG rule configuration error was rectified and the connectivity check was repeated, as illustrated below; the results now indicate end-to-end connectivity. The network latency between source and destination, along with hop information, is also provided.

Figure 4 – Reachable status

The destination for Connectivity Check can be an IP address, an FQDN, or an ARM URI.
 
We believe the Connectivity Check feature will give you deeper insights into network performance in Azure. We welcome you to reach out, as your feedback from using Network Watcher is crucial to help steer product development and ecosystem growth.
Source: Azure

New Offers in Azure Marketplace!

14 great new cloud offerings were published to Azure Marketplace last month. Check ‘em out!

– The Azure Marketplace Team

Denodo Platform: A leading data virtualization product that provides agile, high-performance data integration and data abstraction across the broadest range of enterprise, cloud, and big data sources, and exposes real-time data models to expedite use by other applications and business users. Learn more on Azure Marketplace.

Kinetica (BYOL): A GPU-accelerated, in-memory analytics database that delivers truly real-time response to queries on large, complex, and streaming data sets, with 100x faster performance than traditional databases. Learn more on Azure Marketplace.

Informatica Big Data Management 10.1.1. U2 (BYOL): provides data management solutions to quickly and holistically integrate, govern, and secure big data for your business. Learn more on Azure Marketplace.

Lumify: Altamira LUMIFY is a powerful big data fusion, analysis, and visualization platform that supports the development of actionable intelligence. Learn more on Azure Marketplace.

AppGate: AppGate for Azure supports fine-grained, dynamic access control to Azure resources. Learn more on Azure Marketplace.

NetConnect: NetConnect secures data by locking it within the cloud environment, and enabling users to remotely interact with files and applications as if they were local to their device. Learn more on Azure Marketplace.

SQLstream Blaze: Enterprises like Amazon use SQLstream Blaze to easily build, test, deploy, or update streaming applications in minutes that keep operations running at optimal efficiency, protect systems from security threats, and support real-time customer engagement. Learn more on Azure Marketplace.

CARTO Builder: Empowers business analysts to optimize operations and quickly deploy location applications with drag-and-drop analytics capabilities. Learn more on Azure Marketplace.

vSEC:CMS C-Series: The vSEC:CMS system is fully functional with minidriver enabled smart cards and it streamlines all aspects of a smart card management system by connecting to enterprise directories. Learn more on Azure Marketplace.

VU Application Server: An authentication server that allows companies, institutions, and organizations to deploy a robust authentication strategy for local and remote access to applications, managing security policies with simplicity in a single, flexible platform. Learn more on Azure Marketplace.

Identity Orchestration and Management Portal: Imagine collapsing multiple management portals for the many online services and on-premise applications into a single management interface in a browser. Learn more on Azure Marketplace.

Solution Templates

Viptela vEdge Cloud Router (3 NICs): Viptela vEdge Cloud is a software router that supports all of the capabilities available on Viptela's industry leading SD-WAN platform. Learn more on Azure Marketplace.

Informatica Big Data Management 10.1.1. U2 (BYOL): provides data management solutions to quickly and holistically integrate, govern, and secure big data for your business. Learn more on Azure Marketplace.

Teradata Server Management: monitors Teradata Database instances and generates alerts related to database and OS errors and operational state changes. Learn more on Azure Marketplace.

Source: Azure

Microsoft and Red Hat Help Accelerate Enterprise Container Adoption

Today we made some exciting announcements along with Red Hat around container support. We’re adding Windows support in OpenShift, OpenShift Dedicated on Azure and SQL Server support in OpenShift and Red Hat Linux to our joint roadmap, and extending the integrated, co-located support that has been a signature of our partnership. You can read all the news in the press release here.

For me, the significance is the impact Red Hat and Microsoft continue to make in the cloud. It wasn’t long ago that the idea of these two companies working together would have been almost inconceivable. Well, it turns out that word does not mean what people thought it did.

We’ve jointly recognized that customers aren’t choosing Red Hat or Microsoft; they have already chosen: they chose to use both technologies. Enterprises all over the world are using Windows and Red Hat Enterprise Linux, Java and .NET, and together Microsoft and Red Hat can serve those customers much better than either of us can alone.

This is why Microsoft joined forces with Red Hat in late 2015. We started by making Red Hat solutions available natively to Microsoft Azure customers, delivering integrated enterprise support across hybrid environments, collaborating on .NET for a new generation of applications, and more. Last year we announced that we would bring SQL Server to Linux, including Red Hat Enterprise Linux. Since then we’ve continued to work together to bring these and other solutions – and, with them, unmatched choice and flexibility – to our customers. In that spirit, with this week’s announcement, a customer running OpenShift can use Linux and Windows containers together in a single cluster, run .NET Core 2.0 in a container on OpenShift, and run SQL Server on either operating system alongside OpenShift, whether on Azure or in their own datacenters. This extends our customers’ ability to choose the right technology for the right job and have those technologies work together. It is pretty cool to see this come to life, even if it was inconceivable not long ago.

As I reflect on today’s announcement, one can’t help but think about Kubernetes. Last month, I talked about some of the upstream CNCF projects we are involved in. Since then, we’ve continued engaging with the community upstream, including the Kubernetes Service Catalog, where Microsoft employees are amongst the top committers, upstream managed/dedicated blob disk contributions, and of course the Open Service Broker API. Working with the community and contributing upstream will continue to be an important part of our relationship with Red Hat.

I want to thank my colleagues at Red Hat for our partnership. We’ve put in a lot of hours together, and it’s always a pleasure. All of us at Microsoft thank you and look forward to continuing to solve problems for our customers together.

To learn more about Red Hat solutions in Microsoft Azure and get started today, visit azure.com/redhat.
Source: Azure

Perform advanced analytics on Application Insights data using Jupyter Notebook

To help you leverage your telemetry data and better monitor the behavior of your Azure applications, we are happy to provide a Jupyter Notebook template that extends the power of Application Insights. Instead of making ad hoc queries in the Application Insights portal when an issue arises, you can now write a Jupyter Notebook that routinely queries for telemetry data, performs advanced analytics, and sends the derived data back to Application Insights for monitoring and alerting. You can execute the Jupyter Notebook using Azure WebJob either on a schedule or via webhook.

Through this approach, you can manipulate and analyze your telemetry data beyond the constraints of query language or limit. You can take advantage of the existing alerting system to monitor the newly derived data, rather than raw instrumentation data. The derived data can also be correlated with other metrics for root cause analysis, used to train machine learning models, and much more. In this blog post, you will find a step-by-step guide for operationalizing this template to perform advanced analytics on your telemetry data, as well as an example implementation.

Create a Jupyter Notebook

Create a new Notebook or clone the template. While Jupyter supports various programming languages, this blog post focuses on performing advanced analytics in Python 2.7.

Query for telemetry data from Application Insights

To query for telemetry data from an Application Insights resource, the Application ID and an API Key are needed. Both can be found in Application Insights portal, on the API Access blade and under Configure.

!pip install --upgrade applicationinsights-jupyter

from applicationinsights_jupyter import Jupyter

API_URL = "https://api.aimon.applicationinsights.io/"
APP_ID = "REDACTED"
API_KEY = "REDACTED"
QUERY_STRING = """customEvents
| where timestamp >= ago(10m) and timestamp < ago(5m)
| where name == 'NodeProcessStarted'
| summarize pids=makeset(tostring(customDimensions.PID)) by cloud_RoleName, cloud_RoleInstance, bin(timestamp, 1m)"""

jupyterObj = Jupyter(APP_ID, API_KEY, API_URL)
jupyterObjData = jupyterObj.getAIData(QUERY_STRING)

Get more information by accessing the API.

Send derived data back to Application Insights

To send data to an Application Insights resource, the Instrumentation Key is needed. It can be found in Application Insights portal, on the Overview blade.

!pip install applicationinsights

from applicationinsights import TelemetryClient

IKEY = "REDACTED"
tc = TelemetryClient(IKEY)

tc.track_metric("crashCount", 1)
tc.flush()

Get more information by accessing the API.

Execute the Notebook using Azure WebJob

To execute the Notebook using Azure WebJob, the Notebook, its dependencies, and the Jupyter server need to be uploaded onto an Azure App Service container.

Prepare the necessary resources

Download the Notebook onto your machine.
Install the Jupyter server using Anaconda.
Execute the Notebook on your machine to install all dependencies, as App Service container does not allow changes to the directories where the modules would otherwise be installed automatically.
Update the path in a dependency to reflect the App Service container’s directory. Replace the first line in Anaconda2/Scripts/jupyter-nbconvert-script.py with
#!D:/home/site/wwwroot/App_Data/resources/Anaconda2/python.exe
Update the local copy of the Notebook, excluding pip commands.
Create a run.cmd file containing the following script
D:\home\site\wwwroot\App_Data\resources\Anaconda2\Scripts\jupyter nbconvert --execute <Your Notebook Name>.ipynb

FTP resources

Obtain deployment credentials and FTP connection information.
FTP the Anaconda2 folder to a new directory in App Service container
D:\home\site\wwwroot\App_Data\resources

Operationalize the Notebook

Create a new Azure WebJob and upload the Notebook and run.cmd file.

An example implementation

We operationalized this template and have been performing advanced analytics on telemetry data of one of our own services.

Our service runs four Node.js processes on each cloud instance. From root cause analysis, we have noticed cases of Node.js crashes. However, due to limitations of the SDK, we cannot log when the crash occurs. So, we created a Jupyter Notebook to analyze the existing telemetry data to detect Node.js crashes.

A custom event NodeProcessStarted is logged when a new Node.js process starts in a cloud instance. Normally, all four processes start nearly simultaneously when they are recycled every 8-11 hours. So, when we see fewer than four NodeProcessStarted events occurring at a different frequency, we can infer that new process(es) started to replace recently crashed process(es).
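The inference above can be sketched as: group NodeProcessStarted events per instance per time window, and flag any window in which fewer than four processes started together. A minimal sketch with hypothetical event data (the real analysis runs over Application Insights query results in the Notebook):

```python
from collections import defaultdict

EXPECTED_PROCESSES = 4

def infer_crashes(events):
    """events: (instance, minute_bucket, pid) tuples from NodeProcessStarted.

    Returns the (instance, minute_bucket) windows in which fewer than four
    processes started together, i.e. likely replacements for crashed processes.
    """
    starts = defaultdict(set)
    for instance, bucket, pid in events:
        starts[(instance, bucket)].add(pid)
    return sorted(key for key, pids in starts.items()
                  if len(pids) < EXPECTED_PROCESSES)

events = [
    # Normal recycle: all four processes start in the same minute.
    ("vm-0", 0, 101), ("vm-0", 0, 102), ("vm-0", 0, 103), ("vm-0", 0, 104),
    # Lone restart later: likely a crashed process being replaced.
    ("vm-0", 37, 205),
]
suspected = infer_crashes(events)
```

The count of suspected windows is exactly the kind of derived metric (crashCount) that can be sent back to Application Insights for alerting.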

In this implemented template, you will see how we query for telemetry data, analyze the data, query for more telemetry data to enrich the analysis, and then send the derived data back to Application Insights.


We hope this template helps you derive actionable insights from telemetry data and better manage your Azure applications.
Source: Azure

Azure Service Bus .NET Standard Client Generally Available

The Azure Service Bus .NET Standard client is generally available. With it comes support for .NET Core and the .NET Framework. As mentioned in an earlier post, it also supports Mono/Xamarin for cross-platform application development. This is only the start of greater things to come.

Here is a full list of the supported platforms. Our client targets .NET Standard version 1.3.

Service Bus .NET Samples

We have queue, topic, and session samples to get you going and we'll be adding more samples over time.

For now try out a sample or two from this list:

Sample for sending and receiving to/from a Service Bus queue using a QueueClient
Sample for sending to a Topic and receiving from a Subscription
Sample for sending and receiving session based messages, great if you need First In First Out (FIFO) order
Sample for sending and receiving to/from a Service Bus queue using MessageSender and MessageReceiver
Sample for sending and receiving session based messages to/from Service Bus queues using SessionClient
Sample for configuring and managing rules for Subscriptions
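The session-based sample in the list above is the one to reach for when you need FIFO ordering: Service Bus delivers messages that share a session ID in the order they arrived. As a rough, language-neutral illustration of that guarantee (a toy model in plain Java with hypothetical names, not the SDK or the broker's implementation):

```java
import java.util.ArrayDeque;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;

// Toy model of session-based FIFO ordering: messages sharing a session ID
// are kept in a per-session queue and handed out in arrival order.
// The real ordering guarantee comes from Service Bus itself; this class
// only illustrates the semantics.
public class SessionFifo {
    private final Map<String, Queue<String>> sessions = new LinkedHashMap<>();

    // Appends a message body to the queue for its session.
    public void send(String sessionId, String body) {
        sessions.computeIfAbsent(sessionId, id -> new ArrayDeque<>()).add(body);
    }

    // Receives the next message for one session, preserving arrival order;
    // returns null when the session has no pending messages.
    public String receive(String sessionId) {
        Queue<String> q = sessions.get(sessionId);
        return (q == null) ? null : q.poll();
    }
}
```

Interleaved sends across sessions do not disturb ordering within a session, which is exactly the property the SessionClient sample relies on.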

Plugins

We also have plugins you can use with this new client, such as the Message ID plugin shown below, and we have one plugin contribution from the community!

Message ID plugin for Azure Service Bus

The Message ID plugin for Azure Service Bus allows the message ID on outgoing messages to be set using custom logic. This is very useful when you want more control over de-duplication: to ensure duplicate messages sent to your queue or topic are removed, set the Message ID property to a value that makes the most sense for your particular scenario.

How to use

To use this plugin, you will need to set up the following:

An Azure subscription
A Service Bus namespace

Below is a simple example of how to use the plugin.

var messageIdPlugin = new MessageIdPlugin((msg) => Guid.NewGuid().ToString("N"));

var queueClient = new QueueClient("{ServiceBusConnectionString}", "{ServiceBusEntityName}");
queueClient.RegisterPlugin(messageIdPlugin);

var message = new Message(Encoding.UTF8.GetBytes("Message with GUID message ID"));

await queueClient.SendAsync(message).ConfigureAwait(false);

// message.MessageId will be assigned a GUID in a 32 digit format w/o hyphens or braces
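What the broker then does with those IDs can be illustrated in a language-neutral way. The sketch below is a toy model in plain Java (hypothetical names, not the broker's implementation): when duplicate detection is enabled, a message whose MessageId was already seen is silently dropped. Note that the real service bounds this by a configurable detection time window, which is omitted here.

```java
import java.util.HashSet;
import java.util.Set;

// Rough illustration of ID-based duplicate detection: a queue that ignores
// any message whose ID has been seen before. The real broker additionally
// limits detection to a configurable time window.
public class DedupQueue {
    private final Set<String> seenIds = new HashSet<>();

    // Returns true if the message was accepted, false if it was
    // dropped as a duplicate of an earlier message ID.
    public boolean enqueue(String messageId, String body) {
        return seenIds.add(messageId);
    }
}
```

This is why a deliberate, scenario-specific MessageId (for example, an order number) is more useful for de-duplication than a random GUID, which by construction never repeats.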

Open Source

This .NET Standard client is open source, and if you want to contribute, you can! You can submit a code fix for a bug, request a new feature, or provide other feedback in our GitHub repo.

Thank you to our community members who helped us to get here!

You can find the NuGet package here and documentation here.
Source: Azure

Azure Service Bus Java Client Generally Available

The Azure Service Bus team is extremely excited to announce general availability of our Java client library, version 1.0.0. It lets customers enjoy a solid Java experience with Azure Service Bus, complete with native functionality.

Want to use the native client to send scheduled messages? No problem. Want to use sessions on Standard and Premium plans to keep your messages in order? Sure thing.

We had a number of organizations and individuals motivating us to get this out the door. Thank you to them for the patience and the push!

Our Java client (Java 8) is also now on par with our .NET Standard client library (.NET Standard 1.3); if you were to use both, you would notice feature parity and full support for interacting with Azure Service Bus.

Service Bus Java Samples

To run the samples below, replace the following placeholder values in the [sample].java file.

For queue samples

private static final String connectionString = "{connection string}";
private static final String queueName = "{queue name}";

For topic samples

private static final String connectionString = "{connection string}";
private static final String topicName = "{topic name}";
private static final String subscriptionName = "{subscription name}";

Prerequisites

Java 8
An Azure subscription
A Service Bus namespace
A Service Bus queue
or a Service Bus topic

The samples are available here.

Send and receive messages with Queue using QueueClient

This sample demonstrates how to use QueueClient to connect to a queue and then send and receive messages with it. It uses the MessageHandler (also known as MessagePump) model, which simplifies message processing.

Send and receive messages with Topic Subscription using TopicClient and Subscription Client

This sample demonstrates how to use TopicClient and SubscriptionClient to connect to a Topic and its Subscription and to send and receive messages. It uses the MessageHandler (also known as MessagePump) model, which simplifies message processing.

Send and receive messages with Queue using MessageSender and MessageReceiver

This sample demonstrates how to use MessageSender and MessageReceiver to send and receive messages from a Service Bus Queue. With a sender and receiver, the client has full control over how messages are sent and processed.

Send messages with Qpid JMS and Receive with Service Bus Java Client

This sample demonstrates how to send messages via Qpid JMS to Azure Service Bus and receive them with the Service Bus Java client. Please note: only BytesMessage is currently supported; we'll add support for more message types, such as TextMessage, later.

Open Source

This client is open source, and if you want to contribute, you can! You can submit a code fix for a bug, request a new feature, or provide other feedback in our GitHub repo.

You can find the Maven package here and documentation here.
Source: Azure