Azure Analysis Services Backup and Restore

This post is authored by Bret Grinslade, Principal Program Manager and Josh Caplan, Senior Program Manager, Azure Analysis Services.

We have received good feedback from customers and partners who are starting to adopt Azure Analysis Services in production. Based on this feedback, this week we are releasing improvements around pricing options, support for backup and restore, and improved Azure Active Directory support. Please try them out and let us know how they work for you.

New Basic Tier

The new Basic tier is designed to support smaller workloads with simpler refresh and processing needs. While you can put multiple models in one Standard instance, this new tier enables you to create models that are more targeted, at less cost. The key difference between Standard and Basic is that the Basic tier does not support some specific enterprise features. Standard supports larger sizes and higher QPUs for concurrent queries, and adds data partitioning for improved processing, translations, perspectives, and DirectQuery. If your solution doesn’t need these capabilities, you can start with Basic. You can also scale up from Basic to Standard at any time. However, once you scale up to the higher tier you can’t scale back down to Basic. As an example, you can scale from B1 to S0 and then from S0 to S1 and back to S0, but you cannot scale from S0 to either the Basic or Developer tier.

Backup & Restore

We have added backup and restore. At a high level, you configure a backup storage location from your subscription for use with your Azure Analysis Services instance. If you do not have a storage account, you will need to create one; you can do this from the Azure Analysis Services blade for backup configuration, or you can create it separately. Once you have associated a storage location, you can back up and restore from that location using TMSL commands or a tool like SQL Server Management Studio (SSMS), which will support this shortly. The documentation has more details on backing up and restoring Azure Analysis Services models.

One note: to restore a 1200 compatibility level tabular model you have created with an on-premises version of SQL Server Analysis Services, you will need to copy it up to the storage account before it can be restored to Azure Analysis Services. The Microsoft Azure Storage Explorer or the AzCopy command-line utility are useful tools for moving large files into Azure. In addition, if you restore a model from an on-premises server, the on-premises domain users will not have access to the model. You will need to remove all of the on-premises users from the model roles, and then you can add Azure Active Directory users to the roles; the roles themselves will be the same. Azure Analysis Services server admins will still have access, as these are Azure AD-based members. The “SkipMembership” setting on restore will be honored in a future service update to make managing cloud-based role membership easier.
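As an illustrative sketch of that copy step, here is the Windows AzCopy syntax current at the time of writing; the backup folder, storage account, container, account key, and .abf file name below are placeholders rather than values from this post:

:: Upload an on-premises .abf backup to the blob container configured for the server
AzCopy /Source:C:\OnPremBackups ^
       /Dest:https://<yourstorageaccount>.blob.core.windows.net/<backup-container> ^
       /DestKey:<your-storage-account-key> ^
       /Pattern:"MyModel.abf"

Once the file is in the configured storage location, it can be restored to the Azure Analysis Services server as described above.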

Improved Azure Active Directory integration

We have also done some work to improve the way Azure Analysis Services works with Azure Active Directory. Starting now, any newly created Azure AS server will be tied to the Azure AD tenant that your Azure subscription is associated with, and only users within that directory will be able to use your Azure AS server if granted access. This means that if a server is created in a subscription that is owned by Contoso.com, then only users within the Contoso.com directory will be able to use those servers. In order to use that server, users must still be granted access to a role within the model. Azure AD supports a few options for allowing users outside of your organization to get access to resources within your tenant. One of these upcoming options will be Azure AD B2B. With B2B, you will be able to grant guest access to your models for users outside of your organization through Azure Active Directory. We are hard at work enabling B2B for Azure Analysis Services end-to-end and will post an update when it is fully available in SSMS and SSDT, as well as other client tools.
Source: Azure

The network is a living organism

Organism, from the Greek word organismos, denotes a complex structure of living elements. But what does a network have in common with organisms?

At Microsoft, we build and manage a hyper-scale global network that’s constantly growing and evolving. Supporting workloads such as Microsoft Azure, Bing, Dynamics, Office 365, OneDrive, Skype, Xbox, and soon LinkedIn, imposes stringent requirements on reliability, security, and performance. Such requirements make it imperative to continually monitor the pulse of the network, to detect anomalies and faults and drive recovery at the millisecond level, much akin to monitoring a living organism.

Monitoring a large network, one that as of April 2017 connects 38 regions, hundreds of datacenters, thousands of servers with several thousand devices, and millions of components, requires constant innovation and invention.

Figure 1. Microsoft global network

Figure 2. Illustration of a physical network in a datacenter

Four core principles drive the design and innovation of our monitoring services:

Speed and accuracy: It’s imperative to detect failures at the sub-second level and drive recovery of the same.
Coverage: From bit errors to bytes, to packets, to protocols, to components, to devices that make up the end-to-end network, our monitoring services must cover them all.
Scale: The services must process petabytes of logs, millions of events, and thousands of correlations that are spread over several thousand miles of connectivity across the face of the planet.
Optimize based on real user metrics: Our monitoring services must use metrics from a network topology level—within a rack, to a cluster, to a datacenter, to a region, to the WAN and the edge—and they must have the capability to zoom in and out.

We have built innovations such as PingMesh and NetBouncer to proactively detect and localize network issues. These services are always on and monitor the pulse of our network for latency and packet drops.

PingMesh uses lightweight TCP probes (consuming negligible bandwidth) for probing thousands of peers for latency measurement (RTT, or round trip time) and detects whether the issue is related to the physical network. RTT measurement is a good tool for detecting network reachability and packet-level latency issues.

After a latency deviation or packet drop is discovered, NetBouncer’s machine learning algorithms are then used to filter out transient issues, such as top-of-rack reboots for an upgrade. After completing temporal analysis, in which we look at historical data and performance, we can confidently classify the incident as a network issue and accurately localize the faulty component. After the issue is localized, we can auto-mitigate it by rerouting the impacted traffic, and then either rebooting or removing the faulty component. In the following figure, green, yellow, and red visualize network latency ranges at the 99th percentile between source-destination rack pairs.

Figure 3. Examples of network latency patterns for known failure modes

In some customer incidents, the incident might need deeper investigation by an on-call engineer to localize and find the root cause. We needed a troubleshooting tool to efficiently capture and analyze the life of a packet through every network hop in its path. This is a difficult problem because of the necessary specificity and scale for packet-level analysis in our datacenters, where traffic can reach hundreds of terabits per second. This motivated us to develop a service called Everflow—it’s used to troubleshoot network faults using packet-level analysis. Everflow can inject traffic patterns, mirror specific packet headers, and mimic the customer’s network packet. Without Everflow, it would be hard to recreate the specific path taken by a customer’s packet; therefore, it would be difficult to accurately localize the problem. The following figure illustrates the high-level architecture of Everflow.

Figure 4. Packet-level telemetry collection and analytics using Everflow

Everflow is one of the tools used to monitor every cable for frame check sequence (FCS) errors. Optical cables can be degraded by human error, such as bending or bad placement, or simply by aging of the cable. The following figure shows examples of cable bending and a cable placed near fans, both of which can cause FCS errors on a link.

Figure 5. Examples of cable bending, and cable placed near the fans that can cause an FCS error on this link

We currently monitor every cable and allow only one error for every billion packets sent, and we plan to further reduce this threshold to ensure link quality for loss-sensitive traffic across millions of physical cables in each datacenter. If the cable has a higher error rate, we automatically shut down any links with these errors. After the cable is cleaned or replaced, Everflow is used to send guided probes to ensure that the link quality is acceptable.

Beyond the datacenter, supporting critical customer scenarios on the most reliable cloud requires observing network performance end-to-end from Internet endpoints. The Azure WAN evolved to include a service called the Map of the Internet that monitors Internet performance and customer experience in real time. This system can disambiguate expected client performance across wired and wireless connections, separate sustained issues from transient ones, and provide visibility into any customer perspective on demand. For example, it helps us answer questions like, “Are customers in Los Angeles seeing high RTT on AT&T?”, “Is Taipei seeing increased packet loss through HiNet to Hong Kong?”, and “Is Bucharest seeing reliability issues to Amsterdam?” We use this service to proactively and reactively intervene on impact or risks to customer experiences and quickly correlate them to the scenario, network, and location at fault. This data also triggers automated response and traffic engineering to minimize impact, or to mitigate ahead of time whenever possible.

Figure 6. Example of latency degradation alert with a peering partner

The innovations built to monitor our datacenters and their connectivity are also leveraged to provide insights to our customers.

Typically, customers use our network services via software abstractions. Such abstractions, including virtual networks, virtual network interface cards, and network access control lists, hide the complexity and intricacies of the datacenter network. We recently launched Azure Network Watcher, a service to provide visibility and diagnostic capability of the virtual/logical network and related network resources.

Using Network Watcher, you can visualize the topology of your network, understand performance metrics of the resources deployed in the topology, create packet captures to diagnose connectivity issues, and validate the security perimeter of your network to detect vulnerabilities and for compliance/audit needs.

Figure 7. Topology view of a customer network

The following figure shows how a remote packet capture operation can be performed on a virtual machine.

Figure 8. Variable packet capture in a virtual machine
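For readers who prefer scripting, the same kinds of operations can also be driven from Azure CLI 2.0. The following is a minimal sketch with illustrative resource group, VM, and storage account names; check the Network Watcher documentation for the exact flags supported by your CLI version:

# Enable Network Watcher for a region, then view the topology of a resource group
az network watcher configure --resource-group NetworkWatcherRG --locations westus2 --enabled true
az network watcher topology --resource-group MyResourceGroup

# Start a remote packet capture on a virtual machine (requires the Network Watcher VM extension)
az network watcher packet-capture create --resource-group MyResourceGroup --vm MyVM --name MyCapture --storage-account mystorageaccount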

Building and operating the world’s most reliable and hyper-scale cloud is underpinned by the need to proactively monitor and detect network anomalies and take corrective action, much akin to monitoring a living organism. As the pace, scale, and complexity of the datacenters evolve, new challenges and opportunities emerge, paving the way for continuous innovation. We’ll continue to invest in network monitoring and automatic recovery, while also sharing our innovations with customers to help them manage their virtual networks.

References

PingMesh: Guo, Chuanxiong, Lihua Yuan, Dong Xiang, Yingnong Dang, Ray Huang, Dave Maltz, Zhaoyi Liu, et al. "Pingmesh: A large-scale system for data center network latency measurement and analysis." ACM SIGCOMM Computer Communication Review 45, no. 4 (2015): 139-152.

Everflow: Zhu, Yibo, Nanxi Kang, Jiaxin Cao, Albert Greenberg, Guohan Lu, Ratul Mahajan, Dave Maltz, et al. "Packet-level telemetry in large datacenter networks." In ACM SIGCOMM Computer Communication Review, vol. 45, no. 4, pp. 479-491. ACM, 2015.

Read more

To read more posts from this series please visit:

Networking innovations that drive the cloud disruption
SONiC: The networking switch software that powers the Microsoft Global Cloud
How Microsoft builds its fast and reliable global network
Lighting up network innovation
Azure Network Security
Microsoft's open approach to networking

Source: Azure

Announcing Azure CLI Shell (Preview); more Azure CLI 2.0 commands now generally available

Following up on the generally available release of the VM, ACS, Storage, and Network commands in the new Azure CLI 2.0, today we are announcing a preview release of a new Azure CLI interactive shell, in addition to the general availability of the following command modules: ACR, Batch, KeyVault, and SQL.

Interactive Shell

Azure CLI 2.0 provides an idiomatic command line interface that integrates natively with Bash and POSIX tools. It is easy to install, use and learn. You can use it to run one command at a time, as well as to run automation scripts composed of multiple commands, including other BASH commands. To support this, commands are not interactive and will error out when provided with incomplete or incorrect input.

However, there are circumstances when you might prefer an interactive experience, such as when learning the Azure CLI’s capabilities, command structures, and output formats. Azure CLI Shell (az-shell) provides an interactive mode in which to run Azure CLI 2.0 commands. It provides autocomplete dropdowns and auto-cached suggestions, combined with on-the-fly documentation, including examples of how each command is used. Azure CLI Shell provides an experience that makes it easier to learn and use Azure CLI commands.

We invite you to install and use the new interactive shell for Azure CLI 2.0. You can use it in a Docker image we created, or install it locally on your Mac or Windows machine. It works with your existing Azure CLI installations, and you can use the commands side-by-side in az-shell or another command shell of your choice (Bash on macOS/Linux and cmd.exe on Windows).
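If you want to try the local installation route, a minimal sketch looks like the following; the pip package name reflects the preview release and may change as the shell evolves:

# Install the interactive shell (preview) next to an existing Azure CLI 2.0 installation
pip install --user azure-cli-shell

# Launch the interactive experience; inside it you can type commands without the "az" prefix
az-shell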

New commands now generally available

Continuing the momentum of our GA release of the first Azure CLI 2.0 command modules on February 27th, today we are also announcing that the following command modules are now generally available: Azure Container Registry, Batch, KeyVault, and SQL. With this GA release, you can use these commands in production with full support from Microsoft through our Azure support channels or on GitHub. We don’t expect any breaking changes for these commands in future releases of Azure CLI 2.0.

Azure Container Registry enables developers to create and maintain Azure container registries to store and manage private Docker container images. Using the acr commands in Azure CLI 2.0, you can create and manage these registries right from the command line. After you create a registry, you can use other CLI commands to assign a service principal to it, manage admin credentials, and list the repositories within it.
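For example, here is a minimal sketch of that workflow with illustrative names (the exact flags may vary slightly between CLI versions):

# Create a private registry, enable the admin account, then inspect credentials and repositories
az acr create --name myregistry --resource-group myResourceGroup --sku Basic
az acr update --name myregistry --admin-enabled true
az acr credential show --name myregistry
az acr repository list --name myregistry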

Azure Batch service provides an environment developers can use to manage their compute resources, and to schedule jobs to run with specific resources and dependencies. Using the batch commands in Azure CLI 2.0, you can create Azure Batch accounts, applications, and application packages in that account. You can also create jobs, tasks and job schedules to run at specific times, and manage (create, update, delete) them directly from the command line.
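As a brief sketch, assuming a pool named mypool already exists in the account (all names here are illustrative):

# Create and log in to a Batch account, then submit a job with a single task
az batch account create --name mybatchaccount --resource-group myResourceGroup --location westus2
az batch account login --name mybatchaccount --resource-group myResourceGroup
az batch job create --id myjob --pool-id mypool
az batch task create --job-id myjob --task-id task1 --command-line "echo hello"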

Azure Key Vault helps safeguard cryptographic keys and secrets used by cloud applications and services. Developers and security administrators can generate keys, store and access them, set policies, and monitor their usage using this service. Using the keyvault commands in Azure CLI 2.0, you can create/delete a key vault, manage certificates, policies, import and create new keys, and set secrets to key vaults.
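A minimal sketch of those operations, with illustrative names and placeholder values:

# Create a vault, store a secret, and create a software-protected key
az keyvault create --name mykeyvault --resource-group myResourceGroup --location westus2
az keyvault secret set --vault-name mykeyvault --name sqlConnectionString --value "<secret-value>"
az keyvault key create --vault-name mykeyvault --name mykey --protection software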

Azure SQL Database is a relational database-as-a-service using the Microsoft SQL Server engine. Using the SQL commands in Azure CLI 2.0, you can manage all aspects of this service from the command line: create/delete/update SQL server, create/delete/update databases and data warehouses and scale them individually by creating elastic pools and moving databases in and out of shared pools, etc.
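For instance, here is a minimal sketch of creating a server, a database, and an elastic pool, then moving the database into the pool; the names and password are placeholders, and exact command names may differ slightly by CLI version:

# Create a logical SQL server and a database, then add the database to an elastic pool
az sql server create --name myserver --resource-group myResourceGroup --location westus2 --admin-user myadmin --admin-password "<strong-password>"
az sql db create --name mydb --server myserver --resource-group myResourceGroup
az sql elastic-pool create --name mypool --server myserver --resource-group myResourceGroup
az sql db update --name mydb --server myserver --resource-group myResourceGroup --elastic-pool mypool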

In addition to the above commands being generally available, the new release also contains command modules for dev/test labs (lab) and monitoring (monitor) services that are now available in preview mode.

New features in Azure CLI 2.0

This release also contains some new features that will make working with the Azure CLI easier and more productive.

“az find” is a new command for searching Azure CLI 2.0 commands based on simple text. As the number of commands and the coverage of Azure services grow in Azure CLI 2.0, we recognize that it may become hard for developers to find the commands they need for specific tasks.

For example, the following command finds all Azure CLI 2.0 commands that contain the text “arm,” “template,” or “deploy.”

az find -q arm template deploy

`az monitor autoscale-settings get-parameters-template`
Scaffold fully formed autoscale-settings' parameters as json
template

`az group export`
Captures a resource group as a template.

`az group`
Manage resource groups and template deployments.

`az group deployment export`
Export the template used for the specified deployment.

`az group deployment create`
Start a deployment.

`az group deployment validate`
Validate whether the specified template is syntactically correct
and will be accepted by Azure Resource Manager.

`az vm capture`
Captures the VM by copying virtual hard disks of the VM and
outputs a template that can be used to create similar VMs.
For an end-to-end tutorial, see https://docs.microsoft.com/azure
/virtual-machines/virtual-machines-linux-capture-image.

`az keyvault certificate get-default-policy`
Get a default policy for a self-signed certificate
This default policy can be used in conjunction with `az keyvault
create` to create a self-signed certificate. The default policy
can also be used as a starting point to create derivative
policies. Also see: https://docs.microsoft.com/en-
us/rest/api/keyvault/certificates-and-policies

`az keyvault certificate create`
Creates a new certificate version. If this is the first version,
the certificate resource is created.
Create a Key Vault certificate. Certificates can also be used as a
secrets in provisioned virtual machines.

`az vm format-secret`
Format secrets to be used in `az vm create --secrets`
Transform secrets into a form consumed by VMs and VMSS create via
--secrets.

You can now set global defaults and scope for specific variables and resources that you need to use repeatedly within a command line session. You can set these defaults using the “az configure” command:

az configure --defaults group=MyResourceGroup

This sets the resource group to “MyResourceGroup” so that you don’t need to supply it as a parameter in subsequent commands that require this parameter. For example, you could then run the “vm show” command without explicitly specifying the resource group parameter:

az vm show -n MyLinuxVM

Name       ResourceGroup    Location
---------  ---------------  ----------
MyLinuxVM  MyResourceGroup  westus2

You can also specify multiple defaults by listing them as space-separated name=value pairs in the “az configure” command, and you can reset a default by simply setting an empty value in the same command.
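For example (the group and location values are illustrative):

# Set several defaults at once, then clear the location default by assigning an empty value
az configure --defaults group=MyResourceGroup location=westus2
az configure --defaults location=""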

Start using Azure CLI 2.0 today!

Whether you are an existing CLI user, or you are starting a new Azure project, it’s easy to get started with the CLI directly, or use the interactive mode to master the command line with our updated docs and samples.

Azure CLI 2.0 is open source and on GitHub.

In the next few months, we’ll provide more updates. As ever, we want your ongoing feedback! Customers using the now generally available commands in production can contact Azure Support for any issues, reach out via Stack Overflow using the azure-cli tag, or email us directly at azfeedback@microsoft.com. You can also use the "az feedback" command directly from within the CLI to send us your feedback.
Source: Azure

March 2017 Leaderboard of Database Systems contributors on MSDN

Many congratulations to the March 2017 Top-10 contributors!

Hilary Cotter and Alberto Morillo top the Overall and Cloud database lists again this month. Seven of the Overall Top-10 also featured in last month’s Overall Top-10.

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy, in decreasing order of points:

For questions related to this leaderboard, please write to leaderboard-sql@microsoft.com
Source: Azure

Faster and unrestricted: Pivotal Cloud Foundry 1.10 now supports .NET

No longer be held back; instead, go beyond your limits with distributed tracing, isolation segments, and a shared platform for all apps, both Java and .NET. The new PCF 1.10 provides Spring Cloud Sleuth, which can be used across many different apps and frameworks. Deployment complexity is lowered, and maintenance and infrastructure costs are cut, by tying each isolation segment to the same foundation, keeping roles and permissions in sync. Achieve greater efficiency as developers use their preferred framework. Dive into more details at Pivotal.
Source: Azure

Azure Analysis Services now available in West India

Last October we released the preview of Azure Analysis Services, which is built on the proven analytics engine in Microsoft SQL Server Analysis Services. With Azure Analysis Services you can host semantic data models in the cloud. Users in your organization can then connect to your data models using tools like Excel, Power BI, and many others to create reports and perform ad-hoc data analysis.

We are excited to share with you that the preview of Azure Analysis Services is now available in an additional region: West India. This means that Azure Analysis Services is now available in the following regions: Australia Southeast, Canada Central, Brazil South, Southeast Asia, North Europe, West Europe, West US, South Central US, North Central US, East US 2, West Central US, Japan East, West India, and UK South.

New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
Source: Azure

New search analytics for Azure Search

One of the most important aspects of any search application is the ability to show relevant content that satisfies the needs of your users. Measuring relevance requires combining search results with app-side user interactions, and it can be hard to decide what to collect and how to collect it. This is why we are excited to announce our new version of Search Traffic Analytics, a pattern for structuring, instrumenting, and monitoring search queries and clicks that provides you with actionable insights about your search application. You’ll be able to answer common questions, like which documents are clicked most or which common queries do not result in clicks, as well as gather evidence for other decisions, like judging the effectiveness of a new UI layout or of tweaks to the search index. Overall, this new tool will provide valuable insights that let you make more informed decisions.

Let’s look at a scoring profile example. Say you have a movies site and you think your users usually look for the newest releases, so you add a scoring profile with a freshness function to boost the most recent movies. How can you tell whether this scoring profile is helping your users find the correct movies? You will need information on what your users are searching for, the content that is being displayed, and the content that your users select. Once you have the data on what your users are clicking, you can create metrics to measure effectiveness and relevance.

Our solution

To obtain rich search quality metrics, it’s not enough to log the search requests; it’s also necessary to log data on what users are choosing as the relevant documents. This means that you need to add telemetry to your search application that logs what a user searches for and what a user selects. This is the only way to know what users are really interested in and whether they are finding what they are looking for. There are many telemetry solutions available, and we didn’t invent yet another one. We decided to partner with Application Insights, a mature and robust telemetry solution available for multiple platforms. You can use any telemetry solution to follow the pattern that we describe, but using Application Insights lets you take advantage of the Power BI template created by Azure Search.

The telemetry and data pattern consists of 4 steps:

1. Enabling Application Insights
2. Logging search request data
3. Logging users’ clicks data
4. Monitoring in Power BI Desktop

Because it’s not easy to decide what to log and how to use that information to produce interesting metrics, we created a clear schema to follow that immediately produces commonly requested charts and tables out of the box in Power BI Desktop. Starting today, you can access the easy-to-follow instructions in the Azure portal and the official documentation.

Once you instrument your application and start sending the data to your instance of Application Insights, you will be able to use Power BI to monitor the search quality metrics. Upon opening the Power BI Desktop file, you’ll find the following metrics and charts:

Clickthrough rate (CTR): ratio of users who click on a document to the number of total searches.
Searches without clicks: terms for top queries that register no clicks.
Most clicked documents: most clicked documents by ID in the last 24 hours, 7 days, and 30 days.
Popular term-document pairs: terms that result in the same document clicked, ordered by clicks.
Time to click: clicks bucketed by time since the search query.

 

Operational Logs and Metrics

Monitoring metrics and logs are still available. You can enable and manage them in the Azure Portal under the Monitoring section.

Enable Monitoring to copy operation logs and/or metrics to a storage account of your choosing. This option lets you integrate with the Power BI content pack for Azure Search as well as your own custom integrations.

If you are only interested in Metrics, you don’t need to enable monitoring as metrics are available for all search services since the launch of Azure Monitor, a platform service that lets you monitor all your resources in one place.

Next steps

Follow the instructions in the portal or in the documentation to instrument your app and start getting detailed and insightful search metrics.

You can find more information on Application Insights here. Please visit the Application Insights pricing page to learn more about the different service tiers.
Source: Azure

Integrating Application Insights into a modular CMS and a multi-tenant public SaaS

The Orchard CMS Application Insights module and DotNest case study

Application Insights has an active ecosystem with our partners developing integrations using our Open Source SDKs and public endpoints. We recently had Lombiq (one of our partners) integrate Application Insights into Orchard CMS and a multi-tenant public SaaS version of the same.

Here is a case study of their experience in their own words, by Zoltán Lehóczky, co-founder of Lombiq, Orchard CMS developer.

We have integrated Application Insights into a multi-tenant service in such a way that each tenant gets its own separate performance and usage monitoring. At the same time, we, the providers of the service, get overall monitoring of the whole platform. The code we wrote is open-source.

Adding Application Insights telemetry to an ASP.NET web app is easy, with just a few clicks in Visual Studio. But the complexity of monitoring needs increases when the web app is a feature-rich multi-tenant content management system (CMS) that can be self-hosted or offered as CMS as a service. In that case you need to build an integration that feels native to the platform by extending the Application Insights libraries. The aim is to give people the great analytical and monitoring capabilities of Application Insights, specific to the CMS platform, while keeping them just as easy to enable. This blog post explains some techniques and practices that are used in the Orchard CMS Application Insights module.

We at Lombiq Technologies are a .NET software services company from Hungary, with international clients including Microsoft itself. Orchard, an open-source ASP.NET MVC CMS started and still supported by Microsoft, is what we mainly work with; we have also built the public multi-tenant Orchard as a Service called DotNest. As a long-time Azure user, we learned about Application Insights when it was still very early in development and started to build an easy-to-use Orchard integration that can be utilized on DotNest. So, what experiences of ours are worth sharing?

The Application Insights Orchard module we developed is open source, so make sure to check it out on GitHub if you want to see more code! Everything discussed here is implemented there.

Using Application Insights in a modular multi-tenant CMS

Application Insights, as it is delivered “out of the box”, works easily for single-tenant applications, where it’s no issue that you need some root-level XML config files. However, if your code is a module that will be integrated into other people’s applications, like our Orchard CMS, then you want your code, including all the monitoring extensions, to be self-contained. We don’t want our clients to be exposed to configuration files at the application level. In short, we need to integrate Application Insights into our code to make a single, independently distributable MVC project. The distributed form might be a source repository or a zip file.

To package Application Insights into our code, we must:

Move Application Insights configuration to code—that is, do the same in C# that would normally be done in the XML config file.
Manage the lifetime of telemetry modules in code. Each module handles a different type of telemetry—requests, exceptions, dependencies, and so on. Normally, these modules are instantiated when the .config file is read, and have parameters set in the config file. (Learn more. Our code).
Instead of relying on static singletons, manage TelemetryClient and TelemetryConfiguration objects in a custom way. This allows the telemetry for separate tenants to be kept separate. (See for example this code)
Orchard uses log4net for logging. We can collect this data in Application Insights, but again we need to write code to configure ApplicationInsightsAppender instead of relying on the config files. (Code)
All good, so now we got rid of app-level XML configs. But what if we have multiple tenants in the same app? The default setup of Application Insights only has single-tenancy in mind, so we need to dig a bit deeper. (For the purpose of this post, “tenant” means a sub-application, a component within the application that maintains a high level of data isolation from other tenants.)

We can’t utilize the HttpModule that ships with Application Insights for request tracking, since that would require changes to a global config file (the Web.config) and wouldn’t allow us to easily switch request tracking on or off per tenant. Time to implement an Owin middleware and do request tracking with some custom code! Such middlewares can be registered entirely from code and can be enabled on a per tenant basis.
Since request tracking is done in our own way we also need to add an operation ID from code for each request. In Application Insights, Operation ID is used to correlate telemetry that occur as part of servicing the same request.
Let’s also add an ITelemetryInitializer that will add which tenant a piece of telemetry originates from. (Learn more. Code)
If everything is done we’ll end up with an Application Insights plugin that can be enabled and disabled from the Orchard admin site, separately for each tenant:

 

 

Adding some Orchardyness

So far so good, but the result still needs some more work to really be part of the CMS: There’s no place to configure it yet!

In Orchard, the site settings can be used for that. It’s easy to add some configuration options that admins can change from the web UI; these settings are on the level of a tenant. We’ve added a settings screen like this:

 

 

Note that calls to dependencies, like SQL queries, storage operations or HTTP requests to remote resources are tracked. However, since this generates a lot of data it’s possible to switch dependency tracking off.

Do note that some settings are either not possible to configure on a tenant level (and thus need to be app-level), or it doesn’t make sense to do so. For example, since log entries might not be tied to a tenant (but rather to the whole application), log collection is only available app-wide in our module (nevertheless, an additional tenant-level log collection would be possible). What you see above is the full configuration, which is only available on the “main” tenant.

Furthermore, we added several extension points for developers to hook into. So if you’re a fellow Orchard developer you can override the Application Insights configuration, add your own context to telemetry data or utilize event handlers (and Orchard-style events for that matter).

 

Making Application Insights available in a public SaaS

What we’ve seen until now was all the fundamental functionality that’s needed for a self-contained component monitored by Application Insights. However, in DotNest, where everyone can sign up, we need two distinct layers of monitoring by Application Insights:

We want detailed telemetry about the whole application, for our own use.
Users of DotNest tenants want to separately configure Application Insights and collect telemetry that they’re allowed to see, just for their tenants.
Users of DotNest thus don’t even see the original Application Insights configuration options, as those are managed on the level of the whole platform. However, they get another site settings screen where they can configure their own instrumentation key:

 

 

When such a key is provided, then another, second Application Insights configuration will be created on the tenant and used together with the platform-level one, providing server-side and client-side request tracking and error reporting. Thus, while we at Lombiq, the owners of the service see all data under our own Application Insights account, each user will also be able to see just their own tenant’s data in the Azure Portal as usual.

This tenant configuration is created and managed in the same way as the original one, from code.

 

Seeing the results

Once all of this is set up, we want to see what kind of data we gathered, and this happens as usual in the Azure Portal.

Live Metrics Stream

Live Metrics Stream provides real-time monitoring. We included the appropriate telemetry processor in our initialization chain. It includes system metrics like memory and CPU usage as well, and recently you don’t even need to install the Application Insights extension for an App Service to see these:

 

 

Tracing errors

But what if something goes wrong? Log entries are visible as Traces (standard log entries) or Exceptions (when exceptions are caught and logged) in the Azure Portal:

But remember that we’ve implemented an operation ID? The great thing once we have that is that events, exceptions, requests, and other data points are not just visible in isolation, but in context: using the operation ID, Application Insights is able to correlate telemetry data with other data points, for example to tell you the request in which an exception happened.

This makes it easier to find out how you can reproduce a problem that just happened in production.

Wrapping it up

All in all, if you need more than just to add Application Insights to your application with a single configuration, without the need to redistribute the integration, then you need to dig into the Application Insights libraries’ API. Now with the libraries being open source this is not much of an issue and you can fully configure and utilize them just by writing C#. With the Azure Application Insights Orchard module you even have a documented example of doing it.

So, don’t be afraid and code some awesome Application Insights integration! And if you just want to play with fancy graphs on the Azure Portal you can quickly create a free DotNest site and start gathering some data right away!

Source: Azure

Azure Data Factory March new features update

Hello, everyone! In March, we added a lot of great new capabilities to Azure Data Factory, including highly demanded features like loading data from SAP HANA, SAP Business Warehouse (BW), and SFTP, a performance enhancement for directly loading from Data Lake Store into SQL Data Warehouse, data movement support for the first region in the UK (UK South), and a new Spark activity for rich data transformation. We can’t wait to share more details with you; the following is a complete list of Azure Data Factory’s new features in March:

Support data loading from SAP HANA and SAP DW
Support data loading from SFTP
Performance enhancement of direct loading from Data Lake Store to Azure SQL Data Warehouse via PolyBase
Spark activity for rich data transformation
Max allowed cloud Data Movement Units increase
UK data center now available for data movement

Support data loading from SAP HANA and SAP Business Warehouse

SAP is one of the most widely used enterprise software platforms in the world. We have heard from you that it’s crucial for Microsoft to empower customers to integrate their existing SAP systems with Azure to unlock business insights. We are happy to announce that we have enabled loading data from SAP HANA and SAP Business Warehouse (BW) into various Azure data stores for advanced analytics and reporting, including Azure Blob storage, Azure Data Lake, Azure SQL DW, and more.

The SAP HANA connector supports copying data from HANA information models (such as Analytic and Calculation views) as well as Row and Column tables using SQL queries. To establish the connectivity, you need to install the latest Data Management Gateway (version 2.8) and the SAP HANA ODBC driver. Refer to SAP HANA supported versions and installation for more details.
The SAP BW connector supports copying data from SAP Business Warehouse version 7.x InfoCubes and QueryCubes (including BEx queries) using MDX queries. To establish the connectivity, you need to install the latest Data Management Gateway (version 2.8) and the SAP NetWeaver library. Refer to SAP BW supported versions and installation for more details.

For more information about connecting to SAP HANA and SAP BW, refer to Azure Data Factory offers SAP HANA and Business Warehouse data integration.

Support data loading from SFTP

You can now use Azure Data Factory to copy data from SFTP servers into various data stores in Azure or in on-premises environments, including Azure Blob storage, Azure Data Lake, Azure SQL DW, and more. A full support matrix can be found in Supported data stores and formats. You can author the copy activity using the intuitive Copy Wizard (screenshot below) or JSON scripting. Refer to the SFTP connector documentation for more details.

Performance enhancement of direct data loading from Data Lake Store to Azure SQL Data Warehouse via PolyBase

Data Factory Copy Activity now supports loading data from Data Lake Store to Azure SQL Data Warehouse directly via PolyBase. When using the Copy Wizard, PolyBase is by default turned on and your source file compatibility will be automatically checked. You can monitor whether PolyBase is used in the activity run details.

If you are currently not using PolyBase or staged copy plus PolyBase for copying data from Data Lake Store to Azure SQL Data Warehouse, we suggest checking your source data format and updating the pipeline to enable PolyBase and remove staging settings for performance improvement. For more detailed information, refer to Use PolyBase to load data into Azure SQL Data Warehouse and Azure Data Factory makes it even easier and convenient to uncover insights from data when using Data Lake Store with SQL Data Warehouse.

Spark activity for rich data transformation

Apache Spark for Azure HDInsight is built on an in-memory compute engine, which enables high performance querying on big data. Azure Data Factory now supports Spark Activity against Bring-Your-Own HDInsight clusters. Users can now operationalize Spark job executions through Spark Activity in Azure Data Factory.

Since a Spark job may have multiple dependencies, such as jar packages (placed in the Java CLASSPATH) and Python files (placed on the PYTHONPATH), you will need to follow a predefined folder structure for your Spark script files. For more detailed information about JSON scripting of the Spark activity, refer to Invoke Spark programs from Azure Data Factory pipelines.

Max allowed cloud Data Movement Units increase

Cloud Data Movement Units (DMUs) reflect the power of the copy executor used for your cloud-to-cloud copy. To copy multiple files with large volume from Blob storage, Data Lake Store, Amazon S3, cloud FTP, or cloud SFTP into Blob storage, Data Lake Store, or Azure SQL Database, higher DMUs usually give you better throughput. Now you can specify up to 32 DMUs for large copy runs. Learn more from cloud data movement units and parallel copy.

UK data center now available for data movement

Azure Data Factory data movement service is now available in the UK, in addition to the existing 16 data centers. With that, you can leverage Data Factory to copy data from cloud and on-premises data sources into various supported Azure data stores located in the UK. Learn more about the globally available data movement and how it works from Globally available data movement, and the Azure Data Factory’s Data Movement is now available in the UK blog post.

Above are the new features we introduced in March. Have more feedback or questions? Share your thoughts with us on the Azure Data Factory forum or feedback site; we’d love to hear more from you.
Source: Azure

Announcing General Availability of Europe-based Azure AD B2C directories

Since its general availability in July 2016, organizations around the world have been connecting with millions of customers through the scale, reliability and flexibility of Azure AD B2C. Taking a step further to help organizations comply with industry regulations and data protection laws, we are pleased to announce the general availability of Europe-based Azure AD B2C directories. Read more about Azure AD B2C’s region availability and data residency.

The directory placement is determined based on the country selected by the administrator when creating an Azure AD B2C directory in the Azure portal. If a European country is selected, the Azure AD B2C directory will reside in European datacenters. For the rest of the countries/regions, the directory will be placed in the closest location among the North American and European Azure datacenters.

What’s next

Continue expanding Azure AD B2C’s global presence with directories that reside in Asia-Pacific and China.
Deliver multi-language support to allow organizations to deliver experiences to their customers in their own language. If you’d like to try out this functionality and provide feedback, send us a note at aadb2cpreview@microsoft.com

Resources to get started

Visit the Azure AD B2C web page
Learn more through our documentation and samples
Get help on Stack Overflow using the azure-ad-b2c tag.
Let us know what you’d like to see in Azure AD B2C via our UserVoice forum
Tweet us (@azuread)

Source: Azure