Azure Site Recovery now supports managed disks

Azure Site Recovery (ASR) now supports managed disks. This follows the announcement of Azure’s support for managed disks in February. With the integration of managed disks into ASR, you can attach managed disks to your machines during a failover or migration to Azure. 

Managed disks provide the following advantages:

Simplified disk management for Azure IaaS VMs by removing the hassle of managing storage accounts for your machines after failover to Azure.
Improved reliability for Availability Sets by ensuring that the disks of the failed over VMs are automatically placed in different storage scale units (stamps) to avoid single points of failure.

To attach managed disks to your machine on a failover, set “Use managed disks” to “Yes” in the Compute and Network settings for the virtual machine as shown below.

 

Below are a few considerations to keep in mind when using this feature: 

Managed disks can be created only for virtual machines deployed using the Resource Manager deployment model.
Virtual machines with managed disks can only be part of availability sets with "Use managed disks" property set to "Yes". Learn more about managed disks and availability sets.
If the storage account used for replication was ever encrypted with Storage Service Encryption (SSE), creation of managed disks during failover will fail. In that scenario, you can either set "Use managed disks" to "No" in the Compute and Network settings for the virtual machine and retry the failover, or disable protection for the virtual machine and re-protect it to a storage account that has never had Storage Service Encryption enabled. Learn more about managed disks and Storage Service Encryption.
For Hyper-V VMs (whether or not they are managed by System Center VMM), set the option to use managed disks only if you intend to migrate your machine to Azure. This is because failback from Azure to an on-premises Hyper-V environment is not currently supported for machines with managed disks.
Data from on-premises VMs replicates to a target storage account in Azure, just as it does today. Managed disks are created and attached to your machine only on a failover to Azure.
Disaster recovery of Azure IaaS machines with managed disks is not currently supported; it will be made available in the future.

The latest Deployment Planner tool, version 1.3, supports managed disks. You can download the tool from the ASR Deployment Planner doc. For a complete understanding of how managed disks work, please refer to the detailed Managed Disks documentation.

Ready to start using ASR?

Check out additional product information to start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers. You can also use the ASR UserVoice to let us know what features you want us to enable next.

Azure Site Recovery, as part of Microsoft Operations Management Suite, enables you to gain control and manage your workloads no matter where they run, including Azure, AWS, Windows Server, Linux, VMware, or OpenStack, with a cost-effective, all-in-one cloud IT management solution. Existing System Center customers can take advantage of the Microsoft Operations Management Suite add-on, empowering them to do more by leveraging their current investments. Get access to all the new services that OMS offers, with a convenient step-up price for all existing System Center customers. You can also access only the IT management services that you need, enabling you to on-board quickly and have immediate value, paying only for the features that you use.
Source: Azure

Mesosphere DCOS, Azure, Docker, VMware and everything between – Architecture and CI/CD Flow

These days, I try to be involved in any discussion related to containers, DevOps, automation, and the like. Part of my role is to advise my customers on how to architect their container platforms and orchestration tools in Azure. But what happens when you have a chance to do something cool like architecting a solution which involves Mesosphere DC/OS, Azure Container Service, Azure Container Registry, Docker, and VMware vSphere?! Let’s find out…

In this first post of a multi-part blog series, I will describe the motivation behind the project, the requirements and constraints, the architecture, and of course the “how to” – let’s begin.

Motivation, Requirements & Constraints

The motivation for this one was pretty straightforward – start looking into Docker containers and integrate several applications with it.

Without going into too many details, I had one major constraint: the DevOps team had its production environment deployed on top of vSphere and its dev/QA/integration environments in Azure. Why is this a constraint, you might ask?! Well, you will soon find out.

As for requirements, these are the main ones:

Use Azure Container Service with DC/OS
Store Docker images in the same private container registry which will be used by all parties – dev/QA/integration/Prod
Unified containers orchestration platform across all stacks

Going back to the constraint for a second: the reason I consider this a constraint is that if production had also been in Azure, I wouldn’t have had to do anything with regard to vSphere and everything would have been purely “cloudish”.

Dev to Production CI/CD Flow

The continuous integration and deployment flow presented below goes as follows:

A developer does some coding on a container deployed locally on his workstation.
He then pushes an “Integration Ready” docker image to a private container registry.
The integration team pulls the image into a DC/OS cluster deployed on Azure to do some extra integration and testing work. Once done, a new “Production Ready” image is pushed to the container registry.
The “Production Ready” container is then pulled to the DC/OS production cluster deployed on vSphere.
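To make the promotion step concrete, the image-naming convention used across the stages can be sketched as a small helper. This is purely illustrative: the registry host, application name, and stage tags below are hypothetical, and in practice each promotion maps to a `docker tag` plus `docker push`.

```javascript
// Sketch of the image naming and tag-promotion flow described above.
// Registry host, app name, and stage tags are illustrative only.
function imageRef(registry, app, stage) {
  // e.g. "myregistry.azurecr.io/weather-api:integration-ready"
  return `${registry}/${app}:${stage}`;
}

function promote(ref, toStage) {
  // A promotion re-tags the same image for the next stage;
  // in practice this corresponds to `docker tag` + `docker push`.
  const [repo] = ref.split(":");
  return `${repo}:${toStage}`;
}

const integration = imageRef("myregistry.azurecr.io", "weather-api", "integration-ready");
const production = promote(integration, "production-ready");
console.log(integration); // myregistry.azurecr.io/weather-api:integration-ready
console.log(production);  // myregistry.azurecr.io/weather-api:production-ready
```

Because every environment (dev, QA/integration on Azure, production on vSphere) pulls from the same private registry, only the tag changes as an image moves through the pipeline.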

Architecture

Below is the infrastructure logical design for our deployment which will serve the process previously described. Please note that I will not touch the Visual Studio Team Services (VSTS) or the Team Foundation Server (TFS) in this series as I wanted to focus more on the infra side of things.

In the next blog post in this series, I will be talking about the DC/OS 1.9 deployment on top of vSphere. I went with the advanced way of doing things so I will share my knowledge around the configurations, caveats, and steps needed for such a deployment.
Source: Azure

Azure Marketplace Test Drive

Azure Marketplace provides a rich catalog of thousands of products and solutions from independent software vendors (ISVs) that have been certified and optimized to run on Azure. In addition to finding and deploying ISV products, customers often use Azure Marketplace as a learning tool to discover and evaluate products. One feature in Azure Marketplace that is especially useful for learning about products is “Test Drive.” Test Drives are ready-to-go environments that allow you to experience a product for free without needing an Azure subscription. An additional benefit of a Test Drive is that it is pre-provisioned – you don’t have to download, set up, or configure the product, and can instead spend your time evaluating the user experience, key features, and benefits of the product.

To get started with a Test Drive, follow this three-step process:

1. Visit the Test Drive page on Azure Marketplace.
2. Choose a Test Drive, sign in, and agree to the terms of use.
3. Once you complete the form, your Test Drive will start deploying, and in a few minutes you will get an email notification that the environment is ready. Just follow the instructions in the email, and you will be able to access a fully provisioned, ready-to-use environment.

Once provisioned, the Test Drive will be available for a limited time, typically a few hours. After the Test Drive is over, you will receive an email with instructions to purchase or continue using the product.

As you start thinking about your next DevOps tool or web application firewall, consider using Test Drives. It is easy, free, and the hands-on experience will help you make the right decision. Happy Test Driving.
Source: Azure

Gartner names Microsoft Azure as a leader in the Cloud IaaS MQ

As customers bet more and more on Cloud to drive digital transformation within their organizations, we’re seeing tremendous usage of Azure. Recently, Forbes reported a study done by Cowen and Company Equity Research, and stated that Microsoft Azure is the most used Public Cloud as well as most likely to be renewed or purchased. More than 90 percent of the Fortune 500 use Microsoft’s cloud services today. Large enterprises such as Shell, GEICO, CarMax and MetLife, as well as smaller companies like Medpoint Digital and TreasuryXpress are all leveraging Azure to fuel business growth and reinvent themselves. We strongly believe that the momentum we’re seeing has been possible because of what Azure offers and stands for – a comprehensive and secure Cloud platform across IaaS and PaaS, unparalleled integration with Office 365, unique hybrid experience with Azure Stack, first-class support for Linux and open-source tooling, and a robust partner ecosystem.

Today we’re delighted that Gartner has recognized Microsoft as a leader in their Cloud Infrastructure as a Service (IaaS) MQ for the fourth consecutive year. We’re excited that Gartner continues to recognize Microsoft for completeness of our vision and ability to execute in this key area.

While we’re honored by our placement in the Leaders quadrant for Cloud IaaS, we believe many of our customers choose Microsoft not just for our leadership in this area but because of our leadership across a broad portfolio of cloud offerings, spanning Software as a Service (SaaS) offerings like Office 365, CRM Online, and Power BI in addition to Azure Platform Services (PaaS). It’s the comprehensiveness of our cloud portfolio that gives customers the confidence that no matter where they are in their cloud adoption journey, they’re covered with a breadth of solutions for their problems instead of having to work with multiple vendors.

Here’s the list of cloud-related Gartner MQs where Microsoft is placed in the leader’s quadrant:

We look forward to continuously innovating and delivering across our portfolio of cloud offerings, and sincerely believe that every customer – whether big or small, new or seasoned to the Cloud, relying on open-source or otherwise –  has meaningful business value to gain from Azure. If you haven’t dug into Azure yet, here’s an easy way to do it!

If you’d like to read the full report, “Gartner: Magic Quadrant for Cloud Infrastructure as a Service,” you can request it here.

Disclaimer: Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner Magic Quadrants

Gartner Magic Quadrant for Cloud Infrastructure as a Service, Lydia Leong, Raj Bala, Craig Lowery, Dennis Smith, June 17, 2017
Gartner Magic Quadrant for Public Cloud Storage Services, Raj Bala, Arun Chandrasekaran, July 26, 2016
Gartner Magic Quadrant for Access Management, Gregg Kreizman, Anmol Singh, June 7, 2017
Gartner Magic Quadrant for Business Intelligence and Analytics Platforms, Rita Sallam, Cindi Howson, Carlie Idoine, Thomas Oestreich, James Laurence Richardson, Joao Tapadinhas, February 16, 2017
Gartner Magic Quadrant for the CRM Customer Engagement Center, Michael Maoz, Brian Manusama, May 8, 2017
Gartner Magic Quadrant for Data Management Solutions for Analytics, Roxane Edjlali, Adam Ronthal, Rick Greenwald, Mark Beyer, Donald Feinberg, February 20, 2017
Gartner Magic Quadrant for Enterprise Agile Planning Tools, Thomas E. Murphy, Mike West, Keith James Mann, April 27, 2017
Gartner Magic Quadrant for Horizontal Portals, Jim Murphy, Gene Phifer, Gavin Tay, Magnus Revang, October 17, 2016
Gartner Magic Quadrant for Mobile Application Development Platforms, Jason Wong, Van Baker, Adrian Lowe, Marty Resnick, June 12, 2017
Gartner Magic Quadrant for Operational Database Management Systems, Nick Heudecker, Donald Feinberg, Merv Adrian, Terilyn Palanca, Rick Greenwald, October 5, 2016
Gartner Magic Quadrant for Sales Force Automation, Tad Travis, Ilona Hansen, Joanne Correia, Julian Poulter, August 10, 2016
Gartner Magic Quadrant for Unified Communications, Bern Elliot, Megan Marek Fernandez, Steve Blood, July 13, 2016
Gartner Magic Quadrant for Web Conferencing, Adam Preset, Mike Fasciani, Whit Andrews, November 10, 2016

Source: Azure

Announcing public preview of Apache Kafka on HDInsight with Azure Managed disks

HDInsight set a firm goal of helping enterprises build secure, robust, scalable open source streaming pipelines on Azure. To meet this goal, a few months ago we announced a limited preview of Managed Kafka on Azure HDInsight. The addition of Kafka on HDInsight completes the ingestion piece for scalable open source streaming on Azure. In addition to the scale and performance benefits of Apache Kafka, HDInsight Kafka customers reap the following advantages:

The promise of a managed open source Kafka backed by a 99.9% uptime SLA

This includes installation, configuration, and management of open source components
HDInsight additionally provisions and monitors a Zookeeper quorum as part of the cluster shape.

Managed rebalance of replicas and partitions across Azure update domains and fault domains. This ensures high availability of Kafka partitions in environments with a multidimensional view of a rack. This tool is also open sourced here.
Security and compliance benefits of Azure and HDInsight, with certifications such as SOC and PCI DSS.
An integrated experience to deploy a managed and secure streaming pipeline (Kafka, Storm or Spark streaming) within minutes via prebuilt architectures on ARM templates.

Today we are pleased to announce the public preview of Apache Kafka with Azure Managed Disks on the HDInsight platform. Users can now deploy Kafka clusters with managed disks straight from the Azure portal, with no sign-up necessary. This allows for significantly higher scalability alongside much lower cost as workloads scale. This feature is discussed in more detail below.

Customer Success Stories: Toyota Connected

Over the last year, the HDInsight team has worked very closely with Toyota Connected to build one of the world’s largest and most distributed connected car streaming platforms. This platform processes millions of large events per day in production on HDInsight Kafka to unlock insights in real time. A platform at this scale was made possible by the secure, managed, and elastically scalable nature of HDInsight. The benefits are best explained by the Chief Product Owner of Toyota Connected below.

"Toyota manufactures millions of cars running globally, and building a connected car platform to process real-time data at Toyota scale is a monumental challenge. To process events at Toyota’s scale, technologies such as Kafka need to be leveraged. Since HDInsight is the only managed platform that provides Kafka as a managed service with a 99.9% SLA, Toyota was able to leverage the scalable technology of Kafka, Storm and Spark on Azure HDInsight. Using the HDInsight platform, we were able to deploy enterprise grade streaming pipelines to process events from millions of cars every second. This is just scratching the surface – the future of global connected cars on Azure HDInsight is bright, and we are excited for what's in store."   -Vijay Chemuturi, Chief Product Owner, Toyota Connected

A high-level view of the connected car architecture is depicted below. As Vijay states, this is just the beginning – we are very excited to build upon this powerful streaming platform in the upcoming months.

More details on architecting similar IoT scenarios will follow in upcoming series of blogs.

Integration of HDInsight Kafka with Azure Managed Disks

With this public preview, HDInsight Kafka is also releasing native integration with Azure Managed Disks.

Azure Managed Disks is a feature that abstracts away the storage account specification for the customer, allowing for an easier, managed route to using disks. Managed disks provide higher scale by removing the per-storage-account IOPS limitation, along with the ability to create hundreds of VMs from a given VHD in a centralized storage account. A disk can be either Premium (SSD) or Standard (HDD), and up to 1 TB in size. More information on this feature is located here.

Kafka is a high-throughput, low-latency messaging service that is I/O heavy. Prior to Azure Managed Disks, HDInsight Kafka’s original preview offering stored data on the largest persisted disk of the node. This meant that each node had a limitation of 1 TB. Given Kafka’s I/O-heavy nature, the disk would often become the bottleneck, and additional nodes had to be added just for more storage. This resulted in high cost, with gross underutilization of the CPU and memory on the cluster. With this release, we are introducing the HDInsight Kafka with Managed Disks feature, which is pictorially depicted below.

With this feature, one can have both persisted and scalable data, up to 16 TB per node. This allows for much lower cost, higher scalability, and better performance as workloads increase. Since the cost of a disk is a fraction of the cost of a node, the figure below shows how the number of nodes, and therefore the cost, scales down dramatically as storage needs increase.
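A quick back-of-the-envelope calculation makes the scaling effect concrete. This sketch assumes the figures from the text above: roughly 1 TB of storage per node before managed-disk support, versus up to 16 TB per node with it; the 96 TB workload is an arbitrary example.

```javascript
// Nodes needed to hold a given amount of Kafka data, before and after
// managed-disk support. Per-node capacities come from the text above:
// 1 TB per node previously, up to 16 TB per node with managed disks.
function nodesNeeded(totalTB, tbPerNode) {
  return Math.ceil(totalTB / tbPerNode);
}

const workloadTB = 96; // hypothetical storage-bound workload
console.log(nodesNeeded(workloadTB, 1));  // 96 nodes without managed disks
console.log(nodesNeeded(workloadTB, 16)); // 6 nodes with managed disks
```

Since disks cost a fraction of what nodes cost, cutting the node count by this factor is where the cost savings come from.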

This feature is automatically turned on, and taking advantage of it is simple – the user just needs to specify the number of disks to be attached to a given node. This can be done via the portal, or by specifying a single property in the ARM template, as shown in the figures below. Note that the type of disk – Premium or Standard – is determined by the type of VM chosen for the worker nodes: Premium disks are attached to DS- and GS-series VMs, whereas Standard disks are attached to all other VM types. End-to-end templates with examples of how to create these clusters are detailed in the next section. More information on this is located in our documentation.
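For reference, the disk count lives in the worker-node role definition of the cluster's ARM template. The fragment below expresses it as a JavaScript object so the shape is easy to see; treat the surrounding property names and values as a sketch rather than a complete, authoritative template.

```javascript
// Sketch of the worker-node role fragment of an HDInsight ARM template,
// expressed as a JavaScript object. "disksPerNode" is the single new knob
// described above; everything else is abbreviated and illustrative.
const workerNodeRole = {
  name: "workernode",
  targetInstanceCount: 4,
  hardwareProfile: { vmSize: "Standard_DS3_v2" }, // DS/GS series => Premium disks
  dataDisksGroups: [
    { disksPerNode: 8 } // number of managed disks attached to each worker node
  ]
};

console.log(workerNodeRole.dataDisksGroups[0].disksPerNode); // 8
```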

Disk Specification via Portal

Disk Specification via ARM Template

Start deploying and using Spark, Storm, and Kafka with Managed Disks on HDInsight within minutes

We have updated our documentation and samples to help you deploy scalable open source streaming solutions on HDInsight. Each of these examples walks through creating the clusters step by step and contains one-click-deploy ARM templates to enable powerful pipelines. We have additionally updated the Spark Streaming examples to include new examples for Structured Streaming, and for creating an end-to-end pipeline using Twitter, Kafka, and Spark Streaming.

Getting Started with Kafka for HDInsight
Deploy HDInsight Kafka + Spark streaming
Deploy HDInsight Kafka + Storm
Stream data from on-premises to HDInsight Kafka in the cloud
Stream tweets to HDInsight Kafka and process with Spark structured streaming

For any questions, suggestions or feedback, please do not hesitate to reach out to us via HDIFeedback@microsoft.com. We are really excited to have you onboard, and would love to hear from you.
Source: Azure

Announcing large disk sizes of up to 4 TB for Azure IaaS VMs

Azure increases the maximum size and performance of Azure Disks

We are excited to announce an increase in the maximum disk sizes for both Premium and Standard storage. This extends the maximum size of a disk from 1,024 GB to 4,095 GB and enables customers to add 4x more disk storage capacity per VM. Customers can now provision up to 256 TB of disk storage on a GS5 VM using 64 disks of 4 TB capacity each. As a result, customers no longer need to scale out to multiple VMs or stripe multiple disks to provision larger disk capacity.
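The quoted 256 TB maximum follows directly from the numbers above, as this small arithmetic check shows:

```javascript
// Maximum disk storage on a GS5 VM: 64 data disks, each up to 4,095 GB.
const disks = 64;
const gbPerDisk = 4095;
const totalGB = disks * gbPerDisk;          // 262,080 GB
const totalTB = Math.round(totalGB / 1024); // ~256 TB
console.log(totalGB, totalTB);
```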

Large disks are currently available in all Azure regions except the sovereign clouds, which include US Gov, US DoD, Germany, and China. We will have large disks available in the sovereign clouds in a few weeks.

To give customers the flexibility to provision a disk size appropriate to their workloads, we are introducing two new disk sizes, P40 (2 TB) and P50 (4 TB), for both Managed and unmanaged Premium Disks, and S40 (2 TB) and S50 (4 TB) for Standard Managed Disks. Customers can also provision up to the maximum disk size of 4,095 GB for Standard unmanaged disks.

 
                  Premium Disks   Standard Disks
Managed Disks     P40, P50        S40, S50
Unmanaged Disks   P40, P50        Up to 4,095 GB

The larger Premium Disks, P40 and P50, support I/O-intensive workloads and consequently offer higher provisioned disk performance. The maximum Premium Disk IOPS and bandwidth are increased to 7,500 IOPS and 250 MBps respectively. Standard Disks of all sizes offer up to 500 IOPS and 60 MBps.

 
                 P40        P50        S40             S50
Disk Size        2,048 GB   4,095 GB   2,048 GB        4,095 GB
Disk IOPS        7,500      7,500      Up to 500       Up to 500
Disk Bandwidth   250 MBps   250 MBps   Up to 60 MBps   Up to 60 MBps

You can create a larger disk or resize existing disks to the larger sizes with your existing Azure tools through Azure Resource Manager (ARM). We will light up Azure portal support for larger disks next week. To upload a VHD file of more than 1 TB as a page blob or unmanaged disk, use the latest released toolsets. Azure Backup and Azure Site Recovery support for larger disks is coming soon.

Smaller Premium Managed Disks (32 GB and 64 GB) for cost efficiency

We will also offer two smaller disk sizes, P4 (32 GB) and P6 (64 GB), for Premium Managed Disks. You can use these new smaller sizes to optimize cost in scenarios that require consistent disk performance but lower disk capacity, such as OS disks for Linux VMs. We already offer smaller disk sizes for Standard Managed Disks.

 
                 P4        P6
Disk Size        32 GB     64 GB
Disk IOPS        120       240
Disk Bandwidth   25 MBps   50 MBps

New Premium Managed Disks created after June 15th, 2017 with a disk size between 33 GB and 64 GB will be provisioned as P6 Premium Disks, and as P4 Premium Disks if the size is less than or equal to 32 GB. The change in disk creation behavior will gradually take effect in all Azure regions over the coming week. Your existing Premium Managed Disks with a disk size smaller than or equal to 64 GB deployed before June 15th, 2017 will stay at the P10 disk performance and pricing tier. You can also resize your disks to more than 64 GB to keep your disk performance at the P10 level.
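The provisioning rule can be summarized as a small size-to-tier lookup. This is a sketch: the P4/P6 boundaries come from the change described above, and the larger boundaries follow the P40/P50 size table earlier in this post plus the commonly published sizes for the intermediate tiers, which you should verify against current pricing documentation.

```javascript
// Map a requested Premium Managed Disk size (GB) to its performance tier.
// P4/P6 boundaries reflect the June 15th, 2017 change described above;
// intermediate tier boundaries (P10/P20/P30) are assumptions to verify.
function premiumTier(sizeGB) {
  if (sizeGB <= 32) return "P4";
  if (sizeGB <= 64) return "P6";
  if (sizeGB <= 128) return "P10";
  if (sizeGB <= 512) return "P20";
  if (sizeGB <= 1024) return "P30";
  if (sizeGB <= 2048) return "P40";
  return "P50";
}

console.log(premiumTier(32));   // "P4"
console.log(premiumTier(50));   // "P6"
console.log(premiumTier(4095)); // "P50"
```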

Currently, the new P4 and P6 Premium Disk sizes are only available for Managed Disks. We will soon release support for these smaller sizes for unmanaged Premium Disks. If you are not yet ready to migrate to Managed Disks, please stay tuned.

Pricing

You can visit the Managed Disk Pricing and unmanaged Disk Pricing pages for more details about large disks and smaller Premium Managed Disks pricing.

Getting started

Create new Managed Disks
Expand OS Disk
Expand Data Disk

Source: Azure

Solve Node.js issues faster with Application Insights for Node.js

Azure Application Insights is an application performance management (APM) platform which provides performance and diagnostic information about your running services and applications to help you discover and diagnose issues quickly. Use App Insights wherever your Node.js application runs: containers, PaaS, IoT, and even Electron desktop apps. Just follow the instructions to drop the Node.js SDK into your app and watch helpful information flow into the Azure portal in minutes.

APMs help you understand and act upon what’s happening in your application, so one of our missions is to automatically collect and display information to help you pinpoint issues. We hope the following capabilities, available in our latest SDK release (0.21.0), contribute to that.

 

Find related events

When pinpointing issues, reviewing all your traces and logs might be helpful, but it would be even more helpful to quickly filter to only those directly related to the problem at hand. For example, if your API service returns an error response, we’d like to help you quickly find all other traces related to that error. There’s a good chance some of them will lead you much closer to the source of the problem!

In this release we’ve made this correlation possible by including a shared correlation identifier in each App Insights item. Once you’ve drilled into the details view of any item, click “All available telemetry for this root operation” and instantly get a filtered view of related items based on correlation ID. For example, in the following screenshot a Node.js API request in turn leads to a MongoDB call, and the Request and Dependency items can be instantly viewed together.

Get started and learn more about Navigation and Dashboards in the Application Insights portal.

How it works

Instant filtering of correlated items in the Azure Portal requires adding the same correlation identifier (ID) to all related items sent by the SDK. That means we have to share that ID in the SDK between the original request context and the context of later operations like database and HTTP calls or sending a response. Sharing such data across callbacks and other asynchronous tasks in Node.js and JavaScript is challenging because JavaScript and Node.js don’t yet include a standard way to share context across callbacks, though several efforts are in progress.

Let’s illustrate the problem with a simple weather API service. A request to this API returns forecast details for a US zip code specified by an HTTP query parameter. The service itself gets these forecast details from the OpenWeatherMap HTTP API. The main handler follows and full code is in this gist.

const http = require('http');
const url = require('url');

function httpHandler (request, response) {

  // Pass `true` so the query string is parsed into an object.
  let parsed_url = url.parse(request.url, true);
  let zip = parsed_url.query.zip;
  let country = parsed_url.query.country || 'us';

  // `appid` (the OpenWeatherMap API key) is defined in the full gist.
  let query_string = `zip=${zip},${country}&APPID=${appid}`;
  http.get(`http://api.openweathermap.org/data/2.5/forecast?${query_string}`,
    weather_response => { weather_response.pipe(response) }
  );
}
http.createServer(httpHandler).listen(8080);

The topology of this app as discovered by App Insights’ App Map.

In this code, when Node’s HTTP server receives an incoming request, the App Insights SDK collects information about that request for you and specifies a correlation identifier (ID). When the associated HTTP request is then sent to the OpenWeatherMap API the SDK collects information about that request too including the previously specified correlation ID. When the response from the OpenWeatherMap API is received by the service and sent along to our original caller more events are sent by App Insights which include the original correlation identifier too.

To share such an identifier across all these operations and callbacks the App Insights SDK needs to access some storage shared across all of them. To meet this challenge, App Insights now utilizes the popular zone.js library by default to provide a persistent context to store and share a correlation identifier when a request starts, and retrieve and use it when other information is collected and sent to you. As a result, you, and the Portal on your behalf, can filter by this identifier to discover and fix problems in your app faster.
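To illustrate the idea (and only the idea – this is not how zone.js is implemented), here is a minimal hand-rolled "zone" that captures the active context when a callback is created and restores it when the callback later fires. zone.js does this wrapping transparently by patching Node's async APIs; here it is explicit so the mechanism is visible.

```javascript
// Minimal sketch of context propagation across async callbacks.
let currentContext = null;

// Run a function with a given context active, restoring the old one after.
function runInContext(context, fn) {
  const previous = currentContext;
  currentContext = context;
  try { return fn(); } finally { currentContext = previous; }
}

// Wrap a callback so that, whenever it later fires, it runs in the
// context that was active when it was created.
function bind(fn) {
  const captured = currentContext;
  return (...args) => runInContext(captured, () => fn(...args));
}

// Simulated request handling: each "request" gets a correlation ID that
// survives into its async continuation, even though the timers interleave.
const seen = [];
runInContext({ correlationId: "req-1" }, () => {
  setTimeout(bind(() => seen.push(currentContext.correlationId)), 10);
});
runInContext({ correlationId: "req-2" }, () => {
  setTimeout(bind(() => seen.push(currentContext.correlationId)), 5);
});
setTimeout(() => console.log(seen), 50); // ["req-2", "req-1"]
```

Every telemetry item created inside a bound callback can then read the correlation ID from the restored context, which is exactly what makes the "All available telemetry for this root operation" filtering possible.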

Find related events *across services*

With zone.js included, App Insights is able to get you to the root of a problem in a single service faster. However, your application may in fact consist of multiple services, and the root problem may lie in another one. Addressing this and providing a filtered view of related traces and logs across all your services takes another step, which we and the .NET team have begun to implement in this release.

We can now properly relate traces between multiple Node.js and .NET services, which communicate via HTTP when both utilize the same instrumentation key (ikey). As an example, in the following screens I’m investigating a failed request in an application with a Node.js API service which in turn invokes a .NET service. I drill in to one of these failures to find related events, where I find some events from the Node.js service and some from .NET. I’m quickly able to identify that an exception in the .NET service led to the failed Node.js API call!

Better App Map support

The correlation work described above also allows us to better represent your app’s topology in App Insights’ App Map. The same Node.js and .NET app mentioned above is represented in App Map as follows, with the Node.js API as “appinsights-node-02” and the .NET service as “api-11”. The error (“!”) icon in the api-11 node could serve as another entry point for the investigation described in the previous section.

We continue to experiment with details provided by App Map and would like your feedback on what’s needed to meet your needs.

Info from third-party modules

Next, we know how much value you derive from Node.js’s massive module ecosystem. To truly help you pinpoint problems you’ll often need insight into third-party modules too. For example, if a MongoDB call fails, it would help to know what message was sent to the Mongo service and what error was received.

App Insights provides a user API you can use to trace this activity yourself, but with this release we begin collecting this information for you automatically for MongoDB, MySQL, and Redis, as well as the Bunyan logging framework and console APIs. So now more detail about that failed database call is automatically available to you.

For example, you may have noticed in an earlier example that a Node.js API call which led to a MongoDB call actually listed two calls to Mongo. Drilling in helped me determine that both find and getMore commands were sent, as shown in the following screenshot.

Since maintaining module patches for third-party modules is hard to do consistently and reliably, we’ve published these patches and the mechanism we use to utilize them as the open source node-diagnostic-channel project on GitHub. We’re working on sharing this project with Project Glimpse and would love feedback from other tool providers and module authors on how we can collaborate.
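Conceptually the patching mechanism is simple: wrap a module's functions so that each call publishes an event to any subscribers, while the original behavior is preserved. The stripped-down sketch below is in the spirit of node-diagnostic-channel; all the names here are illustrative, not the project's actual API.

```javascript
// Stripped-down publish/subscribe patching sketch, in the spirit of
// node-diagnostic-channel. Names are illustrative only.
const subscribers = [];
function subscribe(fn) { subscribers.push(fn); }
function publish(event) { subscribers.forEach(fn => fn(event)); }

// Patch a method on an object so every call is reported to subscribers,
// then delegated to the original implementation.
function patch(target, methodName) {
  const original = target[methodName];
  target[methodName] = function (...args) {
    publish({ method: methodName, args });
    return original.apply(this, args);
  };
}

// Example: observe calls on a fake "db client".
const fakeDb = { find: query => [`result for ${query}`] };
patch(fakeDb, "find");

const events = [];
subscribe(e => events.push(e));
console.log(fakeDb.find("zip=98052")); // still returns the original result
console.log(events[0].method);         // "find"
```

An APM SDK subscribing to such a channel can turn each published event into a dependency or trace item without the module itself knowing anything about telemetry.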

Sampling

Last but not least, this release also includes support for percentage-based sampling so you can reduce the amount of data sent to your App Insights resource and thereby reduce costs. Don’t worry, our sampling algorithm is sensitive to the correlation work described above, so even if you’ve enabled sampling we’ll still send all related events from a sampled request.

To enable sampling, specify a percentage before starting the client as follows:

const appInsights = require("applicationinsights");
appInsights.setup("<instrumentation key>");
appInsights.client.config.samplingPercentage = 33;
appInsights.start();
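One way to get the correlation-aware behavior described above is to make the sampling decision a deterministic function of the operation's correlation ID, so every event sharing an ID is kept or dropped together. The sketch below illustrates that approach; it is not the SDK's actual algorithm.

```javascript
// Deterministic, correlation-aware sampling sketch: the verdict depends
// only on the correlation ID, so all events of one operation are sampled
// together. Illustrative only, not the App Insights SDK's algorithm.
function hashToPercent(id) {
  let hash = 0;
  for (const ch of id) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple unsigned rolling hash
  }
  return hash % 100; // bucket in [0, 100)
}

function isSampledIn(correlationId, samplingPercentage) {
  return hashToPercent(correlationId) < samplingPercentage;
}

// Every event of one operation gets the same verdict:
const a = isSampledIn("op-1234", 33);
const b = isSampledIn("op-1234", 33);
console.log(a === b); // true
```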

Conclusion

For all the details on these and other updates, see the changelogs for v0.20.0, v0.20.1, and v0.21.0.

Our goal is to help you quickly discover and diagnose performance and functional issues in your Node.js services and applications. We’d love your feedback here and in GitHub on how we’re doing and what’s most important to you. Thanks!
Source: Azure

May 2017 Leaderboard of Database Systems contributors on MSDN

Congratulations to the May 2017 top-10 contributors! Hilary Cotter and Alberto Morillo continue to top the Overall and Cloud Database lists, respectively, for the fourth successive month.

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):

For questions related to this leaderboard, please write to leaderboard-sql@microsoft.com.
Source: Azure

Microsoft joins Cloud Foundry Foundation

Since we first launched Azure Virtual Machines, I have had the pleasure of working with fantastic community partners and customers. We have built new open-source technologies and have made numerous community contributions. Making Azure an open, flexible, and portable platform takes a lot of friends.

However, we aren’t done. Far, far from it. Today, at the Cloud Foundry Summit in Santa Clara, I am honored to join Abby Kearns, executive director of the Cloud Foundry Foundation, on stage to announce that we have joined the Cloud Foundry Foundation as a Gold Member. Cloud Foundry on Azure has seen a lot of customer success, enabling cloud migration with application modernization while still offering an open, portable, and hybrid platform. The partnership with the Cloud Foundry Foundation extends our commitment to deep collaboration and innovation in the open community. We remain committed to creating a diverse and open technology ecosystem, offering you the freedom to deploy the application solution you want on the cloud platform you prefer.

In addition to joining the Cloud Foundry Foundation, we are also extending Cloud Foundry integration with Azure. This includes back-end integration with Azure Database (PostgreSQL and MySQL) and cloud broker support for SQL Database, Service Bus, and Cosmos DB. We even included the Cloud Foundry CLI in the tools available in the Cloud Shell for easy CF management in seconds. Here are some additional details on the integration offered between Azure and Cloud Foundry.

Enabling the most comprehensive Cloud Foundry support

It has been really exciting working with the community to bring together two thriving ecosystems, offering support for Azure tools and frameworks with Cloud Foundry. In fact, as we develop new services and capabilities in Azure, we offer Cloud Foundry integration from the first day of preview. Here are two examples of exciting integration with announcements from our Microsoft Build developers conference last month:

Cloud Foundry CLI in Azure Cloud Shell – The Azure Cloud Shell embedded in the Azure portal puts a fully featured Bash shell at your fingertips on any device with a browser. Today, we’re pleased to announce that we’ve added the Cloud Foundry CLI to the list of tools installed in the Cloud Shell by default.
Support for Azure Database for MySQL and Azure Database for PostgreSQL services – With the new Azure Database offerings, you can now back your CF environment with a store that is fully managed, with automatic scaling and backup built in.

Here are a few other investments we have made to bring together the Azure platform with the Cloud Foundry platform:

Azure Cloud Provider Interface – The Azure CPI provides integration between BOSH and the Azure infrastructure, including the VMs, virtual networks, and other infrastructural elements required to run Cloud Foundry. The CPI is continually updated to take advantage of the latest Azure features, including supporting Azure Stack.
Azure Meta Service Broker – The Azure meta service broker provides Cloud Foundry developers with an easy way to provision and bind their applications to some of our most popular services, including Azure SQL, Azure Service Bus, and Azure Cosmos DB.
Visual Studio Team Services plugin – The Cloud Foundry plugin for Visual Studio Team Services (VSTS) provides rich support for building continuous integration/continuous delivery (CI/CD) pipelines for CF, including the ability to deploy to a CF environment from a VSTS hosted build agent, allowing teams to avoid managing build servers. And of course, the plug-in is open-source.
Microsoft Operations Management Suite Log Analytics – Integration with Log Analytics in OMS allows you to collect system and application metrics and logs for monitoring your CF Application.
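The provision-and-bind workflow the Azure Meta Service Broker enables looks like a standard Cloud Foundry service session. The session below is a hypothetical sketch; the service and plan names are illustrative, so run `cf marketplace` to see what your broker actually registers:

```shell
# Hypothetical session using the Azure Meta Service Broker from the CF CLI.
cf marketplace                                 # list services the broker exposes
cf create-service azure-sqldb basic my-sql-db  # provision an Azure SQL database (illustrative names)
cf bind-service my-app my-sql-db               # inject credentials into the app's VCAP_SERVICES
cf restage my-app                              # restage so the app picks up the binding
```

After binding, the application reads its connection details from the `VCAP_SERVICES` environment variable, the standard Cloud Foundry mechanism for delivering service credentials.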

Open Service Broker

The Azure team has been deeply involved in enabling the Open Service Broker API ecosystem in Kubernetes and making it easier for developers to use Azure services through the Service Catalog, as part of an effort that started with Deis. This broker strives to enable a standard interface for connecting cloud native platforms with application platforms like Cloud Foundry and Kubernetes. With Deis joining the Azure team, I am excited to announce Microsoft formally joining this Open Service Broker working group, a core initiative of the Cloud Foundry Foundation. Working with this group, I hope we can accelerate the efforts to standardize the interface for connecting cloud native platforms, offering you even more multi-cloud and multi-platform portability.

Extending Choice for our Customers and Partners

Many of the largest enterprises, including Ford, Manulife, and Merrill, have chosen Cloud Foundry to help solve complex business challenges and have looked to Azure as the leading enterprise cloud on which to run it. Furthermore, we already work closely with many partners in the Cloud Foundry community, including Pivotal, SAP (SAP Cloud Platform), and GE. These announcements reinforce our support for and excitement to work with these partners in this growing community. With our joining of the Cloud Foundry Foundation and the capabilities listed above, I hope you find Azure offers the best place for deploying portable and open Cloud Foundry applications without any lock-in.

Register for a webinar on Cloud Foundry on Azure to learn more. I look forward to seeing what you build!

See ya around,

Corey
Source: Azure