How Azure Security Center detects DDoS attack using cyber threat intelligence

Azure Security Center automatically collects, analyzes, and integrates log data from a variety of Azure resources. A list of prioritized security alerts is shown in Security Center, along with the information you need to quickly investigate the problem and recommendations for how to remediate an attack. In addition, a team of security researchers and experts often works directly with customers to gain insight into security incidents affecting Microsoft Azure customers, with the goal of constantly improving Security Center detection and alerting capabilities.

In the previous blog post "Azure Security Center adds Context Alerts to aid threat investigation," Ian Hellen described the context alerting feature that helps to automate security investigation and delivers relevant context about what else happened on the system during and immediately before an attack. In this blog post, we will focus on a real-world DDoS attack campaign and how it was detected using cyber threat intelligence.

Before we get into the details of our investigation, let’s quickly explain some terms that you’ll see throughout this blog. So, what is DDoS? DDoS (Distributed Denial of Service) is a collection of attack types aimed at disrupting the availability of a target. These attacks involve a coordinated effort that uses multiple Internet-connected systems to launch many network requests against targets such as DNS servers, web services, e-mail, and others. The attacker’s goal is to overwhelm system resources on the targeted servers so that they can no longer process legitimate traffic effectively, making the system inaccessible. Another term is “Brute Force” which is a type of attack that attempts to calculate or guess valid username/password combinations to gain unauthorized access to a computer host. Oftentimes, the sheer amount of Brute Force attempts can effectively result in DDoS of the targeted system.
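The guessing loop at the heart of a dictionary-style brute force attack can be sketched in a few lines of shell. This toy example uses an illustrative wordlist and a locally simulated target password (matching the one seen later in this incident); real tools send each guess to a network service such as RDP:

```shell
# Toy dictionary attack: try candidate passwords against a locally
# simulated target (real attacks hammer a remote service like RDP).
target='lman321'      # illustrative; matches the password seen later
hit=''
for guess in 123456 password admin lman321; do
  if [ "$guess" = "$target" ]; then
    hit="$guess"
    echo "valid credential found: $guess"
    break
  fi
done
```

Defenders see the failed iterations of exactly this loop as the "failed brute force" alerts described below.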

Initial Azure Security Center alert details

We began our initial investigation when Azure Security Center raised a series of "Failed RDP Brute Force Attack" alerts, followed immediately by a "Successful RDP Brute Force Attack" alert. Around the same time, we also observed consistent "RDP Incoming BF Many to One" and "RDP Incoming BF One to One" attack alerts in Azure Security Center. These attacks appeared to originate from roughly 79-85 unique IP addresses periodically targeting the RDP service.

Below we see this series of alerts in Azure Security Center:

Azure Security Center also provides a threat intelligence report on alerts, offering detailed insight into the attack techniques being used:

After the successful brute force attack, our deeper investigation revealed that the attackers first created three new user accounts, all with the same password, 'lman321':

'administrator'
'admin'
'adminserver'

Later, Azure Security Center detected that the attackers had executed processes associated with an unknown binary, 'wrsd.exe', running from the user account's %temp% directory.
Once downloaded, we observed wrsd.exe running the whoami command, which displays the currently logged-on domain\user account.
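For reference, whoami itself is trivial to try: on a domain-joined Windows host it prints the account as DOMAIN\user, while on Linux it prints just the bare user name:

```shell
# Print the identity of the currently logged-on user. Attackers run this
# early to confirm which account (and domain) they have compromised.
current_user=$(whoami)
echo "$current_user"
```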
Attackers then changed the registry key below to bypass Network Level Authentication (NLA) and reach a generic RDP window, so that they could log in from any Windows RDP client.

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v UserAuthentication /t REG_DWORD /d 0 /f

Attackers then deleted Terminal Services registry values related to the display of LegalNoticeCaption and LegalNoticeText. These registry values enable and configure the custom legal notices and startup messages that Windows displays to all remote RDP users at logon. Attackers typically delete these LegalNotice values because the notice UI can break or interrupt the attacker's automation.

reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v legalnoticecaption /f

reg delete "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v legalnoticetext /f

The parent process then launches commands to terminate any running Chrome or Firefox processes using taskkill.exe with the /f (force) option. Note that taskkill.exe is a program used to end one or more tasks or processes, identified by process ID or image name; the /f parameter forces the processes to terminate.
After killing the processes, we see the following:

Attackers first attempt to log off using the "shutdown /l /f" command. The /l switch indicates a logoff, while the /f switch forces running applications to close.
This is followed by the "ping -n 3 127.0.0.1" command, pinging localhost three times, which appears to be used to insert a delay of about three seconds, since each ping takes roughly a second.
Finally, we see the attacker logging off using the "logoff" command.
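The ping trick is worth a closer look: classic Windows batch has no built-in sleep, so pinging localhost a few times is a common way to pause a script. The same delay can be sketched portably, with sleep standing in for the pings:

```shell
# Measure a deliberate pause, as the attacker's "ping -n 3 127.0.0.1"
# does on Windows; sleep stands in for the pings so this runs anywhere.
start=$(date +%s)
sleep 2
end=$(date +%s)
elapsed=$((end - start))
echo "waited ${elapsed}s"
```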

Within an hour of compromise, Azure Security Center used Microsoft’s threat intelligence to detect that the compromised subscription was likely being used as a shadow server to perform outgoing DNS amplification attacks. 

DNS amplification attacks are a popular form of DDoS attack that usually involves two steps. First, attackers spoof the source IP address of their DNS queries, substituting the victim's IP address, so that all DNS replies are sent to the victim's servers. Second, attackers discover an Internet domain that is registered with many DNS records.

Attackers will then send DNS queries that request the entire set of DNS records for that domain. The DNS server’s response is usually so large that it floods the target with large quantities of packets.
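The appeal of the technique is the size ratio between query and response. With illustrative byte counts (assumptions, not measurements from this incident), the amplification factor works out as:

```shell
# Amplification factor = response size / query size. The byte counts
# below are illustrative assumptions, not measured values.
query_bytes=60        # small spoofed "ANY" query
response_bytes=3000   # large response carrying many DNS records
factor=$((response_bytes / query_bytes))
echo "amplification factor: ${factor}x"
```

A single compromised host can therefore direct far more traffic at the victim than it sends itself.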

Considering the high severity and priority of cases like these, our team of security researchers and experts immediately reached out to the customer and worked with their security team to identify the threat, performing forensic investigative steps to ascertain what activities took place on the victim host, the scope of the intrusion, and the motives behind it. Further remediation steps were also taken to prevent continued exposure and the possibility of further compromise in the customer's network. All the recommended actions taken are explained in detail in the remediation and mitigation section below.

Recommended remediation and mitigation steps

The initial compromise stemmed from a successful RDP brute force attack that led to complete compromise of the machine, which was then used for a DNS amplification DDoS attack. In this case, the host was being used for nefarious purposes. Microsoft recommends investigating the source of the initial compromise via a review of available log sources, host-based analysis, and if needed, forensic analysis to help build a picture of the compromise. In the case of Azure ‘Infrastructure as a Service’ (IaaS) virtual machines (VMs), several features are present to facilitate the collection of data including the ability to attach data drives to a running machine and disk imaging capabilities. Microsoft also recommends performing a scan using malware protection software to help identify and remove any malicious software running on the host. If lateral movement has been identified from the compromised host, remediation actions should extend to these hosts.

In cases where the victim host cannot be confirmed clean, or a root cause of the compromise cannot be identified, Microsoft recommends backing up critical data and migrating to a new virtual machine. Additionally, new or remediated hosts should be hardened prior to being placed back on the network to prevent reinfection. However, with the understanding that this sometimes cannot be done immediately, we recommend implementing the following remediation/preventative steps:

Password Policy: Attackers usually launch brute-force attacks using widely available tools that utilize wordlists and smart rulesets to intelligently and automatically guess user passwords. So, the first step is to ensure that complex passwords are used for all VMs. A complex password policy that enforces frequent password changes should be in place. Learn more about the best practices for enforcing password policies.
Endpoints: Endpoints allow communication with your VM from the Internet. When creating a VM in the Azure environment, two endpoints are created by default to help manage the VM: Remote Desktop and PowerShell. It is recommended to remove any endpoints that are not needed and to only add them when required. Should you have an endpoint open, it is recommended to change the public port that is used whenever possible. When creating a new Windows VM, by default the public port for Remote Desktop is set to "Auto", which means a random public port is generated for you automatically. Get more information on how to set up endpoints on a classic Windows virtual machine in Azure.
Enable Network Security Group: Azure Security Center recommends that you enable a network security group (NSG) if it’s not already enabled. NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your VM instances in a Virtual Network. An endpoint ACL allows you to control which IP address, or CIDR subnet of addresses, you want to allow access over that management protocol. Learn more about how to filter network traffic with network security groups and enable Network Security Groups in Azure Security Center.
Using VPN for management: A VPN gateway is a type of virtual network gateway that sends encrypted traffic across a public connection to an on-premises location. You can also use VPN gateways to send encrypted traffic between Azure virtual networks over the Microsoft network. To send encrypted network traffic between your Azure virtual network and on-premises site, you must create a VPN gateway for your virtual network. Both Site-to-Site and Point-to-Site gateway connections allow you to completely remove public endpoints and connect directly to the virtual machine over a secure VPN connection.

To learn more about Azure Security Center, see the following:

Azure Security Center’s detection capabilities
Managing and responding to security alerts in Azure Security Center
Managing security recommendations in Azure Security Center
Security health monitoring in Azure Security Center
Monitoring partner solutions with Azure Security Center
Azure Security Center FAQ
Get the latest Azure security news and information by reading the Azure Security blog.

Quelle: Azure

Missed Azure OpenDev? Watch the videos on-demand now!

On Wednesday, June 21, Microsoft hosted the first ever Azure OpenDev virtual event, and I was blown away by the community support and response! The event was only made possible with the amazing support of partners such as Canonical, Red Hat, Docker, Pivotal, and Chef. OpenDev brought to life what’s possible with open source in the cloud based on experiences from our partners, customers, and community members from around the world.

Nearly one million people have already watched the event live or on-demand. If you have not participated yet, watch all the Azure OpenDev sessions now, for free and on-demand!

Microsoft’s open source strategy

I kicked off Azure OpenDev by sharing Microsoft’s strategy for open source, bringing to light facts and little-known statistics around our usage of and contribution to open source software. I highlighted some of our most recent open source related announcements, such as managed services for MySQL and PostgreSQL on Azure, a new open source Kubernetes tool called Draft, and joining the Cloud Foundry Foundation.

Building on the latter, I was fortunate to have Abby Kearns, Executive Director of the Cloud Foundry Foundation, join me via video-conference during OpenDev to explain the role of the foundation, how they work with cloud vendors, and her perspective on Microsoft joining the foundation.

Learning from thought leaders

Next, some of our partners shared their point of view about open source in the cloud, microservices development, containers, and DevOps using open technologies.

Scott Johnston, COO at Docker, demonstrated how Docker can help modernize traditional applications and bring them to the cloud, in addition to announcing support for Docker Community Edition in Azure Container Service.
Mark Shuttleworth, founder of Ubuntu and Canonical, showcased Canonical Kubernetes and large-scale distributed systems on Azure.
Joshua McKenty of Pivotal and Rick Clark of Mastercard talked about business transformation through the adoption of cloud-native patterns for Java applications on Pivotal Cloud Foundry, powered by Azure.
Nicholas Gerasimatos from Red Hat presented the Red Hat + Microsoft partnership and OpenShift running on Azure.
Nell Shamrell-Harrington from Chef presented application automation with Chef Habitat.

Mark Shuttleworth, Founder of Canonical and Ubuntu, demoing Canonical Kubernetes

Joshua McKenty of Pivotal and Rick Clark of Mastercard discussing cloud native apps

Some Microsoft speakers also presented their experiences with open source technologies.

Gabe Monroy and Michelle Noorali, who recently joined Microsoft through the Deis acquisition, presented Helm and Draft, two open source tools to help manage Kubernetes.
Kaspars Mickevics, engineering manager from the Skype team in Estonia, showed how they’re using Azure to run the massive-scale Debian systems that power the VoIP solution worldwide, as he shared some practical, technically deep learnings.

Try it yourself with the how-to videos

We published how-to videos to help you quickly get started and experience open source technologies on Azure.

Joe Binder, Principal Product Manager in the Azure team, showed how to run a Spring Boot (Java) app on Azure Web Apps and on Azure Container Service.
Matt Hernandez, Senior Program Manager, demonstrated the end-to-end story for running a MEAN (Node.js) app on Azure with Visual Studio Code and Cosmos DB (a MongoDB drop-in replacement).

Be sure to check them out and try the demo on your own! A special thank you to all of our speakers, our viewers, and the team that made this event possible and such a success.

See you at the next OpenDev in October

While this was the first edition of OpenDev, it’s definitely not the last! We’re working to make Azure OpenDev a recurring event, three times per year, and I’m pleased to announce that the next one will be in October! Sign up for updates and stay tuned for more announcements.
Quelle: Azure

Super Charge Power BI with Azure Analysis Services

In April we announced the general availability of Azure Analysis Services, which evolved from the proven analytics engine in Microsoft SQL Server Analysis Services. The success of any modern data-driven organization requires that information is available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required, including finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics, before they can explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users so that all they need to do is connect to the model from any BI tool and immediately explore the data and gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

In this video, I show how you can migrate your Power BI models to Azure Analysis Services. I also touch on the new features that have recently been released and what is coming next.

Learn more about Azure Analysis Services.
Quelle: Azure

Design patterns for microservices

The AzureCAT patterns & practices team has published nine new design patterns on the Azure Architecture Center. These nine patterns are particularly useful when designing and implementing microservices. The increased interest in microservices within the industry was the motivation for documenting these patterns.

The following diagram illustrates how these patterns could be used in a microservices architecture.

For each pattern, we describe the problem, the solution, when to use the pattern, and implementation considerations.

Here are the new patterns:

Ambassador can be used to offload common client connectivity tasks such as monitoring, logging, routing, and security (such as TLS) in a language agnostic way.
Anti-corruption layer implements a façade between new and legacy applications, to ensure that the design of a new application is not limited by dependencies on legacy systems.
Backends for Frontends creates separate backend services for different types of clients, such as desktop and mobile. That way, a single backend service doesn’t need to handle the conflicting requirements of various client types. This pattern can help keep each microservice simple, by separating client-specific concerns.
Bulkhead isolates critical resources, such as connection pool, memory, and CPU, for each workload or service. By using bulkheads, a single workload (or service) can’t consume all of the resources, starving others. This pattern increases the resiliency of the system by preventing cascading failures caused by one service.
Gateway Aggregation aggregates requests to multiple individual microservices into a single request, reducing chattiness between consumers and services.
Gateway Offloading enables each microservice to offload shared service functionality, such as the use of SSL certificates, to an API gateway.
Gateway Routing routes requests to multiple microservices using a single endpoint, so that consumers don't need to manage many separate endpoints.
Sidecar deploys helper components of an application as a separate container or process to provide isolation and encapsulation.
Strangler supports incremental migration by gradually replacing specific pieces of functionality with new services.

The goal of microservices is to increase the velocity of application releases, by decomposing the application into small autonomous services that can be deployed independently. A microservices architecture also brings some challenges, and these patterns can help mitigate these challenges. We hope you will find them useful in your own projects. As always, we greatly appreciate your feedback.
Quelle: Azure

Handling data encoding issues while loading data to SQL Data Warehouse

This blog is intended to provide insight into some of the data encoding issues that you may encounter while using PolyBase to load data to SQL Data Warehouse. It also provides some options that you can use to overcome such issues and load the data successfully.

Problem

In most cases, you will be migrating data from an external system to SQL Data Warehouse or working with data that has been exported in flat file format. If the data is formatted using either the UTF-8 or UTF-16 encoding standard, you can use PolyBase to load it. However, the format of your data depends on the encoding options supported by the source system. Some systems do not support UTF-8 or UTF-16 encoding. If the data you are working with is in an alternate format, such as ISO-8859-1, then being able to convert it to UTF-8/UTF-16 can save valuable time and effort.

The flow of data from a source system to Azure Blob Storage and then on to Azure SQL Data Warehouse (DW) is shown in the following graphic:

Azure Blob Storage is a convenient place to store data for use by Azure services like SQL DW. PolyBase makes it easy to access the data using T-SQL, for example by creating external tables over the data in Azure Blob Storage and loading it into internal tables of SQL Data Warehouse with a simple SELECT query.

If the volume of the data being loaded is small, then it may be easier to export the data from the source system again, this time using UTF-8/UTF-16 encoding. For larger volumes of data, however, re-export, data compression, and data load to Azure Blob Storage can take weeks. To avoid this delay, you need to be able to convert the encoding on the data files within the Azure environment without accessing the source system again.

Solution

The sections below provide details on the options you have for converting source file encoding to UTF-8/UTF-16.

Important: PolyBase supports only UTF16-LE. This shouldn’t matter for customers in the Windows ecosystem, but a customer who specifies UTF16-BE will have their load fail.
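One quick way to tell the two byte orders apart is the byte order mark: a UTF-16 LE file begins with the bytes ff fe, a UTF-16 BE file with fe ff. A small sketch (the file name and sample content are illustrative):

```shell
# Write a tiny UTF-16 LE file containing "hi" and inspect its first two
# bytes to confirm the little-endian byte order mark (ff fe).
printf '\377\376h\000i\000' > sample_utf16le.txt
bom=$(od -An -tx1 -N2 sample_utf16le.txt | tr -d ' \n')
if [ "$bom" = "fffe" ]; then
  echo "UTF-16 LE: PolyBase-compatible"
else
  echo "not UTF-16 LE: the load may fail"
fi
```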

Option 1: Notepad++

You can use the Notepad++ tool to change the encoding of a file on a local computer. Simply download the data file to a local computer, open the file in Notepad++, and then convert the file encoding to UTF-8/UTF-16.

1. To view the encoding of a source file, click the Encoding menu, as shown in the following graphic:

The source file in the example above is encoded in ANSI.

2. To convert file encoding to UTF-8, on the Encoding menu, select Convert to UTF-8.

3. Save the file, use the Encoding menu to view the encoding, and confirm that the file is now encoded using UTF-8.

After the file is saved in UTF-8 encoding, you can upload it to Azure Blob Storage and use PolyBase to load it into SQL Data Warehouse.

While this is a viable approach, there are some drawbacks, which are listed below:

Download time
Available space on local system
Upload time
Works only with small files because of memory and space constraints

Option 2: Azure VM

To overcome some of the drawbacks associated with using Notepad++, you can use an Azure VM to convert data file encoding. With this method, the entire process occurs within the Azure environment, thereby eliminating delays associated with transferring data between Azure and the local system. This process is shown in the following graphic:

This approach has the following high-level steps:

Setup an Azure VM (Windows or Linux)
Download data file from Azure Blob Storage to local storage on Azure VM
Extract data file (if applicable)
Convert data file encoding using a utility (custom/built-in)
Upload the converted data file from local storage on Azure VM to Azure Blob Storage

Note that this approach has its own drawbacks:

Download time
Available space on local system
Upload time

Option 3: Azure File Storage

To overcome the limitations associated with download and upload time when using Azure VMs, you can use Azure File Storage, which offers cloud-based SMB file shares that you can use to quickly migrate legacy applications that rely on file shares to Azure without costly rewrites. With Azure File Storage, applications running in Azure virtual machines or cloud services can mount a file share in the cloud, just as a desktop application mounts a typical SMB share. Any number of application components can then mount and access the File Storage share simultaneously, as shown in the following graphic:

Note: Learn more about Azure Storage.

When using Azure File Storage, be aware of the capacity limits identified in the following table:

Note: A full listing of Azure Storage Scalability and Performance Targets is now available.

With this approach, you keep all the data files on Azure File Storage and have an Azure VM that mounts the file share. Once the share is mounted, the Azure VM can read and write files directly on Azure File Storage without having to download them to, or upload them from, local storage on the VM.

This approach includes the following high-level steps:

Setup an Azure VM (Windows or Linux)
Mount Azure File Storage on Azure VM (see procedure below)
Extract data file (if applicable)
Convert data file encoding using a utility (custom/built-in)

The diagram below shows the complete flow of data compression, transfer, extraction, transformation, and load via PolyBase into SQL DW:

Mounting Azure File Storage to VM

The process of mounting Azure File Storage to VM, Ubuntu Linux VM in this case, involves three high-level steps:

Installing the required libraries/packages.

sudo apt-get install cifs-utils

Creating the mount point location on Azure VM to which the Azure File Storage will be mapped.

sudo mkdir /mnt/mountpoint

Mounting Azure File Storage location to Azure VM mount point.

sudo mount -t cifs //myaccountname.file.core.windows.net/mysharename /mnt/mountpoint -o vers=3.0,user=myaccountname,password=StorageAccountKeyEndingIn==,dir_mode=0777,file_mode=0777,serverino

Note: Get full details on mounting Azure File Storage from a Linux VM.

Automating data encoding conversion

This section provides some details on a project that leveraged this approach to convert the encoding of a data file:

Data from 131 tables exported from a Netezza system
4 data files per source table organized under the folder name representing the source table
All data files encoded in ANSI format (ISO-8859-1)
All data files compressed using GZ compression
Total compressed data files size was 750GB
Total uncompressed converted data files size was 7.6TB

The data files were organized on Azure File Storage in the following structure:

A snapshot of the bash script on Ubuntu VM that was used to convert the encoding on the data files automatically is shown in the following graphic:

This script performed the following:

Accepted the table name as an argument
Looped through each of the 4 data files for the given table
For each data file

Extracted the compressed GZ file using gunzip command
Converted the encoding of each file using iconv command where the source file encoding is specified as ISO-8859-1 and the target file encoding is specified as UTF-8
Wrote the converted file to a folder with the table name under ConvertedData

The script was further enhanced to loop through a list of table names and repeat the above process, rather than accepting the table name as an argument.
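Since the original script is shown only as a graphic, here is a minimal sketch of the same per-table loop. The directory layout, file names, and the sample data it creates are illustrative assumptions, not the customer's actual paths:

```shell
# Per-table conversion sketch: gunzip each .gz file for a table, convert
# ISO-8859-1 to UTF-8 with iconv, write the result under ConvertedData.
set -e
TABLE="${1:-mytable}"                 # table name accepted as an argument
mkdir -p "SourceData/$TABLE" "ConvertedData/$TABLE"
# create one sample gzipped ISO-8859-1 file ("café" with a Latin-1 0xe9 byte)
printf 'caf\351\n' | gzip > "SourceData/$TABLE/part1.gz"
for f in "SourceData/$TABLE"/*.gz; do
  base=$(basename "$f" .gz)
  gunzip -c "$f" | iconv -f ISO-8859-1 -t UTF-8 > "ConvertedData/$TABLE/$base"
done
```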

Convert from any encoding to any other encoding

The script can be modified to accept the from and to encoding as arguments instead of hardcoding them in the script. A full list of encodings supported by iconv command can be retrieved by running the command iconv -l on the computer you will be using to convert the data encoding. Be sure to check for any typos in the encoding format specified before running the command. A snapshot of the generic script and an example on how to invoke it is shown in the following graphic:
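As a sketch of that generic form, the source and target encodings become positional arguments rather than hardcoded values (the defaults and sample file names here are illustrative):

```shell
# Generic conversion: from/to encodings are passed as arguments instead
# of being hardcoded; the defaults and sample file are illustrative only.
FROM_ENC="${1:-UTF-8}"
TO_ENC="${2:-ISO-8859-1}"
printf 'caf\303\251\n' > infile       # "café" encoded as UTF-8
iconv -f "$FROM_ENC" -t "$TO_ENC" infile > outfile
```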

The above command converts the data files from UTF-8 encoding to ISO_8859-1 encoding format.

Recognition

The Data Migration Team would like to thank primary contributors Rakesh Davanum, Andy Isley, Joe Yong, Casey Karst, and Mukesh Kumar for their efforts in preparing this blog post. The details provided have been harvested as part of a customer engagement sponsored by the CSE DM Jumpstart Program.
Quelle: Azure

Announcing new set of Azure Services in the UK

We’re pleased to announce the following services which are now available in the UK!

Azure Container Service –  Azure Container Service is the fastest way to realize the benefits of running containers in production. It uses customers’ preferred choice of open source technology, tools, and skills, combined with the confidence of solid support and a thriving community ecosystem. Simplified configurations of proven open source container orchestration technology, optimized to run in the Azure cloud, are provided. In just a few clicks, customers can deploy container-based applications in production, on a framework designed to help manage the complexity of containers deployed at scale. Unlike other container services, Azure Container Service is built on 100% open source software and offers a choice between the open source orchestrators Kubernetes, DC/OS, and Docker Swarm with Swarm mode.
The UK region is the first Azure region featuring Docker Swarm mode instead of legacy Swarm.

Learn more about Container Service.

Log Analytics – Azure Log Analytics is a service in the Operations Management Suite (OMS) offering that monitors your cloud and on-premises environments to maintain their availability and performance. It collects data generated by resources in your hybrid cloud environments and from other monitoring tools to provide insights and analysis and help you detect and respond to issues quickly.
With the availability of Log Analytics in the UK, you can now access a full set of operations management and security services (Log Analytics, Automation, Security Center, Backup and Site Recovery) in the UK.

Learn more about Log Analytics.

Logic Apps –  Logic Apps provides a way to simplify and implement scalable integrations and workflows in the cloud. It provides a visual designer to model and automate your process as a series of steps known as a workflow. Logic Apps is a fully managed iPaaS (integration Platform as a Service), so developers don't have to worry about building, hosting, scalability, availability, and management. Logic Apps scales up automatically to meet demand.

Learn more about Logic Apps.

Azure Stream Analytics –  Azure Stream Analytics is a fully managed, cost effective real-time event processing engine that helps to unlock deep insights from data. Stream Analytics makes it easy to set up real-time analytic computations on data streaming from devices, sensors, web sites, social media, applications, infrastructure systems, and more.

With a few clicks in the Azure portal, you can author a Stream Analytics job specifying the input source of the streaming data, the output sink for the results of your job, and a data transformation expressed in a SQL-like language. You can monitor and adjust the scale/speed of your job in the Azure portal to scale from a few kilobytes to a gigabyte or more of events processed per second.
Stream Analytics leverages years of Microsoft Research work in developing highly tuned streaming engines for time-sensitive processing, as well as language integrations for intuitive specification of such processing.

Learn more about Stream Analytics.

SQL Threat Detection –  SQL Threat Detection provides a new layer of security, which enables customers to detect and respond to potential threats as they occur by providing security alerts on anomalous activities. Users will receive an alert upon suspicious database activities, potential vulnerabilities, and SQL injection attacks, as well as anomalous database access patterns. SQL Threat Detection alerts provide details of suspicious activity and recommend action on how to investigate and mitigate the threat. Users can explore the suspicious events using SQL Database Auditing to determine if they are caused by an attempt to access, breach, or exploit data in the database. Threat Detection makes it simple to address potential threats to the database without the need to be a security expert or manage advanced security monitoring systems.

Learn more about SQL Threat Detection.

SQL Data Sync Public Preview –  SQL Data Sync (Preview) is a service of SQL Database that enables you to synchronize the data you select across multiple SQL Server and SQL Database instances. To synchronize your data, you create sync groups which define the databases, tables and columns to synchronize as well as the synchronization schedule. Each sync group must have at least one SQL Database instance which serves as the sync group hub in a hub-and-spoke topology.

Learn more about Azure SQL Data Sync.

Managed Disks SSE (Storage Service Encryption) –  Azure Storage Service Encryption (SSE) is now supported for Managed Disks. SSE provides encryption-at-rest and safeguards your data to meet your organizational security and compliance commitments.
Starting June 10th, 2017, all new managed disks, snapshots, and images, as well as new data written to existing managed disks, are automatically encrypted at rest with keys managed by Microsoft.

Learn more about Storage Service Encryption for Azure Managed Disks.

We are excited about these additions, and invite customers using the UK Azure region to try them today!
Source: Azure

Enhanced app usage monitoring and investigative features in Azure Application Insights

Do you find yourself scrambling for information when investigating application performance issues? Do you spend hours (sometimes even days) gathering and reporting on usage and incidents of your application to stakeholders?

Azure Application Insights lets you learn, iterate, and improve the performance and usability of your apps and services by providing real-time insights based on machine learning and ad-hoc analytics. It helps you detect, investigate, and mitigate application performance issues before they impact users.

New features of Azure Application Insights, currently available in public preview, make it easy to collate and report all relevant information. Whether you are investigating an application performance issue or exploring application usage, you can journal the findings and narrate the complete story of your app.

Workbooks for Application Usage Monitoring

Workbooks, a new feature in Application Insights, lets you combine visualizations of usage data, Analytics queries, and text into interactive documents. It helps product owners answer questions about their app usage that span multiple visualization tools – Users, Sessions, Events, Retention, and Analytics – and then pull the results together into an easy-to-read form to share with their team. When you send a workbook to someone on your team, the controls and queries you used to make the workbook remain editable to them. This makes workbooks easy to explore, extend, and check for mistakes. Workbooks are available in the Usage section of Application Insights today.

To learn more, visit the documentation page on how to investigate and share usage data.

Screenshot of Workbooks in Azure Application Insights

User and Session Timeline Visualizations

Analyzing the behaviors of individual users can be illuminating. For example, by watching a customer use your product in person, you can learn where there are usability improvements you need to make. That’s why we’re making it even easier to filter down to individual users and sessions in Application Insights. In the Users, Sessions, and Events tools, you’ll find sections that give five sample users, sessions, or events, respectively, based on your query. We’re taking this a step further with a new timeline view when you click on sample sessions in the Sessions tool. This timeline makes it easy to browse the details, page views, and custom events of a session like a story. By stepping through a user’s experience in this way, you can infer difficulties or goals they may have had while using your product, then address these in a future release of your site.

Screenshot highlighting Session Timeline in Azure Application Insights

Funnels to understand User Flows

Funnels allow you to easily measure conversions of a sequence of events through visual representation without writing complicated queries. Use Funnels to understand where users are dropping off and pinpoint events or pages that are causing issues for your customers. To learn more, visit the documentation page on usage analysis for web applications. 

Screenshot of Funnels in Azure Application Insights

Curated performance investigation improvements

We listened to your feedback and made improvements to the application performance investigation experience. The performance blade now includes titles for each chart and grid. It also provides trends of operational responsiveness and lets you drill into the details of a specific slow operation of interest.


Screenshot of performance investigation in Azure Application Insights

Public preview management

As we continue to innovate and release new features in Application Insights, we want to provide the flexibility and control you need to leverage the new capabilities. To this end, we have enabled public preview management so that new experiences can be rolled out with minimal disruption to your business operations. Here you can try out new enhancements to Application Insights on your own schedule and test them in your pre-production environments before moving to production.

To learn more about how to configure previews, see the documentation link on how to preview upcoming changes.

Try out these features today and let us know what you think of them at Application Insights UserVoice.
Source: Azure

.NET: Manage Azure Container Service, Cosmos DB, Active Directory Graph and more

We released version 1.1 of the Azure Management Libraries for .NET. This release adds support for:

Cosmos DB
Azure Container Service and Registry
Active Directory Graph

https://github.com/azure/azure-sdk-for-net/tree/Fluent

Getting started

You can download 1.1 from:

Create a Cosmos DB with DocumentDB API

You can create a Cosmos DB account by using a define() … create() method chain.

var documentDBAccount = azure.DocumentDBAccounts.Define(docDBName)
    .WithRegion(Region.USEast)
    .WithNewResourceGroup(rgName)
    .WithKind(DatabaseAccountKind.GlobalDocumentDB)
    .WithSessionConsistency()
    .WithWriteReplication(Region.USWest)
    .WithReadReplication(Region.USCentral)
    .Create();

In addition, you can:

Create Cosmos DB with DocumentDB API and configure for high availability
Create Cosmos DB with DocumentDB API and configure with eventual consistency
Create Cosmos DB with DocumentDB API, configure for high availability and create a firewall to limit access from an approved set of IP addresses
Create Cosmos DB with MongoDB API and get connection string
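As a rough sketch of the last bullet above, the same fluent pattern can be used to create a MongoDB-API account and then read its connection strings. This is an illustrative sketch only – the exact method names assumed here (WithKind(DatabaseAccountKind.MongoDB), WithEventualConsistency, ListConnectionStrings) should be verified against the SDK samples on GitHub, and the call requires a signed-in azure client and existing resource group name:

```csharp
// Sketch only: assumes the fluent v1.1 API surface; verify names against the GitHub samples.
var mongoAccount = azure.DocumentDBAccounts.Define(docDBName)
    .WithRegion(Region.USEast)
    .WithNewResourceGroup(rgName)
    .WithKind(DatabaseAccountKind.MongoDB)   // MongoDB API instead of DocumentDB API
    .WithEventualConsistency()
    .WithWriteReplication(Region.USWest)
    .Create();

// Retrieve the connection strings for use with any MongoDB driver.
var connectionStrings = mongoAccount.ListConnectionStrings();
Console.WriteLine(connectionStrings.ConnectionStrings[0].ConnectionString);
```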

Create an Azure Container Registry

You can create an Azure Container Registry by using a define() … create() method chain.

var azureRegistry = azure.ContainerRegistries.Define("acrdemo")
    .WithRegion(Region.USEast)
    .WithNewResourceGroup(rgName)
    .WithNewStorageAccount(saName)
    .WithRegistryNameAsAdminUser()
    .Create();

You can get Azure Container Registry credentials by using ListCredentials().

RegistryListCredentials acrCredentials = azureRegistry.ListCredentials();

Create an Azure Container Service with Kubernetes Orchestration

You can create an Azure Container Service by using a define() … create() method chain.

var azureContainerService = azure.ContainerServices.Define(acsName)
    .WithRegion(Region.USEast)
    .WithNewResourceGroup(rgName)
    .WithKubernetesOrchestration()
    .WithServicePrincipal(servicePrincipalClientId, servicePrincipalSecret)
    .WithLinux()
    .WithRootUsername(rootUserName)
    .WithSshKey(sshPublicKey)
    .WithMasterNodeCount(ContainerServiceMasterProfileCount.MIN)
    .WithMasterLeafDomainLabel("dns-myK8S")
    .DefineAgentPool("agentpool")
        .WithVMCount(1)
        .WithVMSize(ContainerServiceVMSizeTypes.StandardD1V2)
        .WithLeafDomainLabel("dns-ap-myK8S")
        .Attach()
    .Create();

Create Service Principal with Subscription Access

You can create a service principal and assign it to a subscription with contributor role by using a define() … create() method chain.

var servicePrincipal = authenticated.ServicePrincipals.Define("spName")
    .WithExistingApplication(activeDirectoryApplication)
    // define password credentials
    .DefinePasswordCredential("ServicePrincipalAzureSample")
        .WithPasswordValue("StrongPass!12")
        .Attach()
    // define certificate credentials
    .DefineCertificateCredential("spcert")
        .WithAsymmetricX509Certificate()
        .WithPublicKey(File.ReadAllBytes(certificate.CerPath))
        .WithDuration(TimeSpan.FromDays(7))
        // export credentials to a file
        .WithAuthFileToExport(new StreamWriter(
            new FileStream(authFilePath, FileMode.OpenOrCreate)))
        .WithPrivateKeyFile(certificate.PfxPath)
        .WithPrivateKeyPassword(certPassword)
        .Attach()
    .WithNewRoleInSubscription(role, subscriptionId)
    .Create();

Similarly, you can:

Manage service principals
Browse the graph (users, groups, and members) and manage roles
Manage passwords

Try it

You can get more samples from GitHub. Give it a try and let us know what you think by emailing us or commenting below.
Source: Azure

ISVs find their cloud footing on Azure

This post is authored by the ISV team.

According to Gartner, “By 2020, anything other than a cloud-only strategy for new IT initiatives will require justification at more than 30% of large-enterprise organizations.” With innovation shifting to public datacenters, pressure is on ISVs to develop their own cloud roadmap.

Moving to the cloud is a big step, but it might be easier than you think. The Microsoft Azure platform has an array of options that accelerate business transformation. Move to the cloud on your terms, and from there the sky’s the limit.

For example, Baker Hill, a technology solution provider to more than 600 banks and credit unions, needed to move more than 10 terabytes of data from its parent company’s datacenter in just 48 hours without using a transfer agent or touching anything in the originating datacenter. With help from Microsoft, Baker Hill migrated hundreds of databases with time to spare by using Azure ExpressRoute connected to Equinix’s high-speed network. And now that Baker Hill has met its migration deadline, the company is continuing to transform its offerings with Azure.

In another scenario, Brainshark, which provides its clients worldwide with a cloud-based sales readiness and training platform, needed to find a more elastic solution to handle an ever-expanding volume of video content. To eliminate storage and processing constraints, Brainshark moved to Azure. In addition to improving the end-user experience, the transition virtually eliminated maintenance costs. But that was just the first step. Next, the company created Brainshark Labs, an incubator for next-generation sales enablement solutions that include wearable technology, virtual reality, and artificial intelligence. For this next chapter of innovation, Brainshark integrated Azure Cognitive Services with HoloLens mixed-reality simulation technology to transform sales training and customer engagement.

These are just two of many success stories with Microsoft technologies. Are you ready to add yours?

Learn more about partnering with Microsoft.
Source: Azure

Announcing the Solution Template for Jenkins on Azure

Have you been looking for the Microsoft Azure Marketplace image for Jenkins on Azure? We removed it because the Jenkins version it used was outdated. I am excited to announce its replacement and share some updates from our team.

Solution template for Jenkins in Azure Marketplace

The solution template for Jenkins in Azure Marketplace is designed to configure a Jenkins instance following best practices, with minimal Azure knowledge required. You can now provision a fully configured Jenkins instance in minutes with a single click through the Azure portal and a handful of user inputs.

The template installs the latest stable Jenkins version on a Linux (Ubuntu 14.04 LTS) Virtual Machine along with the following tools and plugins configured to work with Azure:

Git for source control
Azure Credentials plugin for connecting securely
Azure VM Agents plugin for elastic build, test and continuous integration
Azure Storage plugin for storing artifacts
Azure CLI to deploy apps using scripts

You can find a 5-minute quickstart that provides a step-by-step walkthrough on the new Jenkins Hub. And yes, we now have a central hub where you can get all Jenkins on Azure resources.

Azure Credentials plugin version 1.2

We updated the Azure Credentials plugin so that you can now retrieve an Azure service principal and use it in Azure CLI.

In the code snippet below, substitute 'my service principal' with your credential ID in your Jenkins instance.

withCredentials([azureServicePrincipal('my service principal')]) {
    sh 'az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET -t $AZURE_TENANT_ID'
}

This article on the Jenkins Hub shows you how to create a Jenkins pipeline that checks out the source code from a GitHub repo, runs Maven, and then uses the Azure CLI to deploy to Azure App Service.
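Such a pipeline can be sketched as a declarative Jenkinsfile along the following lines. This is an illustrative config fragment only: the repository URL and the deployment step are placeholders, and the complete working example is in the article on the Jenkins Hub.

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Placeholder repository URL
                git 'https://github.com/<your-org>/<your-repo>.git'
            }
        }
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Deploy') {
            steps {
                // Uses the Azure Credentials plugin's service principal binding
                withCredentials([azureServicePrincipal('my service principal')]) {
                    sh 'az login --service-principal -u $AZURE_CLIENT_ID -p $AZURE_CLIENT_SECRET -t $AZURE_TENANT_ID'
                    // Placeholder: deploy the built artifact to App Service with the Azure CLI
                }
            }
        }
    }
}
```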

As always, we would love to get your feedback via comments. You can also email Azure Jenkins Support to let us know what you think.
Source: Azure