Announcing Azure Database for MySQL and Azure Database for PostgreSQL availability in India

We’re excited to announce the public preview availability of Azure Database for MySQL and Azure Database for PostgreSQL in India data centers (Central India and West India). The availability of these services in India gives app developers the ability to choose from an even wider range of geographies and deploy their favorite database on Azure, without the complexity of managing and administering the databases.

Azure Database for MySQL and Azure Database for PostgreSQL, built on the community editions of MySQL and PostgreSQL, offer built-in high availability, security, and on-the-fly scaling with minimal downtime, all with an inclusive pricing model that lets developers simply focus on developing apps. In addition, you can seamlessly migrate your existing apps without any changes and continue using existing tools.

Learn more about Azure Database for PostgreSQL and Azure Database for MySQL, or just create a new database with MySQL or PostgreSQL. You can also read the public preview launch blogs for MySQL and PostgreSQL.

Creating an Azure Database for MySQL in India

To create a new MySQL database in one of the India data centers, follow the Create process, choosing a new logical server in one of the India data centers (Central India or West India).

Creating an Azure Database for PostgreSQL in India

To create a new PostgreSQL database in one of the India data centers, follow the Create process, choosing a new logical server in one of the India data centers (Central India or West India).

Solutions and Samples

You can access sample PostgreSQL applications on GitHub that allow you to deploy our sample Day Planner app, using Node.js or Ruby on Rails, in your own Azure subscription with a PostgreSQL database backend. The Day Planner app is a sample application that can be used to manage your day-to-day engagements. The app marks engagements, displays routes between them, and shows the distance and time required to reach the next engagement.

We also support deploying Azure Web Apps with a MySQL database backend as a template on GitHub.

Developers can connect seamlessly to our PostgreSQL and MySQL services using the native tools they are used to, and continue to develop in Python, Node.js, Java, PHP, or any programming language of their choice. The services also work seamlessly with your favorite open source frameworks such as Django and Flask. If you have a sample application you would like to host in our GitHub repo, or suggestions or feedback about our sample applications, please feel free to submit a pull request and become a contributor on our repo. We love working with our community to provide ready-to-go applications for the community at large.
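
As an illustration, here is a minimal connectivity sketch in Python. The server name mydemoserver, the admin login myadmin, and the placeholder passwords are hypothetical; the user@servername login format and the *.postgres.database.azure.com / *.mysql.database.azure.com host names follow the services' standard conventions.

```python
# Minimal connectivity sketch (hypothetical server and credentials).
# pip install psycopg2-binary mysql-connector-python
import psycopg2
import mysql.connector

# Azure Database for PostgreSQL: the login takes the form user@servername,
# and connections are encrypted by default, so we require SSL.
pg = psycopg2.connect(
    host="mydemoserver.postgres.database.azure.com",
    user="myadmin@mydemoserver",
    password="<your-password>",
    dbname="postgres",
    sslmode="require",
)
with pg.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
pg.close()

# Azure Database for MySQL follows the same pattern.
my = mysql.connector.connect(
    host="mydemoserver.mysql.database.azure.com",
    user="myadmin@mydemoserver",
    password="<your-password>",
    database="mysql",
)
cur = my.cursor()
cur.execute("SELECT VERSION();")
print(cur.fetchone())
my.close()
```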

Feedback

As with all new feature releases, we would love to receive your feedback. Feel free to leave comments below. You can also engage with us directly through UserVoice (PostgreSQL and MySQL) if you have suggestions on how we can further improve the service.

Sunil Kamath
Twitter: @kamathsun
Source: Azure

Imanis Data – Cloud migration, backup, and restore for your big data applications on Azure HDInsight

We are pleased to announce the availability of Imanis Data on Azure.

Azure HDInsight is a fully managed cloud Apache Hadoop and Spark offering that allows customers to run reliable open source analytics backed by an industry-leading SLA. Imanis Data provides data management software that allows users to migrate data and add backup and restore functionality for their big data applications.

This combined offering of Imanis Data on HDInsight, with integration into Azure Blob Storage and Azure Data Lake Store, enables customers to migrate to the cloud faster and protect critical data assets from application or human error.

Microsoft Azure HDInsight – Reliable open source analytics at enterprise grade and scale

HDInsight is the only fully managed cloud Hadoop offering that provides optimized open source analytical clusters for Spark, Hive, Interactive Hive, MapReduce, HBase, Storm, Kafka, and R Server, backed by a 99.9% SLA. Each of these big data technologies is easily deployable as a managed cluster with enterprise-level security and monitoring.

Imanis Data – Cloud migration, backup and restore for big data applications

The explosive growth of cloud computing in general, and the rise of big data applications, has brought about a need to ensure that workloads previously running on-premises can run at scale in Azure, and that the underlying HDInsight data assets are protected from disasters, human error, and application corruption.

To that end, we’re excited to highlight Imanis Data (formerly Talena, Inc.), who just launched their software solution on Azure. Imanis Data provides data management software that covers a wide range of use cases that will benefit HDInsight customers, including:

Migration of on-premises or other cloud big data workloads to Azure HDInsight: Imanis Data provides a compelling way for companies to migrate their big data workloads to HDInsight, independent of which Hadoop distribution they’re using. This includes both data and application-specific metadata.
Cloud Disaster Recovery: Imanis Data can easily be used as the basis for moving both data and metadata of your open source workloads, such as Hive, HBase, and Spark, to a secondary region, enabling cross-region DR.
Scalable Backup and Rapid Recovery: Imanis Data enables extremely rapid backup and point-in-time recovery of petabyte-scale data used by open source workloads such as Hive, HBase, and Spark.
Test Data Management: As enterprises move data to the cloud, protecting PII is critical. The native data masking capabilities in Imanis Data enable enterprises to protect sensitive data while migrating data to QA, data analytics or other clusters in the cloud.
Archiving for compliance and regulatory requirements.
Native integration with Microsoft Azure Blob Storage and Azure Data Lake Store. 

To support these diverse use cases, the Imanis Data software architecture incorporates:

A distributed and highly-scalable file system that enables support for petabyte-scale workloads.
Rapid recovery capabilities with an intuitive metadata catalog, the flexibility to recover to different database topologies, and support for parallel data transfers.
A built-in storage optimization engine that focuses on incremental-forever backups, global block-level de-duplication, and compression.
Agentless integration with various databases.
Support for data mirroring and replication across multiple Azure regions.

To learn more about the Imanis Data offering on Azure, please see this.

Getting started with Imanis Data on Azure HDInsight

You can install Imanis Data from the Azure Marketplace. The Imanis Data software is installed on a VM that sits outside the cluster.

To configure Imanis Data for Azure HDInsight, please read this detailed guide.

After you install it, connect to the Azure HDInsight cluster and perform the following operations:

Connect Imanis Data to an on-premises Hadoop or Spark cluster: Imanis Data can help migrate data from on-premises Hadoop, Spark, or HBase, as well as the metadata associated with these workloads, to the cloud. You can store the data in Azure Blob Storage or Azure Data Lake Store. Once you move the data, you can run Hadoop, Spark, or HBase, or use R Server on Azure HDInsight to perform advanced analytics.
Cloud Disaster Recovery: Imanis Data can easily be used as the basis for moving both data and metadata of your open source workloads, such as Hive, HBase, and Spark, to a secondary region, enabling cross-region DR.
Scalable Backup and Rapid Recovery: Imanis Data enables extremely rapid backup and point-in-time recovery of petabyte-scale data used by open source workloads such as Hive, HBase, and Spark.
Test Data Management: As enterprises move data to the cloud, protecting PII is critical. The native data masking capabilities in Imanis Data enable enterprises to protect sensitive data while migrating data to QA, data analytics or other clusters in the cloud.
Archiving for compliance and regulatory requirements.

Joint webinar on cloud migration, backup and restore, and more

We hosted a joint webinar on June 27, during which we highlighted how enterprises can benefit from using Imanis Data to manage their big data applications on HDInsight. We covered various patterns on how you can use Imanis Data to set up a hybrid environment, dev/test management, backup and restore, and replication across different regions in Azure. In case you missed it, you can still watch the webinar to learn more. We look forward to talking with you and getting your feedback.

Resources

The following resources are available to learn more about this integration:

Learn more about Azure HDInsight
Talena Enables Rapid Migration of Modern Data Workloads to Microsoft Azure HDInsight
Get Imanis Data from Azure Marketplace
Getting started with Imanis Data on Azure
Getting started with Imanis Data on Azure HDInsight
Imanis Data Management Solution on Azure Data Sheet
Deploy Imanis Data on Azure HDInsight for a cross-geo replication scenario
Learn more about Imanis Data on Azure
Ask HDInsight questions on Stack Overflow

Summary

This combined offering of Imanis Data on HDInsight, with integration into Azure Blob Storage and Azure Data Lake Store, enables customers to migrate to the cloud faster and protect critical data assets from application or human error. If you have any feedback or questions, feel free to drop us an email at hdiask@microsoft.com. We’d love to hear from you!
Source: Azure

On-premises data gateway support for Azure Analysis Services

Azure Analysis Services now supports the shared On-Premises Data Gateway, which is used with Power BI, Flow, Logic Apps, and PowerApps. This has been a top ask in our user feedback. The shared gateway allows you to associate many services with one gateway, or you can continue to use a dedicated gateway. With the shared gateway, managing connectivity is much easier. For example, you can configure multiple Azure Analysis Services servers to use the same gateway just by associating each one with the same gateway.

To use the shared gateway, the first step is to set up the On-Premises Data Gateway by downloading and running the gateway installer on a local computer. During the install, you will be prompted for your work or school account, which will be set up as a gateway administrator in the gateway service. To associate your gateway with an Azure resource, you will need to be an administrator. After you set up a recovery key, you may need to change the region of the gateway.

For performance and reliability purposes, Azure Analysis Services will only use a gateway resource from the same region. For instance, if you have an Azure Analysis Services server in the East US 2 region, you will need to have a gateway configured for that region. Multiple Azure Analysis Services servers in East US 2 can use the same gateway. Picking the right region is required; otherwise, you won’t be able to associate the gateway with Azure Analysis Services.

Once you complete the setup and any needed network configuration for firewalls, ports, et cetera, you will need to create a gateway resource in Azure. You can use the same settings and troubleshooting steps as for the Power BI On-premises Data Gateway, since it is the same gateway!

Again, the gateway resource will need to be in the same region as your Azure Analysis Services server.

After adding the gateway resource in Azure, you can go to your Azure Analysis Services server and configure it to use that gateway from the new gateway blade. On this blade, just pick the gateway and connect it to Azure Analysis Services.

Now it is connected!

You can use this same gateway for multiple Azure Analysis Services servers in this region. You can also use this gateway with Flow, Logic Apps, PowerApps, and Power BI (if Power BI is in the same region). This is also useful for dev/test configurations. Keep in mind that you need admin privileges on both the gateway and Azure Analysis Services to create the connection, and that Azure Analysis Services and the gateway need to be in the same region. Once connected, any Azure Analysis Services data source can use that gateway for DirectQuery or processing.
Source: Azure

Investing deeply in Terraform on Azure

As customers deploy more applications to Azure, we are seeing growing interest in DevOps tooling on Azure. We also see customers looking to deploy applications across multiple environments, including hybrid and multi-cloud deployments, while using the same tooling and enabling the same DevOps experiences. To meet these growing needs, I am excited to announce that we are greatly increasing our investment in Terraform, partnering closely with HashiCorp, a well-known voice in the DevOps and cloud infrastructure management space.

Our partnership with HashiCorp goes back to early 2016, when we jointly announced plans to bring full support for Azure Resource Manager to many tools in HashiCorp’s portfolio, including Packer and Terraform. Since then, our customers have found significant value in the HashiCorp support on Azure.

Today, we’re extending our partnership and will offer an increasing number of services directly supported by Terraform, including Azure Container Instances, Azure Container Service, Managed Disks, Virtual Machine Scale Sets and others. We want to give additional flexibility to express infrastructure-as-code and to enable many more native Microsoft Azure services to be easily deployed directly through Terraform. Learn more about the Azure provider for Terraform.

I am really excited about our partnership with HashiCorp. They are well-positioned to support the complexity and diversity of this space. They also have a rich portfolio of products that can help our customers adopt DevOps principles to automate management of their infrastructure on Azure and across multiple environments.

If you’re looking to get started, give Terraform in Azure a try today! Stay tuned for additional updates as we work together in the open source project to deliver this increased support.

See ya around,

Corey
Source: Azure

Reference Architecture for a high availability SharePoint Server 2016 farm in Azure

The Azure CAT Patterns & Practices team has published a new reference architecture for deploying and running a high availability SharePoint Server 2016 farm in Azure.

It provides prescriptive guidance including the following topics:

Architecture resources necessary for the deployment, including recommendations.
Scalability considerations.
Availability considerations.
Manageability considerations.
Security considerations.

Like all reference architectures found in the Azure Architecture Center, it includes prescriptive guidance and a set of PowerShell scripts and Azure Resource Manager templates to deploy a working SharePoint Server 2016 farm with SQL Server Always On and a simulated on-premises network. The deployment for this reference architecture takes only a few hours, simplifying a task that previously would take several days to build out and test.

We invite you to review the reference architecture, try out the deployment, and even contribute to this and other reference architectures on GitHub.

Note: The compute requirements for a SharePoint HA farm are significantly higher than many workloads running on-premises or in the cloud. If you do deploy this, be aware that the full deployment will consume 38 cores. So, if you’re just kicking the tires, be sure to shut down your virtual machines when you’re finished to avoid any surprises on your bill.

Source: Azure

Reference Architecture for SAP NetWeaver and SAP HANA on Azure

The Azure CAT Patterns & Practices team has published their first reference architecture for SAP NetWeaver and SAP HANA, covering SAP workloads running in Azure. It provides prescriptive guidance on how to run SAP HANA on Azure, including the following topics:

Architecture resources necessary for the deployment, including recommendations.
Scalability considerations.
Availability considerations.
Manageability considerations.
Security considerations.

Like all reference architectures found in the Azure Architecture Center, it provides a set of PowerShell scripts and Azure Resource Manager templates to deploy the reference architecture. The deployment time for this one is about two hours, simplifying a task that previously would take days.

This reference architecture expands on the Hybrid VPN reference architecture that will typically be used in a production environment. However, this reference architecture does not deploy the Hybrid VPN resources; instead, it deploys everything but the VPN gateway in the cloud. So, if you plan to implement the SAP HANA reference architecture in a production environment, consider deploying the Hybrid VPN reference architecture first. Then you’ll be able to deploy the SAP HANA reference architecture into your virtual network configured with VPN.

We invite you to review the reference architecture, try out the deployment, and even contribute to this and other reference architectures on GitHub.

Note: The compute requirements for SAP are significantly higher than many workloads running on-premises or in the cloud. If you do deploy this, be aware that the full deployment will consume 49 cores. So, if you’re just kicking the tires, be sure to shut down your virtual machines when you are finished to avoid any surprises on your bill.

The deployed resources have been tuned for SAP HANA, as follows:

VM SKUs have been validated for small to medium SAP deployments.
VM computer names are set up per SAP requirements.
.NET 3.5 is loaded for the SCS machines, as required by SIOS DataKeeper.
A health probe has been set up for TCP port 59999 with a 10-second interval and a 30-second idle timeout.
A jumpbox for administrative purposes was deployed.

Source: Azure

Announcing the Just-In-Time VM Access public preview

Attackers commonly target cloud environments with brute force or port scanning attacks, typically against management ports like RDP and SSH that are left open to enable administrator access. In addition to detecting and alerting you to these attacks, Azure Security Center has just released a new Just-In-Time (JIT) VM Access mechanism. JIT VM Access, now in public preview, significantly reduces your exposure to these attacks by enabling you to deny persistent access while providing controlled, audited access to VMs when needed.

Based on the security policy you set, Azure Security Center can recommend that JIT Access be enabled on your existing VMs and any new ones that are created. When JIT VM Access is enabled, Azure Security Center locks down inbound traffic to defined ports by creating Network Security Group rule(s). You can request access to the VM when needed, which opens the needed port for an approved amount of time, from approved IP addresses, and only for users with proper permissions. Requests are logged in the Azure Activity Log, so you can easily monitor and audit access. You can also enable JIT VM Access, configure policies, and request access through PowerShell cmdlets.

Enable JIT VM Access and apply policies

In the JIT VM Access blade, administrators can easily enable JIT VM Access for all or select VMs. They can configure the policy that will determine the ports to be protected, allowed protocols, IP addresses from which these ports can be accessed, and the maximum time window for which a port can be opened. The policy will determine which options are available to users when they request access.

Requesting JIT Access to a VM

Anyone with the right permissions (based on Azure RBAC) can request access to a VM. Based on the JIT VM Access policy, they can select the ports they need access to, from which IPs, and for what timeframe. Access is automatically granted.

These new capabilities are available within the standard pricing tier of Azure Security Center, and you can try them for free for the first 60 days.

To learn more about JIT VM Access, watch the microlearning video or see the documentation.
Source: Azure

Introducing Azure Event Grid – an event service for modern applications

Most modern applications are built using events – whether it is reacting to changes coming from IoT devices, responding to user clicks on mobile apps, or initiating business processes from customer requests. With the growth of event-based programming, there is an increased focus on serverless platforms, like Azure Functions, a serverless compute engine, and Azure Logic Apps, a serverless workflow orchestration engine. Both services enable you to focus on your application without worrying about any infrastructure, provisioning, or scaling.

Today, I am excited to announce that we are making event-based and serverless applications even easier to build on Azure. Azure Event Grid is a fully-managed event routing service and the first of its kind. Azure Event Grid greatly simplifies the development of event-based applications and simplifies the creation of serverless workflows. Using a single service, Azure Event Grid manages all routing of events from any source, to any destination, for any application.

Azure Event Grid is an innovative offering that makes an event a first-class object in Azure. With Azure Event Grid, you can subscribe to any event that happens across your Azure resources and react using serverless platforms like Functions or Logic Apps. In addition to built-in publishing support for events from services like Blob Storage and Resource Groups, Event Grid gives you the flexibility to create your own custom events and publish them directly to the service. And in addition to a wide range of Azure services with built-in handlers for events, like Functions, Logic Apps, and Azure Automation, Event Grid supports custom webhooks to publish events to any service, even third-party services outside of Azure. This flexibility creates endless application options and makes Azure Event Grid a truly unique service in the public cloud.
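
As a quick illustration of custom events, here is a minimal sketch that publishes an event to an Event Grid custom topic over plain HTTPS. The topic name, region, and key are hypothetical placeholders; the endpoint shape, the aeg-sas-key header, and the event fields follow Event Grid's documented custom-topic conventions.

```python
# Publish a custom event to an Event Grid custom topic over HTTPS.
# The topic endpoint and key below are hypothetical placeholders.
import json
import uuid
from datetime import datetime, timezone

import requests  # pip install requests

TOPIC_ENDPOINT = "https://mytopic.westus2-1.eventgrid.azure.net/api/events"
TOPIC_KEY = "<your-topic-access-key>"

event = {
    "id": str(uuid.uuid4()),
    "eventType": "contoso/orders/created",  # custom event type, usable for filtering
    "subject": "orders/eu/12345",           # subscribers can filter on prefix/suffix
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "data": {"orderId": "12345", "amount": 42.50},
    "dataVersion": "1.0",
}

# Event Grid expects a JSON array of events and the aeg-sas-key header.
resp = requests.post(
    TOPIC_ENDPOINT,
    headers={"aeg-sas-key": TOPIC_KEY, "Content-Type": "application/json"},
    data=json.dumps([event]),
)
resp.raise_for_status()
```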

Here are some additional details of this new Azure service:

Events as first-class objects with intelligent filtering: Azure Event Grid enables direct event filtering using event type, prefix or suffix, so your application will only need to receive the events you care about. Whether you want to handle built-in Azure events, like a file being added to storage, or you want to produce your own custom events and event handlers, Event Grid enables this through the same underlying model. Thus, no matter the service or the use case, the intelligent routing and filtering capabilities apply to every event scenario and ensure that your apps can focus on the core business logic instead of worrying about routing events.
Built to scale: Azure Event Grid is designed to be highly available and to handle massive scale dynamically, ensuring consistent performance and reliability for your critical services.
Opens new serverless possibilities: By allowing serverless endpoints to react to new event sources, Azure Event Grid enables event-based scenarios to span new services with ease, increasing the possibilities for your serverless applications. Both code-focused applications in Functions and visual workflow applications in Logic Apps benefit from Azure Event Grid.
Lowers barriers to ops automation: The same unified event management interface enables simpler operational and security automation, including easier policy enforcement with built-in support for Azure Automation to react to VM creations or infrastructure changes.
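
On the receiving side, any HTTPS endpoint can act as an event handler via the custom webhook support mentioned above. Below is a minimal handler sketch in Python using Flask; the route and port are arbitrary choices, and the validation logic follows Event Grid's documented subscription-validation handshake, in which the endpoint echoes back a validation code to prove ownership.

```python
# Minimal Event Grid webhook handler sketch (hypothetical route and port).
from flask import Flask, jsonify, request  # pip install flask

app = Flask(__name__)

@app.route("/api/events", methods=["POST"])
def handle_events():
    events = request.get_json()
    for event in events:
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            # Echo the validation code back so Event Grid can confirm
            # that we own this endpoint.
            code = event["data"]["validationCode"]
            return jsonify({"validationResponse": code})
        # Any other event is ours to handle; type/prefix/suffix filtering
        # has already happened on the Event Grid side.
        print(event["eventType"], event["subject"])
    return "", 200

if __name__ == "__main__":
    app.run(port=8000)
```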

Today, Azure Event Grid launches with built-in integration for a number of Azure sources and handlers, and we are working to deliver many more event sources and destinations later this year, including Azure Active Directory, API Management, IoT Hub, Service Bus, Azure Data Lake Store, Azure Cosmos DB, Azure Data Factory, and Storage Queues.

Azure Event Grid has a pay-per-event pricing model, so you only pay for what you use. To help you get started quickly, the first 100,000 operations per month are free; beyond that, pricing during the preview is $0.30 per million operations. More details can be found on the pricing page.
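
As a back-of-the-envelope sketch of how the preview pricing works out (the 5 million operations below is a hypothetical volume):

```python
# Estimate a monthly Event Grid bill during the preview:
# the first 100,000 operations are free, then $0.30 per million.
def event_grid_monthly_cost(operations: int) -> float:
    billable = max(0, operations - 100_000)
    return billable * 0.30 / 1_000_000

print(event_grid_monthly_cost(5_000_000))  # 1.47 -> about $1.47 for 5M operations
```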

Azure Event Grid completes the missing half of serverless applications. It simplifies event routing and event handling with unparalleled flexibility. I am excited about the endless possibilities!

Go ahead and give it a try. I can’t wait to see what you build. To learn more, try the quick start.

See ya around,

Corey
Source: Azure

How Azure Security Center aids in detecting good applications being used maliciously

We’ve written in the past about how Azure Security Center helps detect malicious activity on compromised VMs, including a post detailing a Bitcoin mining attack and one on an outbound DDoS attack. In many cases, attackers use a set of malicious tools to carry out these and other actions on a compromised machine. However, our team of security researchers has identified a new trend in which attackers use good applications to carry out malicious actions. This blog discusses the use of known hacker tools, as well as tools that are not nefarious in nature but are being used maliciously, and how Azure Security Center aids in detecting their use.

Hacker tools aid in exploitation

Generally, the first category of tools we see after a brute force attack is port and IP address scanning tools. Most of these tools were not written maliciously, but because of their ease of use, an attacker can scan IP ranges and ports to find vulnerable machines to target.

One of the more frequent port scanning tools that we come across is KportScan 3.1, which has the ability to scan for open ports as well as local ports. It has a wide range of uses, including working with any port as well as individual addresses and IP ranges. It is multithreaded (1,200 flows), consuming very few resources on compromised machines, and the best part is that the tool is free. After running a scan, results are stored by default in a file called “results.txt”. For example, KportScan can be configured to return all IPs within specified ranges that have port 3389 open to the internet.

Other scanners that we see dropped on machines after they have been compromised include Masscan, xDedicIPScanner, and D3vSpider.  These tend to be less frequent, but are notable.

Masscan claims to be one of the fastest Internet port scanners out there. It purports to scan the entire internet in under 6 minutes, with your own network bandwidth being the only gating factor. While Linux is its primary platform, it runs on many other operating systems, including Windows and Mac OS X. For example, it can be configured to scan for open port 3389 across the subnet range 104.208.0.0 to 104.215.255.255 (512K worth of addresses) and store the results in an XML file called good.xml.

xDedicIPScanner is another port scanner, based on Masscan. It has many of the same capabilities as Masscan, but does not require a user to learn Linux, as it is GUI based. Some of its features include scanning of CIDR blocks, automatic loading of country ranges, a small footprint, and the ability to scan multiple ports or a range of ports. It has a few application dependencies, including WinPcap and Microsoft Visual C++ 2010, and it requires Windows 7 or higher. From our observations, xDedicIPScanner appears to be used primarily for malicious purposes.

Finally, D3vSpider is a scanner we’ve seen often. It is not a port scanner; instead, it scans Pastebin repositories, which are popular for storing and sharing text (including stolen passwords, usernames, and network data). The tool’s output is based on the user’s search criteria and provides information including user names and their passwords. In one case, a single month’s scan returned 745 user names and passwords, which can then be exported to a txt file for future use with other tools, such as NLBrute, a known RDP brute force tool.

Recently, we have begun to see messaging applications being used to drop other malicious and non-malicious tools. These messaging applications are widely used and not malicious in nature. They tend to be cloud-based messaging services with support for a broad base of devices, including both desktop systems and mobile devices. Users can exchange messages and files of any type, with or without end-to-end encryption, as well as sync messages across all of their devices. Some of these messaging services even allow messages to be “self-destructed” after delivery, so they are no longer seen on any device. One of the features of these messaging applications is known as “secret” chat: the users exchange encryption keys, and once the exchange is verified, they can communicate freely without the possibility of being tracked. These features, and many more, have made these messaging services a favored addition to some attackers’ tool boxes.

Their ability to easily drop files onto other machines appears to be one of the main reasons attackers use these applications. In fact, we started to see the presence of these tools on compromised machines as early as December of 2016. At first, we dismissed this as coincidence, but after further investigation we started seeing known hacker tools (NLBrute, Dubrute, and D3vSpider) show up on compromised machines after the installation of these messaging applications.

Since these tools synchronize messages across all of a user’s devices, anyone who is part of the conversation can revisit a message at a later time to download a file or a picture.

Another method we have seen is the creation of messaging channels for the primary purpose of broadcasting messages to an unlimited number of subscribers. Channels can be either publicly available or private: public channels can be joined by anyone, while private channels require you to be added or to receive an invite to participate. Due to the encryption that these applications deploy, we see very little activity other than the machine joining a chat channel.

While the joining of a channel is of interest, the files that appear on the machine afterward are what is most interesting. These range from cracking tools to RDP brute force tools to encryption tools that allow attackers to hide their traffic and obscure the source IP addresses of their activity.

What’s next

We’ve presented some of the more frequently seen tools favored by attackers and used on virtual machines in Azure. These tools were, for the most part, created for legitimate usage without malicious intent; however, because of their functionality and ease of use, they are now being used maliciously.

While the presence of any one of these tools may not be reason for alarm, a closer look into other factors will help to determine whether they are being used maliciously. For example, if we see more than one of them on an Azure virtual machine, the likelihood that the machine is compromised is much greater, and further investigation may be required. Seeing a tool’s usage in the context of other activity on the Azure machine is also very important in determining whether it is being used maliciously. Ian Hellen’s blog on Azure Security Center context alerts describes how much of the tedious security investigation work is automated by Security Center, and how relevant context is provided about what else was happening on the Azure machine during and immediately before suspicious activity was detected. Tools like KportScan, Masscan, xDedicIPScanner, and D3vSpider, as well as malicious use of messaging services, will be detected and alerted on by Azure Security Center context alerts.

Based on the number of incidents investigated, the usage of legitimate tools for malicious purposes appears to be an upward trend. In response, Azure Security Center’s team of analysts, investigators, and developers continues to actively hunt and watch for these types of indicators of compromise (many of which are simply not detected by some AV signatures). We currently detect all of the tools discussed in this blog, and as we find more we are adding them to our Azure Security Center detections.

Recommended remediation and mitigation steps

Microsoft recommends investigating the attack campaign via a review of available log sources, host-based analysis, and, if needed, forensic analysis to help build a picture of the compromise. In the case of Azure Infrastructure as a Service (IaaS) virtual machines (VMs), several features are present to facilitate the collection of data, including the ability to attach data drives to a running machine and disk imaging capabilities.

In cases where the victim machine cannot be confirmed clean, or a root cause of the compromise cannot be identified, Microsoft recommends backing up critical data and migrating to a new virtual machine. It is also recommended that virtual machines be hardened prior to bringing them online, to prevent compromise or re-infection. However, understanding that this sometimes cannot be done immediately, we recommend implementing the following remediation steps:

Review Applications: In cases where there are individual programs that may or may not be used maliciously, it is good practice to review the applications found on the host with the administrators and users. If it is determined that there are applications that may not have been installed by a known user(s), the recommendation would be to take appropriate action, as determined by your administrators.
Review Azure Security Center Recommendations: Review and address any security vulnerabilities identified by Security Center, including OS configurations that do not align with the recommended rules for the most hardened version of the OS (for example, do not allow passwords to be saved), machines with missing security updates or without antimalware protection, exposed endpoints, and more.
Defender Scan: Run a full antimalware scan using Microsoft Antimalware or another solution, which can flag potential malware.
Avoid Use of Cracked Software: Using cracked software introduces unwanted risk of malware and other threats associated with pirated software. Microsoft highly recommends not using cracked software and following the legal software policy recommended by your organization.

To learn more about Azure Security Center, see the following:

Azure Security Center detection capabilities
Managing and responding to security alerts in Azure Security Center
Managing security recommendations in Azure Security Center
Security health monitoring in Azure Security Center
Monitoring partner solutions with Azure Security Center
Azure Security Center FAQ

Get the latest Azure security news and information by reading the Azure Security blog.
Source: Azure