How Azure Security Center helps protect your servers with Web Application Firewall

Adversaries have many tools available on the Internet for mounting cyberattacks, and many of these tools enable them to gain access to and control of enterprise IT resources. Meanwhile, security professionals are not always aware of the vulnerabilities built into the IT resources they are tasked to defend. Azure Security Center (ASC) can help bridge this gap.

This blog post is for IT and security professionals interested in using ASC to detect and protect Azure-based resources from SQL injection and other attacks. The goal of this post is to 1) explain how this well-known code injection attack occurs and 2) illustrate how ASC detects and helps remediate it to secure your IT resources.

Tools make SQL injection easy

Servers and applications are easy targets for cybercriminals. One well-known method for attacking data-driven applications is SQL injection, an attack technique in which malicious code is injected for execution, leading to unintended database access. A popular tool attackers can use for malicious injection is sqlmap. With sqlmap, it is easy to discover vulnerable SQL databases and expose their contents: an attacker only needs to provide the appropriate request headers to authenticate and discover the databases, their tables, and even dump the users and hashed passwords. Once the attacker has this data, the next step is to brute force the exposed hashes, another built-in feature of sqlmap, to obtain the plaintext user credentials.
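
To see why injected input is so dangerous, here is a minimal, hypothetical sketch in Python (not taken from the attack walkthrough above): concatenating user input into the SQL text lets a payload rewrite the query, while a parameterized query treats the same payload as plain data.

import sqlite3

# Minimal in-memory example of the injection mechanism.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x1'), ('bob', 'x2')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload is spliced into the SQL text, so the WHERE clause
# always evaluates to true and every row is returned.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(vulnerable)  # leaks all rows

# Safe: the driver passes the payload as data, not SQL, so nothing matches.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # []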

Identifying risk with Azure Security Center

Azure Security Center (ASC), available on every subscription tier of Azure including free and trial subscriptions, can help identify connected IT assets with an HTTP endpoint. Additionally, ASC can automate the deployment of a Web Application Firewall (WAF) to help protect non-compliant resources, while surfacing detected malicious SQL injection attempts. The list of detections points to unprotected web servers where security remediation is needed. ASC scans virtual machines across an Azure subscription and recommends adding a Web Application Firewall to at-risk resources where applicable.

ASC then guides you through the process of deploying and configuring a Web Application Firewall, using either partner or first-party solutions.

Further guidance on tunneling IP traffic through the Web Application Firewall is also provided, adding a layer of protection to the vulnerable web application.

Once the Web Application Firewall has been added to your resources, Azure Security Center gives you visibility into its protections and detections.

Remedial actions with prevention

Configuring the WAF in prevention mode blocks sqlmap from accessing databases and tables it shouldn't have access to; the tool cannot even enumerate the type of database running on the backend, let alone traverse the databases for content. In prevention mode, the WAF blocks suspicious activity, and ASC detects and reports on that activity as it is blocked.

Conclusion

Attack tools such as sqlmap are cheap and readily available, so a "defense in depth" approach is critical to ensure applications are not vulnerable to SQL injection. Having the visibility and control to detect and protect your resources against these attacks is crucial. ASC enables IT and security professionals to scan cloud-based resources for at-risk endpoints, and by following ASC's recommendations, organizations can achieve the detection and protection they need to meet their information security compliance standards. While ASC is available on all Azure subscription tiers, the Standard tier provides deeper insights, actions, and threat protection. If your enterprise requires a deep and granular level of cloud security, activate a free trial of the Standard tier to see how ASC can help your business.

This blog post complements a deeper-dive, step-by-step playbook. To learn more, please read ASC Playbook: Protect Servers with Web App Firewall.

Have questions? Email us at AskCESec@microsoft.com.

– Hayden @hhainsworth
Source: Azure

Help keep your Google Cloud service account keys safe

By Grace Mollison, Cloud Solutions Architect

Google Cloud Platform (GCP) offers robust service account key management capabilities to help ensure that only authorized and properly authenticated entities can access resources running on GCP.

If an application runs entirely on GCP, managing service account keys is easy — they never leave GCP and GCP performs tasks like key rotation automatically. But many applications run on multiple environments — local developer laptops, on-premises databases, and even environments running in other public clouds. In that case, keeping keys safe can be tricky.

Ensuring that account keys aren’t exposed as they move across multiple environments is paramount to maintaining application security. Read on to learn about best practices you can follow when managing keys in a given application environment.

Introducing the service account
When using an application to access Cloud Platform APIs, we recommend you use a service account, an identity whose credentials your application code can use to access other GCP services. You can access a service account from code running on GCP, in your on-premises environment, or even another cloud.

If you’re running your code on GCP, setting up a service account is simple. In this example, we’ll use Google Compute Engine as the target compute environment.

Create a service account
Grant least-privilege permissions to the service account using IAM. For example, if your application only needs to access Google Cloud Storage, then grant the service account the roles/storage.admin role.
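
If you prefer to script these two steps rather than use the console, a minimal sketch using the google-api-python-client library might look like the following; the service account ID, display name, and role are placeholder assumptions, and roles/storage.admin simply mirrors the example above.

from googleapiclient import discovery
import google.auth

credentials, project_id = google.auth.default()

# Step 1: create the service account.
iam = discovery.build("iam", "v1", credentials=credentials)
account = iam.projects().serviceAccounts().create(
    name=f"projects/{project_id}",
    body={"accountId": "my-app-sa", "serviceAccount": {"displayName": "My app"}},
).execute()

# Step 2: grant it a least-privilege role on the project; pick the narrowest
# role your application actually needs.
crm = discovery.build("cloudresourcemanager", "v1", credentials=credentials)
policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
policy["bindings"].append(
    {"role": "roles/storage.admin", "members": [f"serviceAccount:{account['email']}"]}
)
crm.projects().setIamPolicy(resource=project_id, body={"policy": policy}).execute()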

Now that you have a service account, you can launch instances to run from it. (Note: You can also temporarily stop an existing instance and restart it with an alternative service account).

Next, install the client library for the language in which your application is written. (You can also use the SDK, but the client libraries are the most straightforward and recommended approach.) With this in place, your application can use the service account credentials to authenticate calls made from the instance. You don't need to download any keys because you are using a Compute Engine instance, and we automatically create and rotate the keys.
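
For example, assuming the google-cloud-storage client library is installed and using a placeholder bucket name, code running on the instance picks up the attached service account automatically:

from google.cloud import storage

# On a Compute Engine instance, the client library obtains credentials from the
# instance's attached service account automatically; no key file is involved.
client = storage.Client()
bucket = client.bucket("example-bucket")  # placeholder bucket name
for blob in bucket.list_blobs():
    print(blob.name)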

Protecting service account keys outside GCP
If your application is running outside GCP, follow the steps outlined above, but install the client library on the destination virtual or physical machine. When creating the service account, make sure you follow the principle of least privilege. This is good practice in all cases, but it becomes even more important when you download credentials, because GCP no longer manages the key, increasing the risk of it being inadvertently exposed.

In addition, you’ll need to create a new key pair for the service account, and download the private key (which is not retained by Google). Note that with external keys, you are responsible for security of the private key and other management operations such as key rotation.

Applications running outside GCP need to use the downloaded key to be authorized to call Google Cloud APIs. The Google API client libraries facilitate this: they use Application Default Credentials as a simple way to obtain authorization credentials. When running an application outside of GCP, you can authenticate as the service account for which the key was generated by pointing the GOOGLE_APPLICATION_CREDENTIALS environment variable to the location of the downloaded key.
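
As a minimal sketch (the key path and bucket listing are placeholders), the flow outside GCP looks like this:

import os
import google.auth
from google.cloud import storage

# Path is a placeholder; normally you would export this variable in your shell
# rather than set it in code.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/downloaded-key.json"

# Application Default Credentials finds the key file via the environment
# variable and returns credentials for the service account it belongs to.
credentials, project_id = google.auth.default()

client = storage.Client(project=project_id, credentials=credentials)
print([bucket.name for bucket in client.list_buckets()])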

Best practices when downloading service account keys
Now that you have a key that can gain access to GCP resources, you need to manage it appropriately. The remainder of this post focuses on best practices to avoid exposing keys outside of their intended scope of use:

If you’ve downloaded the key for local development, make sure it is not granted access to production resources.
Rotate keys using the following IAM Service Account API methods:

serviceAccounts.keys.create()
Replace old key with new key
serviceAccounts.keys.delete()

Consider implementing a daily key rotation process and provide developers with a Cloud Storage bucket from which they can download the new key every day.
Audit service accounts and keys using either the serviceAccounts.keys.list() method or the Logs Viewer page in the console.
Restrict who is granted the Service Account Actor and Service Account User roles for a service account, as anyone with these roles can act as the service account and access all the resources it has access to.
Always use the client libraries and the GOOGLE_APPLICATION_CREDENTIALS environment variable for local development.
Prevent developers from committing keys to external source code repositories.
And finally, regularly scan external repositories for keys and take remedial action if any are located.

Now let’s look at ways to implement some of these best practices.

Key rotation
Keyrotator is a simple CLI tool written in Python that you can use as is, or as the basis for a service account key rotation process. Run it as a cron job on an admin instance, say at midnight, and write the new key to Cloud Storage for developers to download in the morning.
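
If you would rather script the rotation yourself than use keyrotator, a minimal sketch against the IAM API could look like the following; the service account email mirrors the placeholder JSON key shown later in this post, and how you distribute and retire keys is an assumption you should adapt to your own process.

from googleapiclient import discovery
import google.auth

# Placeholder service account, matching the sample JSON key further below.
SA_RESOURCE = "projects/-/serviceAccounts/keyname@your-project-id.iam.gserviceaccount.com"

credentials, _ = google.auth.default()
iam = discovery.build("iam", "v1", credentials=credentials)
keys = iam.projects().serviceAccounts().keys()

# Create a new key; the response contains the key material, base64-encoded.
new_key = keys.create(name=SA_RESOURCE, body={}).execute()

# Distribute new_key["privateKeyData"] here, for example by writing it to the
# shared Cloud Storage bucket, before removing the keys it replaces.

# Delete the remaining user-managed keys so only the new one stays valid.
for key in keys.list(name=SA_RESOURCE, keyTypes="USER_MANAGED").execute().get("keys", []):
    if key["name"] != new_key["name"]:
        keys.delete(name=key["name"]).execute()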

It is essential to control access to the Cloud Storage bucket that contains the keys. Here’s how:

Create a dedicated project for shared resources.
Create a bucket in the dedicated project; do NOT make it publicly accessible.
Create a group for the developers who need to download the new daily key.
Grant read access to the bucket using Cloud IAM by granting the storage.objectViewer role to your developer group for the project with the storage bucket.
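
As a minimal sketch of that last step (project, bucket, and group names are placeholders), granting the developer group read access to the bucket can be scripted with the Cloud Storage client library:

from google.cloud import storage

client = storage.Client(project="shared-resources-project")   # dedicated project
bucket = client.bucket("daily-service-account-keys")           # non-public bucket

# Add the developer group as an object viewer on the bucket.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": {"group:key-downloaders@example.com"}}
)
bucket.set_iam_policy(policy)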

If you wish to implement stronger controls, use the Google Cloud Key Management Service (Cloud KMS) to encrypt the secrets that you store in Cloud Storage.

Prevent committing keys to external source code repositories
You should not need to keep any keys alongside your code, but accidents happen and keys may inadvertently get pushed out with it.

One way to avoid this is to not use external repositories at all and to put processes in place to prevent their use. GCP provides private Git repositories for this use case.

You can also put preventive measures in place to stop keys from being committed to your git repo. One open-source tool you can use is git-secrets. When installed, it is configured as a git hook and runs automatically whenever you run the 'git commit' command.

You need to configure git-secrets to check for patterns that match service account keys, which is fairly straightforward.

Here is a service account private key when downloaded as a JSON file:

{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "randomsetofalphanumericcharacters",
  "private_key": "-----BEGIN PRIVATE KEY-----\nthisiswhereyourprivatekeyis\n-----END PRIVATE KEY-----\n",
  "client_email": "keyname@your-project-id.iam.gserviceaccount.com",
  "client_id": "numberhere",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/keyname%40your-project-id.iam.gserviceaccount.com"
}

To locate any service account keys, look for patterns that match key field names such as 'private_key_id' and 'private_key'. Then, to locate any service account files in the local git folder, add the following registered patterns:

git secrets --add 'private_key'
git secrets --add 'private_key_id'

Now, when you run 'git commit' and git-secrets detects one of these patterns, you will receive an error message and the commit will be blocked unless you take mitigating action.

This screenshot shows a (now deleted) key to illustrate what developers see when they try to commit files that may contain private details.

Scan external repositories for keys
To supplement git-secrets, you can also run the open-source tool trufflehog. Trufflehog searches a repository's history for secrets, using entropy analysis (Shannon entropy) to find any keys that may have been uploaded.

Conclusion
In this post, we’ve shown you how to help secure service account keys, whether you are using them to authenticate applications running exclusively on GCP, in your local environment or in other clouds. Follow these best practices to avoid accidentally revealing keys and to control who can access your application resources. To learn more about authentication and authorization on GCP, check out our Authentication Overview.

Source: Google Cloud Platform

Build custom video AI workflows with Video Indexer and Logic Apps

With the Video Indexer connector for Logic Apps, you can now set up custom workflows connecting your most used apps with Video Indexer to further automate the process of extracting deep insights from your videos.

In this blog, I will walk you through an example of using the Video Indexer connector in a Logic App to set up a workflow where whenever a new video file is created in a specific folder of your OneDrive, the video is automatically uploaded and indexed. Once completed, the insights of the newly indexed video will be stored as a JSON file in the designated folder of your OneDrive.

Limitations to note

Currently, there is a 50 MB file size limit for the OneDrive and other storage connector triggers. The Video Indexer connector allows you to upload a video either from the file content provided by a storage connector or from a shared access signature URL. Although there is currently no way to get a URL to the video from storage connectors, we are in the process of adding this feature for OneDrive, OneDrive for Business, and Azure Storage. Once it is implemented, we will be able to work with videos larger than 50 MB; for now, we have to work within the limit.

Setting up the Logic App

To begin, log into your Azure Portal and create a new Logic App. You can follow the tutorial to learn how to create and deploy a new Logic App.

Once you have created the Logic App, go to the Logic Apps Designer and select Blank Logic App.


The first thing we will need is a “Trigger” that will fire off an event when a new file has been created in your OneDrive folder for videos.

In the search bar for connectors and triggers, search for "OneDrive". You will see options for OneDrive (consumer) and OneDrive for Business. You can use either, depending on which account you have, because the steps are similar. In this tutorial, I am using OneDrive.

Click on the OneDrive connector. This will show you all of the triggers available for OneDrive.

Select "When a file is created". This will fire the trigger in the Logic App each time a new file is dropped into the designated OneDrive folder. Once you have selected it, you will be prompted to sign in to your OneDrive account.

After you have signed in, you will see the trigger and its different fields. For the Folder field, click on the folder icon and navigate to your folder for videos. I have selected a folder for videos on my OneDrive called “Video”.  You can choose any folder that is appropriate or create a new folder specific to your own workflow. It is important to note that this folder should only have video or audio files. Any other files will result in an upload error in the Video Indexer connector.

You can also set how often you want the trigger to check whether a file has been created in the specified folder. Under the “How often do you want to check for items?” section, I have set Frequency to Minute and Interval to 3 to have my trigger check every 3 minutes.

Next, you will need to set an action that uploads the video that has been created in your OneDrive folder to your Video Indexer portal. Click Next Step and select Add an action.

Search for "Video Indexer" and select the Video Indexer connector. You should see the different actions listed out. We currently do not have any triggers for Video Indexer; triggers will come later, when support for WebHooks is added to the Video Indexer API.


You should see two options for uploading a video to your Video Indexer portal. One is called Upload video and index and allows you to upload a video using file content data. The other is called Upload video and index (using a URL) and allows you to upload a video using a URL. Both options automatically index the video upon upload.

In this tutorial we will be uploading the video using file content data, so select Upload video and index.

You should be prompted to create a connection using your Video Indexer API key. Enter a name for the connection as well as your API key. You can follow the tutorial to learn how to subscribe to the Video Indexer API and how to access your API key.

Once the connection is created, the Upload video and index action should open. If you click on any of the fields, you should see response elements from the OneDrive trigger. For the File Content field, select the File content response element. For Video Name, you can select the File name response element or type any name you want. Set the privacy as you want; here, I have set privacy to "Private".

Upon clicking Show advanced options, you will see many more fields that you can fill out to provide more information on your video. I will be leaving them blank here because they are not required fields.

The Upload video and index action returns the ID of the video upon upload; however, that does not mean the indexing has completed. For this, you need to add a check that only lets the Logic App move forward once the video has been fully processed. Select New Step and then More, and then select Add a do until.

You should now see an Until loop. Within the Until loop, select Add an action. Search for the Video Indexer connector again and select the action Get processing state. For the Video Id field, select the Video Id response element from Upload video and index.


For the first "Choose a value" field in the Until loop condition, select the State response element from Get processing state. In the second "Choose a value" field, type in "Processed". The State being "Processed" indicates that the indexing of the video is complete.

Within the Until loop and after the Get processing state action, select Add an action. Search for and select the Delay action (it is part of the Schedule connector). You will need to set the Count and Unit fields to determine how often to check whether the State is "Processed". Here, I have set Count to "3" and Unit to "Minute" to check every 3 minutes.
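
Outside of Logic Apps, this Until/Delay combination is just a polling loop. Here is a minimal, hypothetical sketch in Python: the endpoint URL, auth header, and response field are placeholders standing in for the Get processing state call, not the actual Video Indexer API surface, and only illustrate the pattern.

import time
import requests

# Hypothetical placeholders; take the real URL, header, and fields from your
# Video Indexer API subscription.
STATE_URL = "https://api.example.com/videos/{video_id}/state"
API_KEY = "<your-video-indexer-api-key>"
POLL_INTERVAL_SECONDS = 3 * 60  # mirrors the 3-minute Delay action

def wait_until_processed(video_id):
    """Poll the processing state until the video has been fully indexed."""
    while True:
        response = requests.get(
            STATE_URL.format(video_id=video_id),
            headers={"Ocp-Apim-Subscription-Key": API_KEY},
        )
        response.raise_for_status()
        state = response.json().get("state")   # assumed field name
        if state == "Processed":               # the same condition the Until loop checks
            return
        time.sleep(POLL_INTERVAL_SECONDS)      # the same wait the Delay action adds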

The next step is to obtain the insights of the newly processed video. Outside of the Until loop, select New Step and search for the Video Indexer connector. Select the Get video breakdown action. For the Video Id field, select the Video Id response element from Upload video and index. This action will give you all of the insights of the video.

Now that you have the insights of the video through Get video breakdown, you will create a file with the insights and store it in an appropriate folder of your OneDrive.

Select Add an action and search for the OneDrive connector. Select the action Create file. For the Folder path field, click on the folder icon and navigate to the appropriate folder for storing the insights of your video. I chose my folder called “Insights”.

For the File name field, type or select a name for the new text file. Here, I selected the Name response element from Get video breakdown and appended "Insights". For the File content field, select Summarized Insights or whichever specific response element from Get video breakdown you want to store. Learn more about the response elements.

Save your logic app, and you are done! You should now test the logic app.

Testing the Logic App

Start by selecting Run. Then, to trigger the logic app, upload a video file onto the OneDrive video folder that you specified in the trigger. As mentioned before, there is a 50 MB file size limit for the OneDrive trigger, so you will need to select your video file appropriately. 

You can view the run details of your logic app under the Run History section of its Overview page.

You are now ready to test out many different combinations of workflows using the Video Indexer connector to find what works best to make your processes automated and more efficient.

You can create custom workflows that integrate live and on demand workflows in Media Services with Video Indexer using samples from the Media Services GitHub site and the Video Indexer connector as long as your video files are within the 50 MB limit for now. You can also create a Logic App to push the insights from Video Indexer into systems like Cosmos DB and use Azure Search to query across the metadata or to join the insights to your own custom metadata.
Source: Azure

Artificial Intelligence tunes Azure SQL Databases

Automatic tuning

To stay competitive, today's businesses need to focus on the core competencies that directly deliver customer value, while relying on their cloud provider to offer an affordable, reliable, and easy-to-use computing infrastructure that scales with their needs. In the world of cloud services, where many aspects of running the application stack are delegated to the cloud platform, using artificial intelligence to manage resources is a necessity when dealing with hundreds of thousands of resources. Azure SQL Database is continuously innovating by building artificial intelligence into the database engine to improve performance, reduce resource usage, and simplify management.

The most prominent use of artificial intelligence is the automatic tuning feature, which has been globally available since January 2016. Automatic tuning uses artificial intelligence to continuously monitor database workload patterns and recognize opportunities to improve database performance. Once confidence is built that a certain tuning action would improve the performance of a database, the Azure SQL Database service automatically applies the tuning action in a safe and managed fashion. The service monitors each tuning action, and the performance benefits are reported to the database owners. In the infrequent case of a performance regression, the service quickly reverts the tuning action. Click here to read more about automatic tuning. In this blog, I want to share a few examples of how Azure SQL Database customers have benefited from the automatic tuning feature.

Tailored indexes for each of 28K databases

SnelStart is a company from the Netherlands that uses Azure and Azure SQL Database to run their software as a service. Over the last few years, SnelStart has worked closely with the SQL Server product team to leverage the Azure SQL Database platform to improve performance and reduce DevOps costs. In 2017, SnelStart received the Microsoft Country Partner Netherlands award, recognizing their heavy investment in Azure and collaboration with Microsoft.

SnelStart provides an invoicing and bookkeeping application to small and medium-sized businesses. By moving from desktop software to a hybrid software-as-a-service offering built on Azure, SnelStart has drastically decreased time to market, increased feature velocity, and met new demands from their customers. By using the Azure SQL Database platform, SnelStart became a SaaS provider without incurring the major IT overhead that an infrastructure-as-a-service solution requires.

SnelStart uses a database-per-tenant architecture. A new database is provided for each business administration, and each database starts off with the same schema. However, each of these thousands of customers has specific scenarios and specific queries. Before automatic tuning, it was infeasible to tune every database to its specific usage scenario. The result was over-indexing from trying to optimize for every usage scenario in one fixed schema. Individual databases did not get the attention they needed to be tuned, which resulted in less-than-optimal performance for each database workload. Now, using automatic tuning, all of this is history. SnelStart now has about 28,000 databases, and automatic tuning takes care of them all. Automatic tuning focuses on each database individually, monitors its workload pattern, and applies tuning recommendations to each individual database based on its unique workload.

These recommendations are applied safely by choosing a time when the database is not highly active. All automatic tuning actions are non-blocking, and the database can be fully used before, during, and after each tuning action. Over two months, SnelStart gradually enabled automatic tuning for their databases. During that period, automatic tuning executed 262 tuning operations on 210 databases, resulting in improved performance for 346 unique queries across these databases. The following chart shows the rollout of automatic tuning across the SnelStart database fleet.

By enabling automatic tuning on their databases, SnelStart got a virtual employee focused on optimizing database performance. In SnelStart's case, this virtual employee did a great job and optimized an average of ~3.5 databases per day. SnelStart saved a lot of time and was able to focus on improvements to their core value proposition instead of on database performance.

"Using automated index tuning, we can further maximize the performance of our solution for every individual customer." – Henry Been, Software Architect at SnelStart

Managed index clean-up

AIMS360 is a cloud-based service provider for fashion businesses that empowers fashion labels to manage and grow their business by giving them more control of and visibility into their operations. Additionally, AIMS360 gives its customers back time to focus on fashion instead of the processes around their business. AIMS360 has been in this business for over thirty years, and working with their software is taught in fashion-related schools throughout the United States.

Each fashion business that buys the AIMS360 service gets its own database, and AIMS360 has thousands of databases. The database schema for each database is identical and has evolved over time as new features and capabilities were added to the product. However, trying to optimize the performance of each workflow in the application left the databases with duplicated indexes. SQL Server allows duplicated indexes, and once they exist, every related update must also update the duplicates, resulting in unneeded use of database resources. Over-indexing is a widespread problem that exists on large numbers of databases. The cause is different people working on the same database without the time to analyze or track what was done previously on the database. Looking at the automatic tuning internal statistics, our team was surprised to see that there are twice as many drop-duplicate-index recommendations as create-index recommendations.

AIMS360 enabled automatic tuning across all their databases to simply take care of this duplicate index problem. Since enabling automatic tuning, the SQL Database service has executed 3345 tuning actions on 1410 unique databases, improving 1730 unique queries across these databases. By choosing the right time to drop duplicated indexes, automatic tuning got the problem safely out of the way: in the background, over a couple of days, it dropped all duplicated indexes. Automatic tuning takes care not to put too much pressure on databases or elastic pools. When multiple tuning actions need to be executed on a single database or within a single elastic pool, these actions are queued and executed with safe parallelism.

"Using the automatic tuning feature, our team was able to quickly and efficiently fine tune our databases. Since we have over 1400 databases, traditional methods of tuning would be very labor intensive. However, with automatic tuning we were able to analyze 1000's of queries and get them tuned instantly." – Shahin Kohan, President of AIMS360

Reducing resource usage for thousands of applications

Microsoft IT uses Azure SQL Database heavily for thousands of applications that support various internal functions at Microsoft. The Microsoft IT footprint in Azure SQL Database is in the thousands of databases. These workloads are diverse, spanning from light, sporadic usage to enterprise-grade workloads using resources in the higher premium tiers. Such a variety of applications is not easy to keep an eye on. Microsoft is enabling automatic tuning on all internal workloads, including Microsoft IT, to reduce DevOps cost and improve performance across the applications that rely on Azure SQL Database. The same problems are present in any enterprise IT department around the world, and all of them share a set of common goals: reduce the total cost of ownership, reduce DevOps cost, and improve performance. Automatic tuning, by continuously monitoring and tuning all the databases in parallel, is constantly making progress towards these goals.

Microsoft IT started using automatic tuning as soon as it became available for preview, but the stronger push to enable it for all databases started in Q2 2017. Gradually rolling out automatic tuning to different groups within Microsoft IT has enabled us to carefully measure the benefits achieved by each group. Particular success was achieved within the Microsoft IT Finance group. In the following chart, you can see the number of databases tuned each day that belong to the Microsoft IT Finance group. The spike on May 3rd was caused by enabling automatic tuning on all the databases that belong to this group.

After every tuning action, Azure SQL Database measures the performance improvement by comparing the resource usage of queries before and after the tuning action. Verification lasts until a statistically significant sample has been collected so that improvements in performance can be accurately measured. During this verification, the frequency of the improved queries is measured as well. Using this information, we can calculate the number of CPU hours saved by tuning actions. The preceding chart shows that the databases belonging to the Microsoft IT Finance group now use roughly 250 hours less CPU than before enabling automatic tuning. In addition to improving performance by reducing query duration, this reduction in resource usage directly translates into a reduction in price: Microsoft IT Finance can now decrease the pricing tier of certain databases while keeping the improved performance. You can find all the details regarding this case in this video.

Summary

Azure SQL Database customers are already relying heavily on automatic tuning to achieve optimal performance. In these customer stories, you see how different applications benefit from automatic tuning, from optimizing similar workloads for SaaS applications to optimizing thousands of different applications for enterprises. Additionally, automatic tuning helps you finally get rid of all those unused and duplicated indexes without any effort! Enable automatic tuning on your database and let the platform do the work for you. Click here to read more about how to enable automatic tuning.
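
If you want to script this rather than use the Azure portal, here is a minimal sketch (connection details are placeholders) that enables the index-related automatic tuning options with T-SQL via pyodbc and then lists the tuning recommendations the service has produced:

import pyodbc

# Connection details are placeholders; substitute your server, database, and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=yourserver.database.windows.net;"
    "DATABASE=yourdatabase;UID=youruser;PWD=yourpassword",
    autocommit=True,  # ALTER DATABASE cannot run inside a user transaction
)

# Enable the index-related automatic tuning options on the current database.
conn.execute(
    "ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX = ON, DROP_INDEX = ON)"
)

# Review the tuning recommendations the service has produced and their state.
for row in conn.execute(
    "SELECT name, type, reason, score, state FROM sys.dm_db_tuning_recommendations"
):
    print(row.name, row.type, row.reason, row.score, row.state)
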
Source: Azure