SMB Version 1 disabled in Azure Gallery Windows operating system images

The Azure security team has recently changed the default behavior of the Windows operating system images available in the Azure gallery. These changes respond to recent concerns over malware that exploits weaknesses in the Server Message Block version 1 (SMB v1) network file sharing protocol. The Petya and WannaCry ransomware attacks are just two examples of malware that have spread through SMB v1 weaknesses.

Because of these security issues, the SMB v1 protocol is now disabled on almost all Windows operating system images in the Azure Gallery. As a result, when you create a new virtual machine in the Azure Virtual Machines service, that virtual machine will have SMB v1 disabled by default. You no longer need to disable the protocol manually, for example with the method shown in the figure below.
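For reference, the manual method inspects the `SMB1` value under `HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters`: a value of 0 means the protocol is disabled, while a nonzero or absent value means it is enabled. A minimal sketch of that check (the helper function name is our own):

```python
def smb1_enabled(reg_value):
    """Interpret the LanmanServer 'SMB1' registry value.

    Windows treats a missing value (None) as enabled by default;
    0 means disabled, any nonzero value means enabled.
    """
    if reg_value is None:
        return True
    return reg_value != 0

# On a Windows VM, the value could be read like this:
# import winreg
# key = winreg.OpenKey(
#     winreg.HKEY_LOCAL_MACHINE,
#     r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters")
# value, _ = winreg.QueryValueEx(key, "SMB1")
# print("SMB v1 enabled:", smb1_enabled(value))
```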

While we expect these changes to cause little or no disruption, there are questions you may want to consider:

What specific Windows operating system images are impacted by this change?
What is your current SMB v1 footprint?
What effect does this change have on your currently running virtual machines?
What about Linux and SMB v1?
What about PaaS images? Are they affected by this change?
What tools are available for you to be alerted when SMB v1 is enabled on your virtual machines? Can Azure Security Center be helpful in this context?

To learn more about this change and these issues, please read Disabling Server Message Block Version 1 (SMB v1) in Azure.
Source: Azure

Stream Processing Changes: #Azure #CosmosDB change feed + Apache Spark

Azure Cosmos DB: Ingestion and storage all-in-one

Azure Cosmos DB is a blazing fast, globally distributed, multi-model database service. Regardless of where your customers are, they can access data stored in Azure Cosmos DB with single-digit-millisecond latencies at the 99th percentile, at a sustained high rate of ingestion. This speed supports using Azure Cosmos DB not only as a sink for stream processing, but also as a source. In a previous blog, we explored the potential of performing real-time machine learning with Apache Spark and Azure Cosmos DB. In this article, we will further explore stream processing of updates to data with the Azure Cosmos DB change feed and Apache Spark.

What is Azure Cosmos DB change feed?

Azure Cosmos DB change feed provides a sorted list of documents within an Azure Cosmos DB collection in the order in which they were modified. This feed can be used to listen for modifications to data within the collection to perform real-time (stream) processing on updates. Changes in Azure Cosmos DB are persisted and can be processed asynchronously, and distributed across one or more consumers for parallel processing. Change feed is enabled at collection creation and is simple to use with the change feed processor library.
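The model described above can be illustrated with a small local simulation (this stands in for the feed and the change feed processor library, which handle this for you in production): documents are emitted in modification order, and work can be fanned out across consumers, for example by hashing each document's partition key.

```python
import hashlib

def change_feed(documents):
    """Yield documents sorted by modification time, like a change feed."""
    return sorted(documents, key=lambda d: d["_ts"])

def assign_consumer(doc, num_consumers):
    """Distribute documents across consumers by hashing the partition key,
    so all changes for one partition go to the same consumer."""
    digest = hashlib.md5(doc["partitionKey"].encode()).hexdigest()
    return int(digest, 16) % num_consumers

docs = [
    {"id": "a", "partitionKey": "user1", "_ts": 30},
    {"id": "b", "partitionKey": "user2", "_ts": 10},
    {"id": "c", "partitionKey": "user1", "_ts": 20},
]

feed = change_feed(docs)  # oldest modification first: b, c, a
assignments = {doc["id"]: assign_consumer(doc, 2) for doc in feed}
```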

Designing your system with Azure Cosmos DB

Traditionally, stream processing implementations first receive a high volume of incoming data into a temporary message queue such as Azure Event Hubs or Apache Kafka. After stream processing the data, a materialized view or aggregate is stored into a persistent, queryable database. In this implementation, we can use the Azure Cosmos DB Spark connector to store Spark output into Azure Cosmos DB for document, graph, or table schemas. This design is great for scenarios where only a portion of the incoming data, or only an aggregate of it, is useful.
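The traditional pattern can be sketched with in-memory stand-ins for the queue and the database: raw events pass through the queue, and only the materialized aggregate is persisted.

```python
from collections import deque

queue = deque()  # stands in for Event Hubs / Kafka
store = {}       # stands in for the persistent, queryable database

# Producers push raw events onto the message queue.
for event in [
    {"user": "alice", "amount": 30},
    {"user": "bob", "amount": 5},
    {"user": "alice", "amount": 12},
]:
    queue.append(event)

# The stream processor drains the queue and materializes an aggregate;
# only the aggregate (total amount per user) survives, not the raw events.
while queue:
    event = queue.popleft()
    store[event["user"]] = store.get(event["user"], 0) + event["amount"]
```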

Figure 1: Traditional stream processing model

Let’s consider the scenario of credit card fraud detection. All incoming data (new transactions) needs to be persisted as soon as it is received. As new data comes in, we want to incrementally apply a machine learning classifier to detect fraudulent behavior.

Figure 2: Detecting credit card fraud

In this scenario, Azure Cosmos DB is a great choice for directly ingesting all the data from new transactions because of its unique ability to support a sustained high rate of ingestion while durably persisting and synchronously indexing the raw records, enabling these records to be served back out with low-latency, rich queries. From the Azure Cosmos DB change feed, you can connect compute engines such as Apache Storm, Apache Spark, or Apache Hadoop to perform stream or batch processing. After processing, the materialized aggregates or processed data can be stored back into Azure Cosmos DB permanently for future querying.
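With the azure-cosmosdb-spark connector, reading from the change feed is driven by the read configuration passed to Spark. The sketch below uses placeholder endpoint, key, database, and collection values, and the option keys reflect our understanding of that connector's configuration at the time of writing; check the connector's documentation for the current names.

```python
# Read configuration for the azure-cosmosdb-spark connector.
# Endpoint, Masterkey, Database, and Collection are placeholders.
read_config = {
    "Endpoint": "https://<your-account>.documents.azure.com:443/",
    "Masterkey": "<your-master-key>",
    "Database": "transactions",
    "Collection": "payments",
    "ReadChangeFeed": "true",                    # read the change feed, not the collection
    "ChangeFeedQueryName": "fraud-detection",    # identifies this consumer's checkpoint
    "ChangeFeedStartFromTheBeginning": "false",  # only process new changes
}

# In a Spark session with the connector on the classpath, you would then run:
# changes = (spark.read
#            .format("com.microsoft.azure.cosmosdb.spark")
#            .options(**read_config)
#            .load())
```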

Figure 3: Azure Cosmos DB sink and source

You can learn more about change feed in the Working with the change feed support in Azure Cosmos DB article, and by trying the change feed + Spark example on GitHub. If you need any help or have questions or feedback, please reach out to us on the developer forums on Stack Overflow. Stay up-to-date on the latest Azure Cosmos DB news and features by following us on Twitter @AzureCosmosDB.
Source: Azure

Announcing Azure Blob storage events preview

In conjunction with the recently announced Azure Event Grid preview, we are pleased to announce the preview of Azure Blob storage events.

Azure Blob storage is a massively scalable object storage platform. With exabytes of capacity, it easily and cost-effectively stores hundreds to billions of objects, in hot or cool tiers, and supports any type of data: images, videos, audio, documents, and more.

Blob storage events allow applications to react to the creation and deletion of blobs without the need for complicated code and expensive, inefficient polling services. Instead, events are pushed directly to event handlers such as Azure Functions, Azure Logic Apps, or your own custom HTTP listener.
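A minimal sketch of a custom handler's logic, assuming the Event Grid event schema: Event Grid first sends a subscription validation event whose validation code must be echoed back, and thereafter delivers batches of events such as `Microsoft.Storage.BlobCreated`. (The function name is ours; HTTP plumbing and JSON parsing are omitted.)

```python
def handle_event_grid_post(events):
    """Process a batch of Event Grid events (already parsed from JSON).

    Returns the validation handshake response if the batch contains a
    validation event, otherwise a list of newly created blob URLs.
    """
    created = []
    for event in events:
        if event["eventType"] == "Microsoft.EventGrid.SubscriptionValidationEvent":
            # Echo the validation code back to complete the handshake.
            return {"validationResponse": event["data"]["validationCode"]}
        if event["eventType"] == "Microsoft.Storage.BlobCreated":
            created.append(event["data"]["url"])
    return created

sample = [{
    "eventType": "Microsoft.Storage.BlobCreated",
    "subject": "/blobServices/default/containers/photos/blobs/cat.png",
    "data": {"url": "https://myaccount.blob.core.windows.net/photos/cat.png"},
}]
```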

Blob storage events are made possible by Azure Event Grid, which enables event-based programming in Azure by providing reliable distribution of events for all services in Azure and third-party services. With publisher/subscriber semantics, event sources like Azure Blob storage push events to Event Grid, which routes, filters, and reliably distributes them to subscribers, with WebHooks, queues, and Event Hubs as endpoints. Event Grid is baked into the Azure ecosystem, so connecting your storage account to your event handler is as easy as point and click. Azure Event Grid has a pay-per-event pricing model, so you only pay for what you use. Additionally, to help you get started quickly, the first 100,000 operations per month are free. Beyond that, pricing is $0.30 per million operations during the preview. More details can be found on the pricing page.
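Under that model, the preview charge for a month works out as follows (a sketch of the arithmetic described above, not an official billing formula):

```python
def monthly_event_grid_cost(operations, free_tier=100_000, rate_per_million=0.30):
    """Preview pricing: first 100,000 operations free, then $0.30 per million."""
    billable = max(0, operations - free_tier)
    return billable * rate_per_million / 1_000_000

# 2.1 million operations in a month: 2,000,000 billable -> $0.60
cost = monthly_event_grid_cost(2_100_000)
```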

The preview of Blob storage events is available now for Blob storage accounts in the West Central US region, with additional regions coming soon. To learn more, and to sign up for the preview, see Azure Blob storage events.

We would love to hear more about your experiences with the preview and get your feedback! Are there other storage events you would like to see made available? Drop us a line at azurestorageevents@microsoft.com and let us know.

Happy eventing!
Source: Azure

How to use GPUs in OpenShift 3.6 (Still Alpha)

These instructions show how to run general-purpose compute workloads on graphics processing units (GPUs) using OpenShift 3.6's GPU support in Kubernetes. GPU support in Kubernetes remains in alpha through the next several releases; the Resource Management Working Group is driving progress toward stabilizing these interfaces.
Source: OpenShift

Using Stackdriver Logging for visual effects and animation pipelines: new tutorial

By Joseph Holley and Adrian Graham, Cloud Solutions Architects

Capturing logs in a visual effects (VFX), animation, or games pipeline is useful for troubleshooting automated tools, keeping track of process runtimes and machine load, and capturing historical data over the life of a production.

But collecting and making sense of these logs can be tricky, especially if you’re working on the same project from multiple locations, or have limited resources on which to collect the logs themselves. 

Collecting logs in the cloud enables you to understand this data by mining it with tools that deliver speed and power not possible from an on-premises logging server. Storage and data management are simple in the cloud and not bound by physical hardware. Additionally, you can access cloud logging resources globally: visual effects or animation facilities can access the same logging database regardless of physical location, making international productions far simpler to manage and understand.

We recently put together a tutorial that shows you how to integrate Stackdriver Logging, our hosted log management and analysis service for data running on Google Cloud Platform (GCP) and AWS, into your own visual effects or animation pipeline. It also shows some key storage strategies and how to migrate this data to BigQuery and other Google Cloud tools. Check it out, and let us know what other Google Cloud tools you’d like to learn how to use in your visual effects or animation pipeline. You can reach us on Twitter at @gcpjoe or @agrahamvfx.
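As one way to apply this, a pipeline wrapper might emit one structured log entry per task so runtimes and hosts can be queried later. The field names below are our own illustration, and the `google-cloud-logging` client call is shown for context only:

```python
import time

def render_log_entry(tool, shot, host, runtime_seconds):
    """Build a structured payload for one pipeline task (field names are ours)."""
    return {
        "tool": tool,
        "shot": shot,
        "host": host,
        "runtime_seconds": runtime_seconds,
        "timestamp": time.time(),
    }

entry = render_log_entry("nuke", "seq010_sh020", "render-node-07", 184.2)

# With the google-cloud-logging client library installed and credentials set:
# from google.cloud import logging
# client = logging.Client()
# client.logger("vfx-pipeline").log_struct(entry)
```

Structured (dictionary) payloads, rather than plain strings, are what make later filtering and export to BigQuery straightforward.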

Source: Google Cloud Platform