New connectors added to Azure Data Factory empowering richer insights

Data is essential to your business. The ability to unlock business insights more efficiently can be a key competitive advantage for the enterprise. As data grows in volume, variety, and velocity, organizations need to bring together a continuously increasing set of diverse datasets across silos in order to perform advanced analytics and uncover business opportunities. The first challenge in building such big data analytics solutions is how to connect to and extract data from a broad variety of data stores. Azure Data Factory (ADF) is a fully managed data integration service for analytic workloads in Azure that empowers you to copy data from more than 80 data sources with a simple drag-and-drop experience. With its flexible control flow, rich monitoring, and CI/CD capabilities, you can also operationalize and manage your ETL/ELT flows to meet your SLAs.

Today, we are excited to announce the release of a set of new ADF connectors which enable more scenarios and possibilities for your analytic workloads. For example, you can now:

Ingest data from Google Cloud Storage into Azure Data Lake Storage Gen2 and process it using Azure Databricks, together with data coming from other sources.
Bring data into Azure from any S3-compatible storage, such as data you consume from third-party data vendors.
Copy data from MongoDB and other sources to the Azure Cosmos DB MongoDB API for application consumption.
Retrieve data from any RESTful endpoint as an extensible point to reach hundreds of SaaS applications.

For more information, see the following updates on new connectors and additional features for existing connectors.

Connector updates

Azure Cosmos DB MongoDB API

You can now copy data to and from Azure Cosmos DB MongoDB API, in addition to the already supported SQL API. For writing into Azure Cosmos DB specifically, the connector sink is built on top of the Azure Cosmos DB bulk executor library to provide the best performance. Learn more about Azure Cosmos DB MongoDB API.

Amazon S3

ADF now supports custom S3 endpoint configuration in the Amazon S3 connector. With this, you can copy data from any S3-compatible storage provider using the connector and are no longer limited to the official Amazon S3 service. Learn more about the Amazon S3 connector.

Google Cloud Storage

As Google Cloud Storage provides S3-compatible interoperability, you can now copy data from Google Cloud Storage. This leverages the S3 connector with Google Cloud Storage’s corresponding S3 endpoint. Learn more about Google Cloud Storage connector.
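To make the interoperability concrete, here is a minimal, hypothetical sketch (outside of ADF) that reads a Google Cloud Storage bucket through its S3-compatible endpoint with boto3; it assumes you have created HMAC interoperability keys for the project, and the bucket and prefix names are placeholders. In ADF, the equivalent is simply pointing the Amazon S3 connector at the same endpoint.

```python
# Minimal sketch: reading Google Cloud Storage through its S3-compatible
# (interoperability) endpoint with boto3. Assumes HMAC keys were created
# for the GCS project; bucket and prefix names below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.googleapis.com",  # GCS S3-compatible endpoint
    aws_access_key_id="<GCS_HMAC_ACCESS_KEY>",
    aws_secret_access_key="<GCS_HMAC_SECRET>",
)

# List objects exactly as you would against Amazon S3.
response = s3.list_objects_v2(Bucket="my-gcs-bucket", Prefix="exports/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```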

MongoDB

To address feedback on MongoDB feature coverage, performance, and scalability, ADF has released a new version of the MongoDB connector. It provides comprehensive native MongoDB support, including generic MongoDB connection strings with connection options, native MongoDB queries, extraction of hierarchical data, and more. Learn more about the MongoDB connector.

Azure Database for MariaDB

You can now copy data from Azure Database for MariaDB. Learn more about the Azure Database for MariaDB connector.

Generic REST

You can now retrieve data from various RESTful services and apps. In addition to the generic HTTP connector, ADF has released a more targeted REST connector. To address the two most common asks we've received, this REST connector supports Azure Active Directory (AAD) service principal and Managed Identity for Azure resources (MSI) authentication, as well as pagination rules. Learn more about the REST connector.

Generic OData

ADF now supports AAD service principal and Managed Identity for Azure resources (MSI) authentication when copying data from OData endpoints. Learn more about the OData connector.

Dynamics AX (preview)

You can now copy data from Dynamics AX using OData protocol with service principal authentication. This connector also works with Dynamics 365 Finance and Operations (F&O). Learn more about Dynamics AX connector.

You are encouraged to give these additions a try and provide us with feedback. We hope you find them helpful in your scenarios. Please post your questions on the Azure Data Factory forum or share your thoughts with us on the Data Factory feedback site.
Source: Azure

Find out when your virtual machine hardware is degraded with Scheduled Events

One of the benefits of moving to the cloud is that you, our customer, don't need to deal with hardware maintenance and repairs; you can focus your time on your business applications. Azure continuously monitors for hardware that shows signs of degradation or potential failure. When these conditions are detected, Azure will attempt to live migrate your virtual machines (VMs). If live migration isn't possible, Azure will automatically redeploy the VMs to a healthy machine. If you have a disaster recovery setup, which is highly recommended, the impact of this redeployment will be minimal. However, a redeployment to a healthy machine may be problematic for some applications that can't tolerate disruption. We've received feedback that in this situation, when possible, customers prefer to control when the redeployment to a healthy machine occurs.

We introduced Scheduled Events in Azure as a programmatic way to notify your VMs of upcoming maintenance events such as a live migration, redeployment, or reboot, and to act on them. Upon receiving a scheduled event, customers can take actions such as failing over, saving state, draining sessions in the VMs, scheduling a time for manual maintenance, or notifying users. We're excited to announce that Scheduled Events will now be triggered when Azure predicts that hardware issues will require a redeployment to healthy hardware in the near future, and will provide a time window in which Azure will redeploy the VMs to healthy hardware if a live migration was not possible. Customers can initiate the redeployment of their VMs ahead of Azure doing it automatically.

Hardware failure prediction

Azure has taken insights from operating millions of servers in its data centers to identify when hardware health is degrading and, in many cases, predict a failure before it happens. For example, Azure can detect degradation in disk I/O performance on a given node, or detect memory errors, and determine whether this will become fatal.

When Azure detects imminent hardware failure, VMs are proactively live migrated when possible. This should have minimal impact on your workloads; the customer experience is typically a freeze of a few seconds during the final phase. Subscribing to Scheduled Events allows your VM to be notified a few minutes before the live migration process is started. However, there are cases where live migration isn't possible, such as on specialized hardware like the M-series and G-series or on legacy hardware, in which case the VMs would be redeployed to a new instance. Some of our customers have expressed interest in being able to control when to initiate a reallocation from the node and to control the experience during the process. Based on this feedback, we enhanced Scheduled Events to notify you when hardware is detected as unhealthy and to give the time at which the VM will be moved to another machine, provided the hardware does not fail sooner. In many cases there can be multiple days before the hardware fails, and Azure applies mitigations to try to delay the failure. Because the time to fail varies, we recommend customers move off degraded hardware as soon as possible.

How to listen to these Scheduled Events

Your VM must subscribe to Scheduled Events to get events related to maintenance. Watch this video to learn how to programmatically enable and react to Scheduled Events. You can also find code samples of how to listen to Scheduled Events and then approve them once you have done your mitigation.

To listen to hardware-related events, you don't have to do anything different! Hardware-related events are delivered as redeploy events. The NotBefore time, which is the property that gives the time window before the maintenance is performed, could range from a few hours to a few days and can change depending on the severity of the hardware fault. As Azure's estimation of the time to failure improves, the NotBefore window will be adjusted to become more accurate. Note that since you're running on degraded hardware that can fail suddenly, you should initiate a redeployment or approve the scheduled event as soon as possible after triggering the corresponding automated or manual actions. Once you approve the request, your VM will be redeployed to a new physical machine. You can track the completion of the redeployment via Scheduled Events. If you don't approve the scheduled event within the NotBefore time, you will no longer have control of the experience and Azure will redeploy your VM to a healthy machine.
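As a rough sketch of this flow, the following Python snippet, run from inside the VM, polls the Scheduled Events metadata endpoint and approves a pending redeploy event once a placeholder drain step has completed; drain_workload stands in for your own failover or drain logic.

```python
# Minimal sketch: poll Scheduled Events from inside an Azure VM and approve
# a pending Redeploy event. drain_workload() is a placeholder for your own
# failover/drain logic.
import requests

ENDPOINT = "http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01"
HEADERS = {"Metadata": "true"}

def drain_workload(event):
    """Placeholder: save state, drain sessions, notify users, etc."""
    print(f"Draining before {event['EventType']} scheduled NotBefore {event.get('NotBefore')}")

def handle_scheduled_events():
    doc = requests.get(ENDPOINT, headers=HEADERS).json()
    for event in doc.get("Events", []):
        if event["EventType"] == "Redeploy" and event["EventStatus"] == "Scheduled":
            drain_workload(event)
            # Approving the event tells Azure it can start the redeploy now
            # instead of waiting for the NotBefore deadline.
            approval = {"StartRequests": [{"EventId": event["EventId"]}]}
            requests.post(ENDPOINT, headers=HEADERS, json=approval)

if __name__ == "__main__":
    handle_scheduled_events()
```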

Support for hardware degradation information via Scheduled Events is already available worldwide! There are no API changes; this feature is available starting with api-version=2017-08-01.

If you are sensitive to platform maintenance events, I would highly encourage you to build automation by handling Scheduled Events. Try this out and let us know what you think in the comments below.
Source: Azure

Azure Stream Analytics now supports Azure SQL Database as reference data input

Our goal on the Azure Stream Analytics team is to empower developers and make it incredibly easy to leverage the power of Azure to analyze big data in real time. We achieve this by continuously listening to feedback from our customers and shipping features that are delightful to use and serve as tools for tackling complex analytics scenarios. We are excited to share the public preview of Azure SQL Database as a reference data input for Stream Analytics, which is the most requested feature on UserVoice!

Typical scenarios for reference data

Reference data is a dataset that is static or slow changing in nature, which you can correlate with real-time data streams to augment them. Stream Analytics leverages versioning of reference data so that streaming data is augmented with the reference data that was valid at the time the event was generated.

An example scenario would be storing currency exchange rates in Azure SQL Database, regularly updated to reflect market trends, and then converting a stream of billing events in different currencies into a standard currency.

In IoT scenarios, you could have millions of IoT devices emitting a stream of events with critical values such as temperature and pressure being monitored. Using Stream Analytics, you can join this real-time data stream with metadata about each IoT device stored in Azure SQL Database to apply per-device thresholds and metadata.

Easily integrate with Azure SQL Database input

Until today, Azure Blob Storage was the only way to store your reference data. We heard from our customers that Azure SQL Database is a natural place to store datasets that need to be used in correlation with real-time data streams.

Instead of writing your logic and building custom pipelines to transfer data periodically from Azure SQL Database to Azure Blob Storage, Stream Analytics now provides out-of-the-box support for Azure SQL Database as reference data input. We knew providing just this capability alone wouldn’t delight our customers. So, we took it one step further and are providing the ability to automatically refresh your reference dataset periodically. You can easily configure this refresh interval when adding your input to the job. The refresh interval can be as short as one minute.

You might have a complex query to pull reference data from Azure SQL Database. In order to preserve the performance of your Stream Analytics job, we also provide the option to fetch incremental changes from your Azure SQL Database by writing a delta query.

Getting started

You can try using Azure SQL Database as a source of reference data input to your Stream Analytics job today. This feature is available for public preview in all Azure regions. This feature is also available in the latest release of Stream Analytics tools for Visual Studio. We hope you take full advantage of this functionality and are excited to see what you build with Stream Analytics.

Providing feedback and ideas

The Azure Stream Analytics team is highly committed to listening to your feedback. We welcome you to join the conversation and make your voice heard via our UserVoice. You can stay up-to-date on the latest announcements by following us on Twitter @AzureStreaming. You can also reach out to us at askasa@microsoft.com.
Source: Azure

Amazon Corretto is now generally available

Amazon Corretto 8, a no-cost, production-ready, multiplatform distribution of the Open Java Development Kit (OpenJDK) 8, is now generally available for production use. Corretto has been available as a preview since our announcement in November 2018.
Source: aws.amazon.com

Help us shape new Azure migration capabilities: Sign up for early access!

Based on Azure Migrate and Azure Site Recovery usage trends, we know that many of you are well along on your Azure migration journey. We’re now working on the next wave of innovation to further enhance and simplify your migration experience. We have a great opportunity for you to influence and shape product direction through early access to new capabilities.

Our goal is to deliver an integrated, end-to-end migration experience that enables you to discover, assess, and migrate servers to Azure. To that end, we have several new capabilities on our roadmap, including a new user experience with partner tool integration, Hyper-V environment assessment, and server migration enhancements. You are welcome to migrate your workloads to Azure using these new features, and we will back you by providing production support.

If you’d like to be part of this awesome opportunity, please fill out and submit this form as soon as possible. We will review your submission and follow up with onboarding steps, including detailed guidance on how to participate and provide feedback.

Your feedback is extremely valuable in helping us improve our product offerings. We look forward to sharing more about what we’ve been working on and to hearing your input!

Regards,

Azure Migrate Team
Source: Azure

Introducing WebSockets support for App Engine Flexible Environment

Do you have an application that could benefit from being able to stream data from the app to the client with minimal latency, without the client having to poll for updates? Today, we are excited to announce that App Engine Flexible Environment now supports the WebSocket protocol in beta, the first time that App Engine supports a streaming protocol. Many users have been looking forward to this feature, as this capability is useful in a number of scenarios, including:

Real-time event updates, such as sports scores and stock market prices
User notifications, such as software or content updates
Chat applications
Collaborative editing tools
Multiplayer games
Feeds, such as social media and news

WebSockets are available to your App Engine Flexible Environment application with no special setup. Take a look at our documentation to learn more: Python | Java | Node.js.

For clients that don’t support WebSockets, some libraries like socket.io fall back on HTTP long polling. To help you achieve better performance in these cases, we have also added a new “session affinity” setting to app.yaml that allows requests from a single client to be preferentially sent to the same App Engine instance. You should only use session affinity for performance optimization and continue to store application state in a persistent way outside the instance memory, since App Engine instances are all periodically restarted.

Our alpha customers are already using WebSockets in production. Shine is a French provider of mobile banking services and has implemented WebSockets across several parts of its platform.

“We use WebSockets in App Engine Flex to exchange information like banking transactions, user profiles or user metadata between our front-end and back-end. It has worked perfectly for us for several months, was easy to set up and has significantly reduced latency and consumed bandwidth.” – Raphaël Simon, Shine

Support for WebSockets is in beta today and we look forward to making it generally available soon. Check out App Engine and try the new WebSocket protocol today!
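As a minimal illustration for the Python runtime (a hedged sketch, not the official sample from the documentation linked above), the following echo server uses the third-party websockets package and listens on port 8080, the port to which App Engine Flexible Environment routes incoming traffic:

```python
# Minimal sketch of a WebSocket echo server, assuming a recent version of
# the third-party "websockets" package (pip install websockets). App Engine
# Flexible Environment routes incoming traffic to port 8080.
import asyncio
import websockets

async def echo(websocket):
    # Echo every message back to the client until it disconnects.
    async for message in websocket:
        await websocket.send(message)

async def main():
    async with websockets.serve(echo, "0.0.0.0", 8080):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```

A production app would typically serve WebSocket routes alongside regular HTTP routes and pair this with the HTTP long-polling fallback described above.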
Source: Google Cloud Platform

Completers in Azure PowerShell

Since version 3.0, PowerShell has supported applying argument completers to cmdlet parameters. These argument completers allow you to tab through a set of values that are valid for the parameter. However, unlike ValidateSet, which enforces that only the provided values are passed to the cmdlet, argument completers do not restrict the values that can be passed to a string parameter. Additionally, argument completers can return either a static or a dynamic set of strings. Using this feature, we have added argument completers to the Azure PowerShell modules that allow you to select valid parameter values without having to make additional calls to Azure yourself; the completers make the required calls to Azure to obtain the valid parameter values.

To best capture the functionality of the completers, I have modified the key binding for “Tab” in the examples below to display all the possible values at once. If you want to replicate this setup, simply run: “Set-PSReadLineKeyHandler -Key Tab -Function Complete.”

Location completer

The first completer that we created was the Location completer. Since each resource type has a distinct list of available Azure regions, we wanted to create an easy, quick way to select a valid region when creating a resource. Thus, for every parameter in our modules which accepts an Azure region (which in most cases is called Location), we added an argument completer that returns only the regions in which the resource type can be created. In the example below, you can see the result of pressing tab immediately after -Location for the New-AzResourceGroup cmdlet.

In addition to listing all available regions in which a Resource Group can be created, the Location completer allows you to filter the results by typing the first few characters of the region you are looking for.

Resource Group Name completer

The second completer that we added to the PowerShell modules is the Resource Group Name completer. This completer was applied to all parameters which accept an existing resource group and returns all resource groups in the current subscription. Similar to the Location completer, you can filter the results by typing the first few characters of the resource group before pressing tab.

Resource Name completer

The third completer that we added to the PowerShell modules is the Resource Name completer. This completer returns the list of names of all resources that match the resource type required by the parameter. Additionally, this argument completer will filter by the resource group name if it is already provided in the cmdlet invocation. For example, in the screenshot below, when we tab after typing “Get-AzVM -Name test,” we see all four VMs in the current subscription that start with “test.” Then, when we tab after typing “Get-AzVM -ResourceGroupName maddie1 -Name test,” we only see the two VMs contained in the “maddie1” resource group.

Not only does the Resource Name completer filter by the resource group name, but, for all subresources, it also filters by the parent resources, if they are provided to the cmdlet invocation. The results will be filtered by each of the parent resources provided. In the example below, you can see the results of tab completion over “maddiessqldatabase” for various combinations of parameters being provided.

At the moment, this completer has only been applied to the Compute, Network, KeyVault, and SQL modules. If you enjoy this feature and would like to see it applied to more modules, please let us know by sending us feedback using the Send-Feedback cmdlet.

Resource Id completer

The final completer that we added to the Az modules is the Resource Id completer. This completer returns all resource Ids in the current subscription, filtered to the resource type that the parameter requires. The Resource Id completer allows you to filter the results by typing a few characters, using the '*<characters>*' wildcard pattern. This completer was applied to all parameters in our cmdlets that accept an Azure resource Id.

Try it out

To try out Azure PowerShell for yourself, install our Az module via the PowerShell Gallery. For more information about our new Az module, please check out our Az announcement blog. We look forward to getting your feedback, suggestions or issues via the built-in “Send-Feedback” cmdlet. Alternatively, you can always open an issue in our GitHub repository.
Source: Azure