Integral Analytics moves to Azure SQL Data Warehouse from AWS for high performance

With the growing focus on new forms of energy and rapid technological progress in solar, the world in which traditional power utilities operate has been changing significantly. Integral Analytics helps power utilities stay on top of this change by helping them tap into additional sources of data, such as econometrics and customer-owned power assets, to learn how customers use power today and how that usage is likely to change over time. This helps the power utilities plan the right level of resources and position themselves well for the future. For example, by fusing econometric data with traditional load data, utilities can gain valuable insight into the future growth of electrical load for better planning.

A key piece of the solution that Integral Analytics delivers involves extracting intelligent insights from huge amounts of data – workloads can reach 25 TB of data, including text data and aggregates from meters, transformers, and substations.

Integral Analytics (IA) deployed the first version of its solution on Amazon Redshift but ran into limitations with the offering and its costs. For IA, not being able to pause an instance when it was not in use proved to be a hassle: IA had to take a snapshot of the data for backup purposes and then delete the entire instance. Looking to reduce costs, the IA team found the complete offering from Azure much more compelling. Key reasons IA chose Azure SQL Data Warehouse included the ease of scaling an instance, the ability to pause an instance for cost optimization, support for .NET tools, and the flexibility of scaling compute and storage separately. IA was also impressed with the easy integration of advanced analytics tools such as Azure Machine Learning.

Learn more about Integral Analytics' switch to Azure from AWS.

In the words of Bill Sabo, Managing Director of Information Technology, Integral Analytics, “When we learned about the pause and resume capabilities of SQL Data Warehouse and integrated services like Azure Machine Learning and Data Factory, we switched from Amazon Redshift, migrating over 7 TB of uncompressed data over a week for the simple reasons of saving money and enabling a more straightforward implementation for advanced analytics. To meet our business-intelligence requirements, we load data once or twice a month and then build reports for our customers. Not having the data-warehouse service running all the time is key for our business and our bottom line.”

IA chose Azure SQL Data Warehouse not only for the differentiated capabilities it delivers, but also because of its integration with other Azure products to provide a full end-to-end solution – from data storage and processing to advanced analytics and intelligent insights.

Quoting Kevin Kushman, COO, Integral Analytics – "Azure SQL Data Warehouse is a fundamental part of our IT ecosystem. Cloud-based data solutions like Azure are going to be crucial for data-intensive companies like ours."

Learn more about the story of how Integral Analytics is using Azure SQL Data Warehouse.

If you have not already explored this fully managed, petabyte-scale cloud data warehouse service, learn more at the links below.

Learn more

What is Azure SQL Data Warehouse?
SQL Data Warehouse best practices
Video library
MSDN forum
Stack Overflow forum
Source: Azure

Several new Azure services now available in the UK

We’re pleased to announce that the following services are now available in the UK!

SQL Server Stretch Database now available in the UK: SQL Server Stretch Database migrates your cold data transparently and securely to the Microsoft Azure cloud. It provides cost-effective availability for data you do not use regularly, doesn't require changes to your queries or applications, and keeps your data secure even during migration. Stretch Database targets transactional databases with large amounts of infrequently used data, which are typically stored in a small number of tables.
Learn more about SQL Server Stretch Database
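
As a rough illustration (not part of the announcement), enabling Stretch for a single table comes down to a few T-SQL statements; the sketch below runs them from Python with pyodbc. The server, database, credential, and table names are placeholders, and the database-scoped credential for the Azure target is assumed to exist already.

# Illustrative sketch only: enabling SQL Server Stretch Database for one table.
# Server, database, credential, and table names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=onprem-sql;DATABASE=SalesDB;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()

# Allow Stretch at the instance level (one-time setup).
cur.execute("EXEC sp_configure 'remote data archive', 1; RECONFIGURE;")

# Point the database at an Azure SQL Database server using an existing credential.
cur.execute(
    "ALTER DATABASE SalesDB SET REMOTE_DATA_ARCHIVE = ON "
    "(SERVER = N'stretch-target.database.windows.net', CREDENTIAL = [AzureStretchCred]);"
)

# Stretch one cold-data table; OUTBOUND starts migrating its rows to Azure.
cur.execute(
    "ALTER TABLE dbo.OrderHistory SET "
    "(REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));"
)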

Azure Functions now available in the UK: Azure Functions is a solution for easily running small pieces of code ("functions") in the cloud. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it. You can also develop in the language of your choice, such as C#, F#, Node.js, Python, or PHP.
Learn more about Azure Functions

Power BI now available in the UK: Power BI is a cloud-based business analytics service that enables anyone to connect to, visualize, and analyze data with greater speed, efficiency, and understanding. It connects users to a broad range of live data through easy-to-use dashboards, provides interactive reports, and delivers compelling visualizations that bring data to life.
Learn more about Power BI

Power BI Embedded now available in the UK: Power BI Embedded is an Azure service that enables application developers to embed stunning, fully interactive reports and visualizations in customer-facing apps without the time and expense of having to build custom controls from the ground up.
Learn more about Power BI Embedded

Azure DocumentDB now available in the UK: Azure DocumentDB is a fully managed NoSQL database service built for fast and predictable performance, high availability, elastic scaling, global distribution, and ease of development. As a schema-free NoSQL database, DocumentDB provides rich and familiar SQL query capabilities with consistent low latencies on JSON data.
Learn more about DocumentDB

Azure DevTest Labs now available in the UK: Azure DevTest Labs is a service that helps developers and testers quickly create environments in Azure while minimizing waste and controlling cost. The goal for this service is to solve the problems that IT and development teams have been facing: delays in getting a working environment, time-consuming environment configuration, production fidelity issues, and high maintenance costs.
Since its launch, DevTest Labs has been helping our customers quickly get “ready to test” with a worry-free, self-service environment. Reusable templates in DevTest Labs can be used anywhere once created. The public APIs, PowerShell cmdlets, and VSTS extensions make it easy to integrate your dev/test environments from labs into your release pipeline. Beyond the dev/test scenario, Azure DevTest Labs can also be used in other scenarios such as training and hackathons. For more information about its value propositions, please check out our GA announcement blog post. If you are interested in how DevTest Labs can help with training, check out this article on using Azure DevTest Labs for training.
Learn more about Azure DevTest Labs
Source: Azure

Azure Media Services Video Subclipper open-source release

We are excited to announce the open-source release of the Azure Media Services (AMS) Video Subclipper Plugin for Azure Media Player on GitHub.

The Azure Media Player (AMP) subclipper plugin provides a user interface to find subclip mark-in and mark-out points on a live or video-on-demand stream. The plugin outputs the mark-in and mark-out points in the stream that can then be consumed by a video content management system or the Azure Media Services subclipper API to produce dynamic manifest filters or new subclipped rendered video assets. This tool allows users to find mark-in and mark-out points with group-of-pictures (GOP) accuracy or frame accuracy.

Special thanks to Ian N. Bennett for his help and contributions to this project.

Public deployment

A public, hosted deployment of the subclipper is available at the Azure Media Subclipper Plugin webpage. This deployment supports subclipping from URL-sourced videos. A detailed usage guide is available in our repository documentation. Below is a preview of the subclipper interface.

Supported output modes

The subclipper supports three output modes: trim, virtual, and rendered.

Trim

Trim mode is segment boundary (GOP) accurate and creates a clip starting from the segment boundary closest to the mark-in point and ending at the end of the stream.

Virtual

Virtual mode is segment boundary accurate and creates a clip starting with the segment boundary that is closest to the mark-in point and ending with the segment boundary that is closest to the mark-out point.

Rendered

Rendered mode creates a new clip starting with the frame that is closest to the mark-in point and ending with the frame that is closest to the mark-out point. Frames are calculated based on the frame rate, which can be set in the settings.
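
For intuition (an illustration of ours, not taken from the plugin documentation), snapping a mark point to the nearest frame is a simple calculation from the configured frame rate:

# Illustration only: snap a mark point to the nearest frame for a given frame rate.
def nearest_frame(time_seconds, fps):
    frame = round(time_seconds * fps)      # nearest whole frame index
    return frame, frame / fps              # frame index and its timestamp in seconds

# Example: a mark point at 12.34 s in a 29.97 fps stream.
frame, snapped_time = nearest_frame(12.34, 29.97)
print(frame, round(snapped_time, 3))       # 370 12.346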

User Interface capabilities

Find mark-in & mark-out points

Once your stream has loaded, you can create the mark-in point by using the scrubber, playing back, or rewinding/fast-forwarding to the desired time, and then tapping/clicking the set in button:

Similarly, you can create the mark-out point by using the scrubber, playing back, or rewinding/fast-forwarding to the desired time, and then tapping/clicking the set out button:

Change mark points

Mark bar

The mark bar sits above the scrubber and shows both the mark-in and mark-out points as handles on the bar. The mark bar’s scale is the same as the scrubber’s, and its length represents the timeline of the stream. To change the mark-in and/or mark-out point using the mark bar, simply tap/click the handle, drag it to the desired point, and release:

Mark arrows

On either side of the set in and set out buttons you’ll see arrows. Clicking on the left arrow moves the target mark point back one segment boundary for Trim and Virtual Modes or one frame for Rendered Mode. Clicking on the right arrow moves the target mark point forward one segment boundary for Trim and Virtual Modes or one frame for Rendered Mode:

Export a clip

With mark-in and mark-out points set (mark-out not required for Trim mode), the export button will be enabled and you can export your clip data. Clicking on export will open the submit dialog. When the submit dialog first opens, you’ll see a loading indicator, which will be present until the thumbnails are generated. After the thumbnails are generated, you can select a thumbnail, set a title, set a description, and also set other metadata depending on whether or not the provider has added custom form fields. With the thumbnail chosen and the metadata set, you can export the data by tapping/clicking on submit. After exporting the clip data, the provider’s application can process the data accordingly.

Configure frame rate & number of thumbnails choices

There are two settings that you can modify to change the behavior of the subclipper: frame rate and number of thumbnail choices. Frame rate should be set to the frame rate of the source stream. The supported frame rates are 23.976 fps, 25 fps, 29.97 fps, 30 fps, and 60 fps. The frame rate setting is used to find the frames in the stream. The number of thumbnails setting determines how many thumbnail choices are generated in the submission dialog.

Source: Azure

Enhanced loading, monitoring, and troubleshooting experience for Azure SQL Data Warehouse

We are excited to share that Azure SQL Data Warehouse has introduced updates to the Azure portal and SQL Server Management Studio (SSMS) to provide a seamless experience when loading, monitoring, and developing your SQL Data Warehouse. The updates include integrated support for loading from 20+ data stores on-premises and in the cloud, a simple process to troubleshoot common issues, and highly requested functionality within SSMS.

SQL Data Warehouse is your go-to SQL-based view across your data, offering a fast, fully managed, petabyte-scale cloud solution. It is highly elastic, enabling you to provision and scale to 60 times larger capacity in seconds. You can scale compute and storage independently, allowing you to cover everything from burst to archival scenarios, and pay for what you're using instead of being locked into a confined bundle.

Azure Portal

Quick Load

You can now quickly integrate your SQL Data Warehouse with Azure Data Factory (ADF) through the new task panel, which consists of common commands to execute against your data warehouse.

Azure Data Factory is a fully managed, cloud-based data integration service that can be used to populate a SQL Data Warehouse with data from your existing systems. It saves you valuable time, letting you focus on evaluating and building solutions and deriving insights faster. Here are the key benefits of loading data into SQL Data Warehouse using Azure Data Factory:

Rich data store support: built-in support for 20+ data stores on-premises and in the cloud.
Easy to set up: intuitive wizard with no scripting required.
Secure and compliant: data is transferred over HTTPS or ExpressRoute and the global service presence ensures your data never leaves the geographical boundary.
Unparalleled performance by using PolyBase: using PolyBase is the most efficient way to move data into Azure SQL Data Warehouse. To load from Azure Blob Storage, ADF uses PolyBase directly. To load from data stores other than Azure Blob Storage, ADF uses its staging blob feature to convert your source data into a PolyBase-compatible format and then uses PolyBase to load it into SQL Data Warehouse (a rough sketch of a direct PolyBase load follows this list).
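
As a rough sketch of what a direct PolyBase load looks like when you script it yourself rather than using the ADF wizard, the statements below create an external table over Azure Blob Storage and load it with CTAS, driven here from Python with pyodbc. All names, paths, and the connection string are placeholders; a private container would additionally need a database-scoped credential on the external data source.

# Rough sketch of a direct PolyBase load into SQL Data Warehouse (placeholder names).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=mydw.database.windows.net;DATABASE=mydw;UID=loader;PWD=<password>",
    autocommit=True,
)
cur = conn.cursor()

# External data source and file format describing the files in Azure Blob Storage.
cur.execute(
    "CREATE EXTERNAL DATA SOURCE MeterBlob WITH (TYPE = HADOOP, "
    "LOCATION = 'wasbs://meterdata@mystorageaccount.blob.core.windows.net');"
)
cur.execute(
    "CREATE EXTERNAL FILE FORMAT CsvFormat WITH (FORMAT_TYPE = DELIMITEDTEXT, "
    "FORMAT_OPTIONS (FIELD_TERMINATOR = ','));"
)

# External table over the files, then a parallel CTAS load into a distributed table.
cur.execute(
    "CREATE EXTERNAL TABLE dbo.MeterReadings_ext "
    "(MeterId INT, ReadingTime DATETIME2, Kwh DECIMAL(18,4)) "
    "WITH (LOCATION = '/2016/', DATA_SOURCE = MeterBlob, FILE_FORMAT = CsvFormat);"
)
cur.execute(
    "CREATE TABLE dbo.MeterReadings WITH (DISTRIBUTION = HASH(MeterId)) "
    "AS SELECT * FROM dbo.MeterReadings_ext;"
)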

Troubleshooting

You can easily troubleshoot issues with your SQL Data Warehouse using the new version of the troubleshoot blade, which provides instructions on how to self-diagnose common issues.

Monitoring

In the query drill-down blade, you can now conveniently view the number of concurrency slots that your queries are consuming along with their resource class. This enables you to manage your workload.

SQL Server Management Studio

We have addressed issues across SQL Server Management Studio (SSMS) allowing you to:

Execute the Generate Scripts wizard for database users and user-defined functions
View your logical server name within the table properties of your database
Drop your database simply through the Object Explorer
Leverage templates for stored procedures and scalar-valued functions

Learn More

Create a data warehouse today or check out the many resources for learning more about SQL Data Warehouse, including:

What is Azure SQL Data Warehouse?
SQL Data Warehouse best practices
Video library
MSDN forum
CAT team blogs
Stack Overflow forum

Source: Azure

AzCopy 5.1.1 Release

We are pleased to announce the release of AzCopy 5.1.1, which includes the following updates:

Integration of Data Movement Library into AzCopy

A question we often get is "How does AzCopy work?" With this release, we have integrated the Data Movement Library (DMLib) into AzCopy, which allows you to see exactly how our transfer logic works. Please refer to our Data Movement Library page to learn more.

We've also made the following improvements:

Perform Blob and File uploads using a write-only SAS. When you use a write-only destination SAS, you must add the /Y parameter and remove the /XO and /XN parameters (see the sketch after this list).
Create a destination container, share, or table with Account SAS.
During copy operation, show total amount of data transferred and average transfer speed.
Use full path in verbose log instead of relative path.
Journal file name has changed from AzCopyEntires.jnl to AzCopyCheckpoint.jnl.
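
For example, an upload with a write-only destination SAS now works as long as /Y is passed and /XO and /XN are left off. The sketch below drives AzCopy from Python; the install path, container, and SAS token are placeholders.

# Illustrative sketch only: an AzCopy 5.1.1 upload with a write-only destination SAS.
import subprocess

cmd = [
    r"C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe",
    r"/Source:C:\data\exports",
    "/Dest:https://myaccount.blob.core.windows.net/backups",
    "/DestSAS:?sv=2015-12-11&sp=w&sig=PLACEHOLDER",   # write-only SAS token
    "/S",   # copy the directory recursively
    "/Y",   # suppress confirmation prompts; required with a write-only SAS
]
subprocess.run(cmd, check=True)   # /XO and /XN are intentionally not used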

Download the latest release version and learn more about how to use AzCopy by checking out the Getting Started with AzCopy documentation.

As always, we look forward to your feedback.

Microsoft Azure Storage Team
Source: Azure

Announcing Exclude Disk Support for Hyper-V VMs with ASR

A key ask that Azure Site Recovery (ASR) customers often have is the ability to exclude disks from replication, either to optimize the replication bandwidth consumed or to optimize the target-side resources used by such disks. ASR's VMware to Azure scenario has had this capability since earlier this year. Today, we are announcing the availability of this feature for ASR's Hyper-V to Azure scenario as well.

Why do customers exclude disks from replication?

Excluding disks from replication is often necessary because:

The data churned on the excluded disk is not important or doesn't need to be replicated (and/or)
Storage and network resources can be saved by not replicating this churn

Let's elaborate on what data isn't "important." The importance of replicated data is determined by its usefulness at the time of failover. Data that is not replicated must not be needed at the time of failover, and its absence should not impact the Recovery Point Objective (RPO) in any way.

What are the typical scenarios?

Some kinds of data churn are easy to identify and are great candidates for exclusion – for example, page file writes and Microsoft SQL Server tempdb writes. Depending on the workload and the storage subsystem, the page file can register a significant amount of churn, yet replicating this data from the primary site to Azure would be resource intensive and of no value. The replication of a VM with a single virtual disk containing both the OS and the page file can therefore be optimized by:

Splitting the single virtual disk into two virtual disks – one with the OS and one with the page file
Excluding the page file disk from replication

Similarly, a Microsoft SQL Server VM with tempdb and the system database files on the same disk can be optimized by:

Keeping the system database and tempdb on two different disks
Excluding the tempdb disk from replication.

How to exclude a disk from replication

Follow the normal Enable replication workflow to protect a VM from the ASR portal. In the fourth step of Enable replication there is a new column named DISK TO REPLICATE that lets you exclude disks from replication. By default, all disks are selected for replication. Unselect the VHDs that you want to exclude and complete the steps to enable replication. Learn more in the Hyper-V to Azure (with VMM) or Hyper-V to Azure (no VMM) documentation. You can also view this video to see the feature in action.

Excluding the SQL Server tempdb disk

Let's consider the Hyper-V to Azure scenario for a SQL Server virtual machine with a tempdb disk that can be excluded.

Name of the Hyper-V VM: SalesDB

Disks on the source Hyper-V VM:

VHD name | Guest OS disk# | Drive letter | Data type on the disk
DB-Disk0-OS | DISK0 | C: | OS disk
DB-Disk1 | Disk1 | D: | SQL system database and User Database1
DB-Disk2 (excluded from protection) | Disk2 | E: | Temp files
DB-Disk3 (excluded from protection) | Disk3 | F: | SQL tempdb database (folder path F:\MSSQL\Data – note down the folder path before failover)
DB-Disk4 | Disk4 | G: | User Database2

Since the data churn on these two disks is temporary in nature, exclude Disk2 and Disk3 from replication while protecting the SalesDB VM. ASR will not replicate those disks, and on failover they will not be present on the failover VM in Azure.

Disks on the Azure VM after failover:

Guest OS disk# | Drive letter | Data type on the disk
DISK0 | C: | OS disk
Disk1 | E: | Temporary storage (Azure adds this disk and assigns the first available drive letter)
Disk2 | D: | SQL system database and User Database1
Disk3 | G: | User Database2

Since Disk2 and Disk3 were excluded from the SalesDB VM, E: is the first available drive letter, so Azure assigns E: to the temporary storage volume. For all replicated disks, the drive letters remain the same.

Disk3, which held the SQL tempdb database (folder path F:\MSSQL\Data) and was excluded from replication, is not available on the failover VM. As a result, the SQL Server service is in a stopped state because it needs the F:\MSSQL\Data path.

There are two ways in which you can create this path.

Add a new disk and assign the tempdb folder path, or
Use the existing temporary storage disk for the tempdb folder path

Add a new disk:

Note down the SQL tempdb.mdf and templog.ldf paths before failover.
From the Azure portal, add a new disk to the failover VM with the same or larger size as the source SQL tempdb disk (Disk3).
Log in to the Azure VM. From the Disk Management (diskmgmt.msc) console, initialize and format the newly added disk.
Assign the same drive letter that was used by the SQL tempdb disk (F:).
Create the tempdb folder on the F: volume (F:\MSSQL\Data).
Start the SQL Server service from the Services console.

Use the existing temporary storage disk for the SQL tempdb folder path:

1. Open a command-line console.

2. Run SQL Server in recovery mode from the command-line console:

Net start MSSQLSERVER /f /T3608

3. Run sqlcmd to change the tempdb path to the new path (for example, sqlcmd -A -S SalesDB):

sqlcmd -A -S <SQL Server instance name>

USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\MSSQL\tempdata\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\MSSQL\tempdata\templog.ldf');
GO

4. Stop the Microsoft SQL Server service:

Net stop MSSQLSERVER

5. Start the Microsoft SQL Server service:

Net start MSSQLSERVER

Refer to the following Azure guidelines for the temporary storage disk:

Using SSDs in Azure VMs to store SQL Server TempDB and Buffer Pool Extensions
Performance best practices for SQL Server in Azure Virtual Machines

Failback (from Azure to on-premises)

Now let's understand which disks are replicated when you fail back from Azure to your on-premises Hyper-V host. Disks that you create manually in Azure will not be replicated. For example, if you fail over three disks and create two more directly on the Azure VM, only the three disks that were failed over will be failed back. Disks created manually cannot be included in failback or in re-protect from on-premises to Azure. The temporary storage disk is also not replicated to on-premises.

Failback to OLR

When failback is done to the original location, the failback VM's disk configuration remains the same as the original VM's disk configuration. That means the disks that were excluded from Hyper-V to Azure replication will be available on the failback VM.

In the above example, Azure VM disk configuration:

Guest OS disk# | Drive letter | Data type on the disk
DISK0 | C: | OS disk
Disk1 | E: | Temporary storage (Azure adds this disk and assigns the first available drive letter)
Disk2 | D: | SQL system database and User Database1
Disk3 | G: | User Database2

After planned failover from Azure to on-premises Hyper-V, the disks on the Hyper-V VM (original location recovery) are:

VHD name | Guest OS disk# | Drive letter | Data type on the disk
DB-Disk0-OS | DISK0 | C: | OS disk
DB-Disk1 | Disk1 | D: | SQL system database and User Database1
DB-Disk2 (excluded disk) | Disk2 | E: | Temp files
DB-Disk3 (excluded disk) | Disk3 | F: | SQL tempdb database (folder path F:\MSSQL\Data)
DB-Disk4 | Disk4 | G: | User Database2

Excluding the paging file disk

Let's consider the Hyper-V to Azure scenario for a virtual machine that has a pagefile disk which can be excluded.

There are two cases:

Case 1: The pagefile is configured on the D: drive

Hyper-V VM disk configuration:

VHD name | Guest OS disk# | Drive letter | Data type on the disk
DB-Disk0-OS | DISK0 | C: | OS disk
DB-Disk1 (excluded from protection) | Disk1 | D: | pagefile.sys
DB-Disk2 | Disk2 | E: | User data 1
DB-Disk3 | Disk3 | F: | User data 2

Pagefile settings on the Hyper-V VM:

After you fail over the VM from Hyper-V to Azure, the disks on the Azure VM are:

VHD name | Guest OS disk# | Drive letter | Data type on the disk
DB-Disk0-OS | DISK0 | C: | OS disk
DB-Disk1 | Disk1 | D: | Temporary storage -> pagefile.sys
DB-Disk2 | Disk2 | E: | User data 1
DB-Disk3 | Disk3 | F: | User data 2

Since Disk1 (D:) was excluded, D: is the first available drive letter, so Azure assigns D: to the temporary storage volume. Because D: is still available on the Azure VM, the pagefile setting of the VM remains the same.

Pagefile settings on Azure VM:

 

Case 2: The pagefile is configured on another drive (other than the D: drive)

Hyper-V VM disk configuration:

VHD name | Guest OS disk# | Drive letter | Data type on the disk
DB-Disk0-OS | DISK0 | C: | OS disk
DB-Disk1 (excluded from protection) | Disk1 | G: | pagefile.sys
DB-Disk2 | Disk2 | E: | User data 1
DB-Disk3 | Disk3 | F: | User data 2

Pagefile settings on the Hyper-V VM:

After you fail over the VM from Hyper-V to Azure, the disks on the Azure VM are:

VHD name | Guest OS disk# | Drive letter | Data type on the disk
DB-Disk0-OS | DISK0 | C: | OS disk
DB-Disk1 | Disk1 | D: | Temporary storage -> pagefile.sys
DB-Disk2 | Disk2 | E: | User data 1
DB-Disk3 | Disk3 | F: | User data 2

Since D: is the first available drive letter, Azure assigns D: to the temporary storage volume. For all replicated disks, the drive letters remain the same. Since the G: disk is not available, the system will use the C: drive for the pagefile.

Pagefile settings on Azure VM:

You can check out additional product information, and start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the ASR User Voice to let us know what features you want us to enable next.

Azure Site Recovery, as part of Microsoft Operations Management Suite, enables you to gain control and manage your workloads no matter where they run (Azure, AWS, Windows Server, Linux, VMware or OpenStack) with a cost-effective, all-in-one cloud IT management solution. Existing System Center customers can take advantage of the Microsoft Operations Management Suite add-on, empowering them to do more by leveraging their current investments. Get access to all the new services that OMS offers, with a convenient step-up price for all existing System Center customers. You can also access only the IT management services that you need, enabling you to on-board quickly and have immediate value, paying only for the features that you use.


Source: Azure

New Azure Logic Apps innovation – general availability of cloud-based Enterprise Integration Pack

Businesses are looking for more ways to reduce infrastructure costs without compromising service availability. As a result, companies are looking at newer cloud development architectures like serverless, giving rise to the need for event-triggered integration across multiple third-party services. Developers are turning to serverless solutions like Azure Logic Apps and Azure Functions to automate workflows and integrate systems, thereby accelerating application delivery and reducing costs. Logic Apps enables customers to quickly and easily build powerful integration solutions using a visual designer and a wide set of out-of-the-box connectors such as Dynamics CRM, Salesforce, Office 365, and many more.

Today I am excited to announce another important milestone in integration – the general availability of the Enterprise Integration Pack within Logic Apps, which further simplifies business-to-business (B2B) communications in the cloud. It enables you to more easily process business transactions reliably, track and troubleshoot B2B events, and take advantage of additional out-of-the-box connectors.

Electronic Data Interchange (EDI) and Business-to-Business (B2B) transactions

With the Enterprise Integration Pack, you can take advantage of a faster, more reliable, and more versatile B2B/EDI solution than traditional integration offerings. Integration accounts within the Enterprise Integration Pack let you quickly create and manage cloud-based B2B artifacts such as maps, schemas, trading partners, agreements, and certificates. With this release, electronic data interchange (EDI) has never been easier: you can send, receive, and troubleshoot B2B transactions across a wide variety of protocols including AS2, EDIFACT, and X12. Customers like Mission Linen Supply are already realizing the benefits of EDI capabilities in Logic Apps: “Today, with our Azure Logic Apps solution, we can get suppliers onboarded within two weeks versus the two months or longer that the [Electronic Data Interchange] provider required. The faster we can integrate partners, the faster we can grow our business.” – Dave Pattison, Director of IT, Mission Linen Supply

Below is a view of the new integration account within Enterprise Integration Pack:

Management capabilities

The ability to view and troubleshoot B2B events via systems management solutions is as important as comprehensive EDI capabilities. With the Enterprise Integration Pack, you can track B2B events in a number of flexible ways, such as built-in tracking that can be routed to Microsoft Operations Management Suite (OMS) through the out-of-the-box tracking portal. You can easily view and troubleshoot B2B transactions over the AS2 and X12 formats (with EDIFACT coming in the next few weeks). Additionally, a new RESTful tracking API enables you to send tracking events from both Logic App executions and other applications for end-to-end visibility. You can also add and correlate tracking data across your entire business process in Operations Management Suite.

Here is a view of tracking B2B events through the Operations Management Suite portal:

Enterprise connectors

We realize that many of our customers are dealing with mission-critical applications and business processes that can involve complex, time-consuming connection configuration steps. With Logic Apps, we have added more enterprise connectors that make it simple and fast to establish connections with business applications. For instance, with the SAP connector, you can easily connect your on-premises SAP systems to cloud applications using Logic Apps, without the complex coding required by other serverless products in the market. Today, the MQ Series and SAP ECC connectors are in preview, with more connectors coming in the next few months. For a full list of all currently available connectors, please visit the Logic Apps connectors reference.

Get started today!

With the general availability of the Enterprise Integration Pack, you can now start using these services in production with a full SLA and support. We are committed to continuous delivery of serverless compute and integration capabilities and will continue to share updates about investments and new releases. In the meantime, learn more and try our serverless offerings – Azure Logic Apps and Azure Functions. As always, please share your comments and continue to engage with our Logic Apps and Functions teams.

Also, be sure to check out Logic Apps pricing page and the documentation page on how you can start consuming these features today.
Source: Azure

Introducing Change Feed support in Azure DocumentDB

 We’re excited to announce the availability of Change Feed support in Azure DocumentDB! With Change Feed support, DocumentDB provides a sorted list of documents within a DocumentDB collection in the order in which they were modified. This feed can be used to listen for modifications to data within the collection and perform actions such as:

Trigger a call to an API when a document is inserted or modified
Perform real-time (stream) processing on updates
Synchronize data with a cache, search engine, or data warehouse

DocumentDB's Change Feed is enabled by default for all accounts and does not incur any additional costs. You can use your provisioned throughput in your write region or any read region to read from the change feed, just like any other operation in DocumentDB.

In this blog, we look at the new Change Feed support, and how you can build responsive, scalable and robust applications using Azure DocumentDB.

Change Feed support in Azure DocumentDB

Azure DocumentDB is a fast and flexible NoSQL database service that is used for storing high-volume transactional and operational data with predictable single-digit millisecond latency for reads and writes. This makes it well-suited for IoT, gaming, retail, and operational logging applications. These applications often need to track changes made to DocumentDB data and perform various actions like update materialized views, perform real-time analytics, or trigger notifications based on these changes. Change Feed support allows you to build efficient and scalable solutions for these patterns.

Many modern application architectures, especially in IoT and retail, process streaming data in real-time to produce analytic computations. These application architectures (“lambda pipelines”) have traditionally relied on a write-optimized storage solution for rapid ingestion, and a separate read-optimized database for real-time query. With support for Change Feed, DocumentDB can be utilized as a single system for both ingestion and query, allowing you to build simpler and more cost effective lambda pipelines. For more details, read the paper on DocumentDB TCO.

 

Stream processing: Stream-based processing offers a “speedy” alternative to querying entire datasets to identify what has changed. For example, a game built on DocumentDB can use Change Feed to implement real-time leaderboards based on scores from completed games. You can use DocumentDB to receive and store event data from devices, sensors, infrastructure, and applications, and process these events in real-time with Azure Stream Analytics, Apache Storm, or Apache Spark using Change Feed support.

Triggers/event computing: You can now perform additional actions like calling an API when a document is inserted or modified. For example, within web and mobile apps, you can track events such as changes to your customer's profile, preferences, or location to trigger actions like sending push notifications to their devices using Azure Functions or App Services.

Data Synchronization: If you need to keep data stored in DocumentDB in sync with a cache, search index, or a data lake, then Change Feed provides a robust API for building your data pipeline. Change feed allows you to replicate updates as they happen on the database, recover and resume syncing when workers fail, and distribute processing across multiple workers for scalability.

 

Working with the Change Feed API

Change Feed is available as part of REST API 2016-07-11 and SDK versions 1.11.0 and above. See Change Feed API for how to get started with code.
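
As a conceptual sketch of the consumption pattern only – the two helper functions below are hypothetical stand-ins for the corresponding DocumentDB SDK or REST calls – a reader keeps one continuation token per partition key range and polls each range for new changes, which is what lets multiple workers share the load:

# Conceptual sketch only: get_partition_key_ranges() and read_change_feed() are
# hypothetical placeholders for the corresponding DocumentDB SDK or REST calls.
import time

def get_partition_key_ranges(collection_link):
    # Placeholder: return the collection's partition key ranges.
    raise NotImplementedError("call the DocumentDB SDK / REST API here")

def read_change_feed(collection_link, partition_key_range_id, continuation):
    # Placeholder: return (changed_documents, new_continuation_token) for one range.
    raise NotImplementedError("call the DocumentDB SDK / REST API here")

def process_changes(collection_link, handle_document, poll_seconds=5):
    continuations = {}                              # one continuation token per range
    while True:
        for pk_range in get_partition_key_ranges(collection_link):
            docs, token = read_change_feed(
                collection_link, pk_range["id"], continuations.get(pk_range["id"])
            )
            for doc in docs:                        # latest version of each changed document
                handle_document(doc)
            continuations[pk_range["id"]] = token   # resume from here on the next poll
        time.sleep(poll_seconds)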

 

 

The change feed has the following properties:

Changes are persistent in DocumentDB and can be processed asynchronously.
Changes to documents within a collection are available immediately in the change feed.
Each change to a document appears only once in the change feed. Only the most recent change for a given document is included in the change log. Intermediate changes may not be available.
The change feed is sorted by order of modification within each partition key value. There is no guaranteed order across partition-key values.
Changes can be synchronized from any point-in-time, that is, there is no fixed data retention period for which changes are available.
Changes are available in chunks of partition key ranges. This capability allows changes from large collections to be processed in parallel by multiple consumers/servers.
Applications can request multiple change feeds simultaneously on the same collection.

Next Steps

In this blog post, we looked at the new Change Feed support in Azure DocumentDB.

Learn more about Change Feed support in Azure DocumentDB
Upgrade to .NET SDK 1.11.0 with Change Feed support
Create a new DocumentDB account from the Azure Portal or download the DocumentDB Emulator
Stay up-to-date on the latest DocumentDB news and features by following us on Twitter @DocumentDB or reach out to us on the developer forums on Stack Overflow

Source: Azure

Use the Cloud to help people in need

In this holiday season, people's thoughts turn to helping those less fortunate. If you're a charity, helping people is your business all year round, and every dollar spent on computing infrastructure is a dollar taken away from those you're trying to help. That's why, increasingly, charities and nonprofits are turning away from expensive on-premises solutions and moving to the cloud.

Take Minnesota-based RREAL, the Rural Renewable Energy Alliance, which helps low-income households receive the advantages of solar power.

As their business grew, RREAL needed ways to track new customers, manage their manufacturing, and collaborate without incurring the costs and overhead of expensive infrastructure. Azure-based Dynamics GP and Dynamics CRM, along with Office 365, met these needs. With Dynamics' Business Analyzer they track KPIs, and with Power BI they have a real-time dashboard.

The result: they have cut installation time by over 50 percent: more people get cheap, renewable power faster.

World Animal Protection is a global organization focused on animal welfare. As with all charities, transparency is a key goal: donors need to know how their contributions are being used. And because its teams can be anywhere in the world, its applications must be accessible globally. Finally, as with all nonprofits, keeping administrative costs down is essential.

With infrastructure and applications managed by the Microsoft Cloud, workers can devote more time to their core mission: improving the lot of the world's animals. The technology that best suited their needs: Office 365, Dynamics, and Azure.

The mission of UK-based JustGiving is to “ensure no great cause goes unfunded.” With a large social network of donors, they found that an individual’s “social graph” is a good predictor of that person’s next donation: if your friends care about something, you probably do too.

But in considering a technology solution, the size of this dataset — with 361 million connections – was daunting. Enter HDInsight, Microsoft Azure’s big data service, based on Hadoop, Spark, and R. A relational database – based on rows and columns – was inappropriate for a scenario where one person could link to any number of others. Instead, their solution, called GiveGraph™, integrated Facebook’s OpenGraph technology with HDInsight to provide a robust, massively scalable database of people’s relationships to other people — and to their causes.

I hope that you too will consider giving during this holiday season. We’re proud that it’s an important part of our corporate culture: Since our giving program began, Microsoft employees have donated over $1 billion to worthy causes, and just this year our Philanthropies group announced a program to provide Azure, Office 365, Dynamics and other products free to eligible organizations.

* * *

Of course, the benefits reaped by non-profits and charitable organizations from the cloud – reduced costs, increased agility, new kinds of innovative solutions, global reach – are the same ones any enterprise will reap by using the cloud.

As I mentioned in my last post, we see the cloud providing nearly limitless opportunities to improve our lives; the cloud should benefit everyone. We believe governments and enterprises, along with vendors like ourselves, should work together to ensure that, jointly, the cloud we all build is trusted, responsible and inclusive.

Our ebook, A Cloud for Global Good, provides an initial set of 78 policy recommendations in 15 categories for governments and regulatory bodies to consider. The topic areas are thought-provoking and far from simple, ranging from how we all should think about personal privacy, to government access to data, to cross-border data flows, to supporting digital literacy around the world. As I said last time, we don't have all the answers; we don't even have all the questions. However, if with A Cloud for Global Good we can start a conversation, we'll have achieved something.

So: let us know what you think!
Source: Azure

Latest update of Azure Analysis Services preview brings scale up and down

The latest update to the Azure Analysis Services preview brings the ability to increase or reduce the capacity of a server after it has been created. For example, if there is a particular time of year when you expect query volume to be higher, you can simply scale the service up to get more QPUs. After that period is over, you can reduce capacity by scaling back down. The ability to scale up and down to adapt to changing workload demand is one of the benefits of using Azure Analysis Services to host your models.

How to scale

Scaling the pricing tier up or down can be done manually in the Azure portal or automated through the Azure Resource Manager (ARM) APIs and PowerShell; a minimal sketch of the REST call follows.
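
Here is a minimal sketch of scaling through the ARM REST API from Python. The subscription, resource group, server name, and bearer token are placeholders, and the api-version shown is an assumption based on the preview; check the current ARM documentation for the exact value.

# Minimal sketch: scale an Azure Analysis Services server by patching its SKU
# through Azure Resource Manager. All identifiers below are placeholders.
import requests

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
SERVER = "<server-name>"
TOKEN = "<azure-ad-bearer-token>"

url = (
    "https://management.azure.com"
    "/subscriptions/" + SUBSCRIPTION +
    "/resourceGroups/" + RESOURCE_GROUP +
    "/providers/Microsoft.AnalysisServices/servers/" + SERVER +
    "?api-version=2016-05-16"          # assumed preview api-version
)

response = requests.patch(
    url,
    headers={"Authorization": "Bearer " + TOKEN},
    json={"sku": {"name": "S2"}},      # scale up from S1 to S2
)
response.raise_for_status()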

To use this feature in the Azure portal, simply locate your server.

On the left-hand side, you will see an option called scale.

Clicking on this will now let you choose a new pricing tier. Currently my server is set to S1. I will scale it up to S2 by selecting S2 Standard and clicking Select.

Note: When scaling down, ensure that you have enough cache capacity for the models which are already deployed to the server or all models may not be able to load until you scale back up. You can see your current memory usage by clicking on Metrics on the left-hand side.

Within a few moments, you will see that the server now has the capacity of an S2.

 

 

Learn more about Azure Analysis Services.
Source: Azure