Root cause analysis and time exploration updates to Azure Time Series Insights

Azure Time Series Insights is currently in public preview, and we’ve been hard at work over the last couple of months to help our customers better manage and find value in their time series data. Time Series Insights is a fully managed analytics, storage, and visualization service that makes it simple to explore and analyze billions of IoT events simultaneously. It lets you visualize and explore time series data streaming into Azure in minutes, all without having to write a single line of code. For more information about the product, pricing, and getting started, please visit the Time Series Insights website on Azure.com.

Faster root cause analysis and investigations

We’ve heard a lot of feedback from our manufacturing and oil and gas customers that they are using Time Series Insights to conduct root cause analysis and investigations, but that it’s been difficult to quickly pinpoint statistically significant patterns in their data. To make this process more efficient, we’ve added a feature that proactively surfaces the most statistically significant patterns in a selected data region. This relieves users from having to look through thousands of events to understand which patterns most warrant their time and energy. Further, we have made it easy to jump directly into these statistically significant patterns to continue an analysis.

This new feature is also helpful for post-mortem investigations into historical data. Most of our customers have existing alerting mechanisms in place (for example, Azure Stream Analytics jobs) and use Time Series Insights as a complementary investigative tool to understand the context of an alert. These customers are using Time Series Insights to look back during a postmortem for additional clues to help mitigate and prevent similar issues from occurring in the future.

Below is a GIF showing patterns in the stats tab and adding a pattern as a new term.

Greater control of time for data exploration

Additionally, we have heard from customers across many verticals that they use Time Series Insights to triage and diagnose issues involving sensor data from their key assets, but they have been asking for finer control over how they navigate time in our visualizations. In response, we have made several usability improvements to time navigation that make triage and diagnosis easier.

First, we’ve added a time interval slider for precise control of movement between large slices of time that show smooth trends, down to slices as small as a millisecond, allowing customers to see granular, high-resolution cuts of their data. We’ve also set the slider’s default starting point to the optimal view of the data for a given selection, balancing resolution, query speed, and granularity.

Below is a GIF showing the slider in action.

Second, we heard from customers that they would like an easier way to move between time ranges when conducting diagnostics on their sensor data. Previously, a user had to leave their search and reselect the period they wanted to explore from their environment all over again. To make this workflow more seamless, we have added a time brush that makes it easy to navigate from one time span to another.

Below is a GIF showing how simple it is to navigate using the brush.

We are excited about these new updates, but we are even more excited about what’s to come, so be on the lookout for more product news soon! You can also explore Time Series Insights and take these new updates for a test drive using our free demo environment; you’ll just need an Azure.com account to get started. You can also stay up to date on all things Time Series Insights by following us on Twitter.
Source: Azure

New Electric Imp and Particle seamless integration with Azure IoT Hub

New Azure IoT integration with device connectivity platforms brings the best of the “Internet” and the “Things” together, making both hardware connectivity and Cloud development simple.

Electric Imp and Particle, each in their own unique way, offer device connectivity platforms for seamlessly, securely, and reliably connecting Things to the Internet. To complete their solutions and give their customers easy access to the power of the Cloud, both companies now offer a seamless integration with Azure IoT Hub. Once device data lands in Azure through IoT Hub, it can easily be analyzed for insights, stored, and visualized, opening up a wide range of capabilities such as predictive maintenance, remote monitoring, and full integration into Line of Business applications for workflow automation.

Making IoT easy, yet secure and powerful

IoT device lifecycle management is not an easy task: securely provisioning, connecting, communicating with, monitoring, updating, managing, and retiring IoT devices requires deep hardware expertise. Particle and Electric Imp each propose unique answers to make this simpler and more secure.

But securely connecting devices to the Cloud is not all that is required. In order to make the most of the Things and harness their data, you need powerful and easy-to-configure Cloud services to analyze the data on the fly, instantly get insights from millions of data points, easily browse through and visualize huge amounts of data, automate notifications, optimize maintenance processes… Azure IoT offers a variety of Platform-as-a-Service solutions to address these new IoT scenarios.

The IoT Hub integrations with the Particle Cloud and with the Electric Imp Cloud both consist of bridging the device connectivity platform to IoT Hub, representing each of the devices in the field as a unique device in IoT Hub.

All it takes to enable the integration is to grant the Particle Cloud or Electric Imp Cloud access to your IoT Hub instance; the platform then takes care of associating each of the devices it knows with a corresponding device ID in IoT Hub, creating, updating, and deleting the IoT Hub device identities for you. When the data arrives in IoT Hub, it is no different than if the devices had been connected directly, so the new devices can seamlessly be integrated into an existing or new IoT solution.
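
To make that last point concrete, here is a minimal C# sketch (not from either vendor’s documentation) of reading the bridged device-to-cloud telemetry from your IoT Hub’s Event Hub-compatible endpoint, using the classic Microsoft.ServiceBus.Messaging client. The connection string, hub name, and partition ID are placeholders you would replace with values from your own hub:

using System;
using System.Text;
using Microsoft.ServiceBus.Messaging; // from the WindowsAzure.ServiceBus NuGet package

class TelemetryReader
{
    static void Main()
    {
        // Placeholder values: copy the Event Hub-compatible endpoint and name
        // from your IoT Hub's "Built-in endpoints" blade in the Azure portal.
        string connectionString = "Endpoint=sb://<your-endpoint>.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=<your-key>";
        string eventHubName = "<your-iot-hub-name>";

        EventHubClient client = EventHubClient.CreateFromConnectionString(connectionString, eventHubName);

        // Read partition "0" starting from now; a production reader would cover every partition.
        EventHubReceiver receiver = client.GetDefaultConsumerGroup().CreateReceiver("0", DateTime.UtcNow);

        while (true)
        {
            EventData message = receiver.Receive();
            if (message == null) continue;

            // The payload looks the same whether the device connected to IoT Hub
            // directly or was bridged through the Particle or Electric Imp Cloud.
            Console.WriteLine(Encoding.UTF8.GetString(message.GetBytes()));
        }
    }
}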

New business models enabled

Because they don’t have to spend valuable time and resources on areas outside their domain of expertise (hardware and cloud development), companies like Kelly Roofing can extend their businesses to new models in record time.

Microsoft Dynamics is working with PowerObjects and Kelly Roofing to run a pilot which utilizes the Particle/IoT Hub Integration. By outfitting a roof with leak sensors connected to Azure through Particle, Kelly Roofing can move away from the traditional model of selling customers a roof every 20 years to instead offering customers a leak-proof roof for a yearly fee. This connected roof sends sensor data to Azure from Particle devices. If a leak is detected, a service alert is triggered in Dynamics and a contractor is automatically dispatched to service the roof. This reduces the upfront cost to the consumer while increasing their loyalty, satisfaction and lifetime value. It illustrates end to end the value this integration brings:

Particle Electrons for Connectivity
Particle Cloud for Device Management and Over-the-air firmware updates
Azure IoT Hub as an entrance point for the data
Azure Machine Learning and Power BI for anomaly detection
Dynamics to trigger service alerts and schedule technicians

Electric Imp offers an “All-Azure” solution to Industrial IoT

In addition to providing an advanced integration that virtually eliminates the complexities of deploying, commissioning, securing, and managing IoT devices at scale, Electric Imp also offers customers the option of a private managed Electric Imp Cloud instance, fully hosted on Microsoft Azure. This gives customers further control for deployment-specific data privacy, scalability, and flexibility across all their Cloud resources.

Start playing with devices and Azure IoT Hub today!

You can get started in minutes by following the step-by-step guides:

QuickStart your impCloud-to-Azure IoT Hub integration
Setup the Particle cloud to connect to Azure IoT Hub

Once you have securely connected your device to Azure, you can rapidly implement common IoT solution patterns:

Save IoT Hub messages to Azure data storage
Use Power BI to visualize real-time sensor data from Azure IoT Hub
Use Azure Web Apps to visualize real-time sensor data from Azure IoT Hub
Weather forecast using the sensor data from your IoT hub in Azure Machine Learning
Device management with iothub-explorer
Remote monitoring and notifications with Logic Apps

Source: Azure

August updates to the Azure Analysis Services web designer

Last month we released a preview of the Azure Analysis Services web designer. This new browser-based experience allows developers to start creating and managing Azure Analysis Services (AAS) semantic models quickly and easily. While SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) are still the primary tools for development, this new experience is intended to make simple changes fast and easy. It is great for getting started on a new model or for quick tasks such as adding a new measure to a development or production AAS model.

Today we are announcing the first set of updates, which include a mix of fixes and new features. In the upcoming months, we will continue to evolve the web designer to allow for easier and more advanced model creation on the web. New functionality includes:

DAX syntax highlighting for measures

Adding measures is a bit simpler thanks to a multiline code editor that recognizes DAX formula syntax.
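For example, when you enter a measure such as Total Sales := SUM(Sales[Amount]), the editor colors the DAX function names and keywords as you type (the measure name and table here are illustrative).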

New mini map in JSON editor

The model JSON editor now includes a mini document map on the right-hand side to make browsing the JSON document simpler.

Display folder and hierarchy support in the query designer

You can now use hierarchies and display folders when graphically designing queries.

Table relationship editor

Create new relationships or edit existing ones between tables with the new relationship editor dialog.

Copy server name

When you need to connect to your server from other tools such as SSMS or SSDT, you can now simply copy your full server name from the server blade.

You can try the Azure Analysis Services web designer today by opening it from a server in the Azure portal.

Submit your own ideas for features on our feedback forum. Learn more about Azure Analysis Services and the Azure Analysis Services web designer.
Source: Azure

Introducing the #Azure #CosmosDB Change Feed Processor Library

Azure Cosmos DB is a fast and flexible globally replicated database service for storing high-volume transactional and operational data with predictable millisecond latency for reads and writes. To help you build powerful applications on top of Cosmos DB, we built change feed support, which provides a sorted list of documents within a collection in the order in which they were modified. Now, to address scalability while preserving simplicity of use, we are introducing the Cosmos DB Change Feed Processor Library. In this blog, we look at when and how you should use the Change Feed Processor Library.

Change feed: Event Sourcing with Cosmos DB

Storing your data is just the beginning of the adventure. With change feed support, you can integrate with many different services depending on what you need to do once changes appear.

Example #1: You are building an online shopping website and need to trigger an email notification once a customer completes a purchase. Whether you prefer to use Azure Functions, Azure Notification Hubs, Azure App Service, or your own custom-built microservices, change feed allows seamless integration by surfacing changes in the order in which they occur.

Example #2: You are storing data from an autonomous vehicle and need to detect abnormalities in incoming sensor data. As new entries are stored in Cosmos DB, the changes that appear on the change feed can be processed directly by Azure Stream Analytics, Azure HDInsight, Apache Spark, or Apache Storm. With change feed support, you can apply intelligent processing in real time while data is stored into Cosmos DB.

Example #3: Due to architecture changes, you need to change the partition key for your Cosmos DB collection. Change feed allows you to move your data to a new collection while processing incoming changes. The result is zero downtime while you move data from anywhere into Cosmos DB.

What about larger data stores with multiple partitions?

As your data storage needs grow, it’s likely that you will use multiple partitions to store your data. Although it’s possible to manually read changes from each partition, the Change Feed Processor Library makes this easier by abstracting the change feed API: it facilitates reading across partitions and distributes change feed event processing across multiple consumers. The library provides a thread-safe, multi-process runtime environment with checkpoint and partition lease management for change feed operations, and is available as a NuGet package for .NET development.
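
For comparison, here is a rough sketch of what the manual approach looks like with the DocumentDB .NET SDK: enumerating the partition key ranges yourself and querying the change feed of each one. It assumes an existing DocumentClient named client and a collection Uri named collectionUri, and it omits the continuation-token checkpointing and lease management that the library handles for you:

// Enumerate the physical partitions (partition key ranges) of the collection.
FeedResponse<PartitionKeyRange> ranges = await client.ReadPartitionKeyRangeFeedAsync(collectionUri);
foreach (PartitionKeyRange range in ranges)
{
    // Query the change feed of this partition from the beginning.
    IDocumentQuery<Document> query = client.CreateDocumentChangeFeedQuery(
        collectionUri,
        new ChangeFeedOptions
        {
            PartitionKeyRangeId = range.Id,
            StartFromBeginning = true
        });

    while (query.HasMoreResults)
    {
        FeedResponse<Document> page = await query.ExecuteNextAsync<Document>();
        foreach (Document doc in page)
        {
            Console.WriteLine(doc.Id); // process each changed document
        }
    }
}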

When to use the Change Feed Processor Library:

Pulling updates from the change feed when data is stored across multiple partitions
Moving or replicating data from one collection to another
Parallel execution of actions triggered by updates to data and the change feed

Getting started with the Change Feed Processor Library is simple and lightweight. In the following example, we have a collection of documents containing news events associated with different cities. We use “city” as the partition key. In just a few steps, we can print out all changes made to any document from any partition.

To set this up, install the Change Feed Processor Library NuGet package and create a lease collection. The lease collection should be created through an account close to the write region. This collection keeps track of change feed reading progress per partition, along with host information.
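
If you prefer to create the lease collection from code rather than the portal, a sketch like the following works with the DocumentDB .NET SDK; the DocumentClient named client, the database name, and the throughput value are all illustrative:

// Create the lease collection "leases" in database "myDatabase" if it does not already exist.
await client.CreateDocumentCollectionIfNotExistsAsync(
    UriFactory.CreateDatabaseUri("myDatabase"),
    new DocumentCollection { Id = "leases" },
    new RequestOptions { OfferThroughput = 400 });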

To define the logic performed when new changes surface, edit the ProcessChangesAsync function. Here, we are simply printing out the document ID of the new or updated document. You can also modify this function to perform different tasks.

public Task ProcessChangesAsync(ChangeFeedObserverContext context, IReadOnlyList<Document> docs)
{
    // Log how many documents arrived in this batch, keeping a thread-safe running total.
    Console.WriteLine("Change feed: total {0} doc(s)", Interlocked.Add(ref totalDocs, docs.Count));

    foreach (Document doc in docs)
    {
        // Print the ID of each new or updated document.
        Console.WriteLine(doc.Id);
    }

    return Task.CompletedTask;
}

Next, to begin change feed processing, instantiate a ChangeFeedEventHost, providing the appropriate parameters for your Azure Cosmos DB collections. Then, call RegisterObserverFactoryAsync to register a factory for your IChangeFeedObserver implementation (DocumentFeedObserver in this example) with the runtime. At this point, the host attempts to acquire a lease on every partition key range in the Azure Cosmos DB collection using a "greedy" algorithm. These leases last for a given timeframe and must then be renewed. As new nodes come online, in this case worker instances, they place lease reservations. Over time the load shifts between nodes as each host attempts to acquire more leases.

// Create a factory that instantiates a DocumentFeedObserver for each lease the host acquires.
// DocumentFeedObserverFactory is this example's implementation of IChangeFeedObserverFactory.
DocumentFeedObserverFactory docObserverFactory = new DocumentFeedObserverFactory();

ChangeFeedEventHost host = new ChangeFeedEventHost(hostName, documentCollectionLocation, leaseCollectionLocation, feedOptions, feedHostOptions);

await host.RegisterObserverFactoryAsync(docObserverFactory);

Next steps

Review the documentation: Working with the change feed support in Azure Cosmos DB.
Try out sample code: an example that reads changes and copies them to a new collection.
Download the NuGet Package to get started.

Stay up-to-date on the latest Azure Cosmos DB news and features by following us on Twitter @AzureCosmosDB and #CosmosDB, and reach out to us on the developer forums on Stack Overflow.
Source: Azure

Online training for Azure Data Lake

We are pleased to announce the availability of new, free online training for Azure Data Lake. We’ve designed this training to get developers ramped up fast. It covers all the topics a developer needs to know to start being productive with big data, and how to address the challenges of authoring, debugging, and optimizing at scale.

Explore the training

Click on the link below to start!

Microsoft Virtual Academy: Introduction to Azure Data Lake

Looking for more?

You can find this training, along with many more resources for developers, on Microsoft Virtual Academy.

Course outline

1 | Introduction to Azure Data Lake

Get an overview of the entire Azure Data Lake set of services, including HDInsight (HDI), Azure Data Lake Store, and Azure Data Lake Analytics.

2 | Introduction to Azure Data Lake Tools for Visual Studio

Since ADL developers of all skill levels use Azure Data Lake Tools for Visual Studio, review the basic set of capabilities offered in Visual Studio.

3 | U-SQL Programming

Explore the fundamentals of the U-SQL language, and learn to perform the most common U-SQL data transformations.

4 | Introduction to Azure Data Lake U-SQL Batch Job

Find out what’s happening behind the scenes when running a batch U-SQL script in Azure.

5 | Advanced U-SQL

Learn about the more sophisticated features of the U-SQL language to calculate more useful statistics and learn how to extend U-SQL to meet many diverse needs.

6 | Debugging U-SQL Job Failures

Since all developers encounter a failed job at some point, get familiar with the causes of failure and how they manifest themselves.

7 | Introduction to Performance and Optimization

Review the basic concepts that drive performance in a batch U-SQL job, and examine strategies available to address those issues when they come up, along with the tools that are available to help.

8 | ADLS Access Control Model

Explore how Azure Data Lake Store uses the POSIX Access Control model, which is very different for users coming from a Windows background.

9 | Azure Data Lake Outro and Resources

Learn about course resources.
Source: Azure

First Hyperscale CSP with Graphics-Intensive VMs in the UK: Microsoft Azure’s G/GS/LS/H/N-series now available in UK South

Today, Microsoft Azure Virtual Machine customers can take advantage of the Azure G/GS/LS/H/N-series of Virtual Machine sizes, now available in UK South. We’re also excited to announce that Microsoft Azure is the first hyperscale cloud provider offering VMs able to run graphics-intensive workloads in the UK (see the N-series below)!

New Azure N series – The NC and NV sizes are also known as GPU-enabled instances. These are specialized virtual machines that include NVIDIA® GPUs, optimized for different scenarios and use cases. The NV sizes are designed for remote visualization, streaming, gaming, encoding, and VDI scenarios utilizing frameworks such as OpenGL and DirectX. The NC sizes are optimized for compute-intensive and network-intensive applications and algorithms, including CUDA- and OpenCL-based applications and simulations.

Learn more about N-Series.

New Azure G/GS/LS series – The G/GS series is ideal for applications that demand faster CPUs, better local disk performance, or more memory, offering a powerful combination for many enterprise-grade applications. The LS-series is optimized for workloads that require low-latency local storage, like NoSQL databases (for example, Cassandra, MongoDB, Cloudera, and Redis).

Learn more about G/LS-Series.

New Azure H series –  H-series VMs are an excellent fit for compute-intensive workloads and provide cutting-edge performance, as well as an RDMA back-end network for MPI workloads.

Learn more about H-Series.

For more information, please visit the Virtual Machines page and the Virtual Machines pricing page.
Source: Azure

Mesosphere DCOS, Azure, Docker, VMware & Everything Between – Deploying DC/OS with Azure Container Service

This post is part of the “Mesosphere DC/OS, Azure, Docker, VMware & Everything Between” multiple blog post series. In the previous posts for this series, I looked at the following topics:

Mesosphere DCOS, Azure, Docker, VMware and everything between – Architecture and CI/CD Flow

Mesosphere DCOS, Azure, Docker, VMware and everything between – Security & Docker Engine Installation

Mesosphere DCOS, Azure, Docker, VMware & Everything Between – SSH Authorized Keys

Mesosphere DCOS, Azure, Docker, VMware & Everything Between – Deploying DC/OS on VMware vSphere

Mesosphere DCOS, Azure, Docker, VMware & Everything Between – Deploying DC/OS with Azure Container Service

We have two working DC/OS clusters, one on Azure and another on vSphere – great progress so far! Now, it’s time to deploy Azure Container Registry (ACR), which will be used as a private catalog for our Docker images.

This is going to be a VERY short post, as deploying ACR takes no more than five minutes tops; the process is super straightforward.

Azure Container Registry Deployment

So, let’s get to work and look for ACR in Azure Marketplace. Start the deployment wizard.

Read more about all the details around DC/OS 1.9 deployment on top of VMware vSphere on my personal blog.
Source: Azure

Migrating a Web App from ClearDB to Azure Database for MySQL

With the introduction of Azure Database for MySQL, I’ve seen a lot of interest and questions from customers about how they can move their existing Web App from ClearDB as their MySQL database provider over to Azure Database for MySQL. If you’re using MySQL In-App rather than ClearDB as your provider and want to move over to Azure Database for MySQL, a great blog post has already been written on this that you should check out. For this blog, I’ll be migrating my WordPress website’s database from ClearDB to Azure Database for MySQL, as well as updating my Web App to point to the new database server.

Preparing for the migration

I’ll be using MySQL Workbench as the tool to do the data migration; of course, you can use other common tools or the CLI as well. First, download MySQL Workbench. Once you’ve downloaded and installed it, you will need to create a connection to your ClearDB database in order to kick off the migration. To create the connection, you’ll need some information about your ClearDB database.

In the Azure Portal, open up your ClearDB database and click on the Properties task on the left navigation pane. Keep this open as you’ll need this for the next step in creating your MySQL Workbench connection.

Now open MySQL Workbench and create a new connection by clicking on the + icon at the top of the home screen. In the Setup New Connection screen, give your connection a name (this can be anything; I chose “My ClearDB Database”), then switch back over to the browser tab where you have your ClearDB database properties open and copy the HOSTNAME, USERNAME, and PASSWORD into the respective fields of MySQL Workbench. Note that to enter the password in MySQL Workbench, you’ll need to click on the Store in Vault… button first.

Once this is done, save your connection and create another new connection for your Azure Database for MySQL. If you haven’t created an Azure Database for MySQL yet, refer to our Quickstart documentation on how to do so. Similar to ClearDB, you’ll need to get the hostname, username, and password of your Azure Database for MySQL through the portal. But before we do that, we’ll need to configure access control for your Azure Database for MySQL server.

Open your Azure Database for MySQL server in the Azure Portal and click on the Connection Security setting on the left navigation pane. For simplicity in the migration process, I have disabled SSL connectivity by setting the Enforce SSL connection toggle to Disabled. With regard to firewall rules, you can either click on the + Add My IP icon at the top of the screen, which adds your local IP address to the firewall, or, as in my case, create a firewall rule that allows all IP addresses access to the server for the time being. Later, when I configure my Web App to connect to my database server, I’ll add the appropriate IP addresses and remove this rule. For more information on configuring SSL connectivity for your server, check out our documentation.

Now that you have configured access to your Azure Database for MySQL server, you can create a new MySQL Workbench connection just as you did for your ClearDB database server. Open your Azure Database for MySQL server, and you’ll see the Server Name and Server admin login name in the main Essentials pane. Your password is not exposed here; if you don’t remember the password for your server, you can always reset it using the Reset Password option in the upper left corner of the Essentials pane. With this information, create another connection in MySQL Workbench for your Azure Database for MySQL server the same way you did for your ClearDB database.

Migrate your database

Open MySQL Workbench and click on Database and then Migration Wizard from the drop-down to start your database migration.

In the migration wizard you’ll be asked to select your source and target database servers. For your source, choose the ClearDB database server connection you created first, and for the destination choose the Azure Database for MySQL Server you just finished creating.

Continue through the wizard until you get to the screen that asks which schemas you want to migrate. Make sure to migrate only the schema that is applicable to your application. In the case of a WordPress application, there is only one applicable schema. Your schema name should look similar, as ClearDB generates schema names randomly.

Once you select your source schema to migrate, the remainder of the migration can use the defaults selected in the wizard. Migrating your database should not take long, even for larger databases, assuming your Azure Database for MySQL server is in the same region as your source ClearDB database.

Configuring your Azure Web App

Now that your database has been migrated, you’ll need to connect your Web App to your Azure Database for MySQL. You’ll need to update your Web App as well as the firewall rules of your Azure Database for MySQL if you choose to restrict database access exclusively to your Web App. We’ll start with the firewall rules, so open up your Web App in the Azure portal. On the left navigation pane, select Properties and note the Outbound IP Addresses. These are the specific IP addresses for which you will create firewall rules in your Azure Database for MySQL.

Open up your Azure Database for MySQL server and create a firewall rule for each IP address to allow your Web App access to the server. This is the same process as described above for adding firewall rules.

Now go back to your Azure Web App, open Application settings on the left navigation pane, and scroll down to the Connection strings section of the main pane. You will now need to modify your connection string to point to the new database server. You can simply click on the string value and edit it directly. You will need to replace all of the values except the Database value, as you migrated the database (schema) intact from ClearDB, which preserves the database name. Note that Azure Database for MySQL expects the User Id in the user@servername format. In my case, my original connection string was as follows:

Database=acsm_8cb9eb8d372ebbd;Data Source=us-cdbr-azure-west-b.cleardb.com;User Id=b8c2e429e67ac2;Password=47bd9069

After I updated it to point to my new Azure Database for MySQL Server, it looks like this:

Database=acsm_8cb9eb8d372ebbd;Data Source=jasonsnewserver.mysql.database.azure.com;User Id=jason@jasonsnewserver;Password=MyPassword12

Make sure to click the Save button at the top of the screen, and that’s all you need to do. You’re now using Azure Database for MySQL on your existing Web App. Congratulations!

Jason – JasonMA_MSFT
Source: Azure

Announcing public preview of Azure Batch Rendering

This week SIGGRAPH 2017 is blasting away in Los Angeles and I can’t imagine a better place than the premier event for computer graphics to announce that Azure Batch Rendering will now move into public preview.

The complexities of cinematic productions, associated workflows, and infrastructure have always intrigued me, and they are honestly one of the very best examples of the hands-on value that Azure provides. Rarely has abstracting away infrastructure considerations, deployment, and management made more sense, especially when it also lets you scale beyond the physical boundaries of your on-premises environments.

By enabling artists, engineers, and designers to submit rendering jobs seamlessly from client applications such as Autodesk Maya and 3ds Max, or via our SDK, Azure Batch Rendering accelerates large-scale rendering jobs and delivers results to our customers faster.

Back in May during the Microsoft Build conference, we announced the first limited preview of Batch Rendering, a milestone in integrating the high-end graphics user experience with the power of Azure. Since then, hundreds of curious and excited customers have been putting Batch Rendering through its paces and have provided invaluable feedback to us on the product – thank you!

While Azure Batch Rendering with Autodesk is moving to public preview, we are also excited to announce a limited preview of V-Ray in partnership with Chaos Group. With V-Ray being supported for Maya and 3ds Max, this is another great step forward in supporting a rich and vibrant ecosystem on Azure.

Azure will continue to work with Autodesk, Chaos Group, and other partners to enable customers to run their day-to-day rendering workloads seamlessly on Azure. Batch Rendering will provide tools, such as client plugins, offering a rich integrated experience that allows customers to submit jobs from within their applications with easy scaling, monitoring, and asset management. Additionally, the SDK, available in various languages, allows custom integration with customers’ existing environments.

In addition to our Batch Rendering announcements, we are launching a preview of a cool new management application, Batch Labs! Batch Labs is a cross-platform desktop management tool which includes job submission capabilities as well as a rich management and monitoring experience, along with the ability to manage asset uploads and downloads. Batch Labs hosts a marketplace of supported applications which can be easily extended by customers for their own applications and custom workflows.

Lastly, I’d like to invite you to come and meet our team at SIGGRAPH 2017. We’re hosting sessions and will be at booth #923, showing off a bunch of cool demos with partners like Conductor, Avid, Vizua, JellyFish Pictures, and PipelineFX along with exciting Microsoft hardware like the HoloLens and new Surface Studio.

If you are in the Los Angeles area during the week, you’re more than welcome to use the promo code “MSFT2017” to register for a complimentary visitor pass to the expo floor of SIGGRAPH.

Thank you all for your support in hitting this important milestone for Azure Batch Rendering. We look forward to continuing to work with you on the further expansion of the product, and we welcome your continued feedback!

Get more information and documentation on using Azure Batch Rendering.
Source: Azure

Teradata Bolsters Analytics and Database capabilities for Azure

This post is co-authored with Rory Conway, Product Manager, Teradata on Azure.

The enterprise-class capabilities of Teradata Database have been enhanced for Microsoft Azure Marketplace. This is great news for organizations using, or wanting to try, Teradata Database as their engine for advanced analytics to deliver optimal business outcomes for areas such as customer experience, risk mitigation, asset optimization, finance transformation, product innovation, and operational excellence. Coupled with the recent launch of Teradata Demand Chain Management on Azure, these are substantial improvements that yield an impressive solution set worthy of your attention. Try it for yourself.

Teradata Primer

As you may already know, Teradata Corporation has long been regarded as the market’s leading data warehouse provider for analytics at scale. With more than $2B in revenue from over 1,400 customers and powered by 10,000 employees, Teradata has the deep roots and technical strength that large organizations seek when aligning with a strategic partner.

Teradata by the numbers:

35+ years of innovation and leadership
~1,400 customers in 77 countries
~10,000 employees in 43 countries
$2.8B in revenue in 2016

Teradata works with many leading firms:

Airlines – All top 6 airlines
Banking – 18 of top 20 global commercial and savings banks
Communications – 19 of top 20 telecommunications companies
Manufacturing – 13 of top 20 manufacturing companies
Retail – 15 of top 20 global retailers

Teradata’s reputation is based on analytic performance at scale. The company’s deployment strategy is centered on hybrid cloud and license portability – “Teradata Everywhere™” – which make it easier for customers to buy Teradata in smaller increments and grow consumption as needed, where needed.

Teradata’s market research predicts that more than 90 percent of companies will employ a mix of on-premises and cloud resources by 2020. As such, Teradata emphasizes software consistency across all its deployment options aided by a strong bench of services experts helping organizations evolve to hybrid architectures and derive the most value from their analytic investments.

Teradata Database on Azure

Now let’s turn our attention to the options. Teradata Database on Azure provides four tiers of software with different features at varying price points. You choose a bundle corresponding to what you need for the workload at hand. From low to high, the Teradata Database tiers are:

Developer – Free software for application development
Base – Positioned for low concurrency, entry-level data warehouses
Advanced – Supports high-concurrency, production mixed workloads
Enterprise – Top-of-the-line offer with sophisticated workload management

Azure deployment is offered on multiple Virtual Machine (VM) types available in nearly every region globally, including DS15_v2 and DS14_v2 (Azure premium storage) and G5 and D15_v2 (local storage). There are also other analytical ecosystem components available, such as Teradata QueryGrid which enables you to pull and combine insights from multiple data repositories.

The table below shows the currently available options. Please see Teradata’s website for the latest configurations and software pricing.

Getting going with Teradata on Azure is easy. An Azure Marketplace Solution Template leads you through an intuitive step-by-step provisioning process, and you can be up and running with an entire analytical ecosystem in about an hour. A screenshot in the original post illustrates the comprehensive guided deployment process.

Teradata Demand Chain Management on Azure

It’s no secret that many companies, particularly in retail and consumer goods, have aligned themselves with Azure as their public cloud provider of choice. An additional software as a service (SaaS) option from Teradata for the retail and consumer goods segments is Teradata Demand Chain Management (DCM), an application suite that provides forecasting, fulfillment, and demand chain analytics. 

Teradata DCM employs consumer demand data to develop daily and/or weekly sales forecasts of each item in multiple store locations based on historical performance with seasonal and causal identification. The forecast is then combined with inventory and fulfillment strategies which pull inventory through your supply chain based on expected sales across each location. The result is a reversal in the traditional supply chain flow of information, allowing store and SKU-level demand to serve its proper role at the peak of the pyramid.

Get started with Teradata on Azure today

Teradata brings powerful analytic capabilities to the Azure community. For existing Teradata customers, consistent software in Azure means that you can leverage investments you’ve already made. For anyone else, trying Teradata on Azure is an easy, low-risk way to determine whether it’s right for you. Try it today by deploying Teradata from Azure Marketplace.

To be secure by default, the Teradata deployment does not automatically create public IP addresses for the VMs. After deployment, you can access the Teradata VMs from a jumpbox VM that you already have in the same Virtual Network, via VPN/ExpressRoute, or by manually associating public IP addresses and specific Network Security Groups with the VMs you need to access, such as Viewpoint, Data Mover, Data Stream Controller, Ecosystem Manager, or Server Management.

For additional information, see the Teradata on Azure Getting Started Guide.
Source: Azure