Announcing the general availability of Azure Data Box Disk

Since our preview announcement, hundreds of customers have been moving recurring workloads, media captures from automobiles, incremental transfers for ongoing backups, and archives from remote office/branch offices (ROBO) to Microsoft Azure. We’re excited to announce the general availability of Azure Data Box Disk, an SSD-based solution for offline data transfer to Azure. Data Box Disk is now available in the US, EU, Canada, and Australia, with more countries/regions to be added over time. Also, be sure not to miss the announcement of the public preview for Blob Storage on Azure Data Box below!

Top three reasons customers use Data Box Disk

Easy to order and use: Each disk is an 8 TB SSD. You can order up to five disks per pack from the Azure portal, for a total capacity of 40 TB per order. The small form factor provides the right balance of capacity and portability to collect and transport data in a variety of use cases. Support is available for Windows and Linux.
Fast data transfer: The SSD disks support copy speeds up to USB 3.1, as well as the SATA II and SATA III interfaces. Simply mount the disks as drives and copy files with any tool of choice, such as Robocopy, or just drag and drop.
Security: The disks are encrypted using 128-bit AES encryption and can be locked with your custom passkeys. After the data upload to Azure is complete, the disks are wiped clean in accordance with the NIST SP 800-88 r1 standard.
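Once the disks are unlocked and mounted, the copy itself is an ordinary file copy. As a rough illustration, the sketch below (plain Python with hypothetical paths; Robocopy or drag-and-drop work just as well) mirrors a local folder onto the mounted drive while preserving the directory layout:

```python
import os
import shutil

def mirror_to_disk(source_root, disk_root):
    """Copy every file under source_root to the same relative path under disk_root."""
    copied = []
    for dirpath, _dirnames, filenames in os.walk(source_root):
        relative = os.path.relpath(dirpath, source_root)
        target_dir = os.path.join(disk_root, relative)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            destination = os.path.join(target_dir, name)
            shutil.copy2(os.path.join(dirpath, name), destination)  # copy2 keeps timestamps
            copied.append(destination)
    return copied
```

For example, mirror_to_disk(r"D:\media", r"E:\BlockBlob") would copy into the disk’s precreated BlockBlob folder; the destination folder on the disk determines the target storage type, so check the Data Box Disk documentation for the exact folder names.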

Get started now

Data Box Disk is currently available in the US, EU, Australia, and Canada, and we will continue to expand to more countries/regions in the coming months. To get started, refer to the online tutorial and order your Data Box Disk today. A complete list of supported operating systems can be found in our documentation, “Azure Data Box Disk system requirements.” For a deep dive on the toolset, see our documentation, “Tutorial: Unpack, connect, and unlock Azure Data Box Disk.”

Announcing Blob Storage on Azure Data Box – Public preview now available

We are also launching the public preview of Azure Data Box Blob Storage. When enabled, this feature allows you to copy data to Blob Storage on Data Box using blob service REST APIs. We are working with leading partners in the space to ensure you can use your favorite data copy tools.

For more details on using Blob Storage with Data Box, see our official documentation for “Azure Data Box Blob Storage requirements” and a tutorial on copying data via Azure Data Box Blob Storage REST APIs.

Thank you to everyone who participated in the preview of Azure Data Box Disk, and to those continuing to participate in previews for other products in the Data Box family including Data Box Heavy, Data Box Edge, and Data Box Gateway! In the coming months, we plan to make many enhancements based on your suggestions. Please continue to provide your valuable comments by posting on Azure Feedback.
Source: Azure

Multi-modal topic inferencing from videos

Any organization that has a large media archive struggles with the same challenge – how can we transform our media archives into business value? Media content management is hard, and so is content discovery at scale. Content categorization by topics is an intuitive approach that makes it easier for people to search for the content they need. However, content categorization is usually deductive and doesn’t necessarily appear explicitly in the video. For example, content that is focused on the topic of ‘healthcare’ may not actually have the word ‘healthcare’ present in it, which makes the categorization an even harder problem to solve. Many organizations turn to tagging their content manually, which is expensive, time-consuming, error-prone, requires periodic curation, and is not scalable.

To make this process more consistent and effective, in terms of both cost and time, we are introducing multi-modal topic inferencing in Video Indexer. This new capability intuitively indexes media content using a cross-channel model to automatically infer topics. The model does so by projecting the video concepts onto three different ontologies – IPTC, Wikipedia, and the Video Indexer hierarchical topic ontology (see more information below). The model uses the transcription (spoken words), OCR content (visual text), and celebrities recognized in the video by the Video Indexer facial recognition model. Together, the three signals capture the video concepts from different angles, much like we do when we watch a video.

Topics vs. Keywords

Video Indexer’s legacy keyword extraction model highlights the significant terms in the transcript and the OCR texts. Its added value comes from the unsupervised nature of the algorithm and its invariance to the spoken language and jargon. The main difference between the existing keyword extraction model and the topic inference model is that keywords are explicitly mentioned terms, whereas topics are inferred: higher-level, implicit concepts derived by using a knowledge graph to cluster similar detected concepts together.

Example

Let’s look at the opening keynote of the Microsoft Build 2018 developers’ conference which presented numerous products and features as well as the vision of Microsoft for the near future. The main theme of Microsoft leadership was how AI and ML are infused into the cloud and edge. The video is over three and a half hours long which would take a while to manually label. It was indexed by Video Indexer and yielded the following topics: Technology, Web Development, Word Embeddings, Serverless Computing, Startup Advice and Strategy, Machine Learning, Big Data, Cloud Computing, Visual Studio Code, Software, Companies, Smartphones, Windows 10, Inventions, and Media Technology.

The experience

Let’s continue with the Build keynote example. The topics are available both on the Video Indexer portal on the right as shown in Figure 2, as well as through the API using the Insights JSON like in Figure 3 where both IPTC topics like “Science and Technology” and Wikipedia categories topics like “Software” appear side by side.

Under the hood

The artificial intelligence models applied under the hood in Video Indexer are illustrated in Figure 4. The diagram represents the analysis of a media file from its upload, shown on the left-hand side, to the insights on the far right-hand side. The bottom channel applies multiple computer vision algorithms, such as OCR and face recognition. Above it, you’ll find the audio channel, which starts with fundamental algorithms such as language identification and speech-to-text and builds up to higher-level models such as keyword extraction and topic inference, which are based on natural language processing algorithms. This is a powerful demonstration of how Video Indexer orchestrates multiple AI models in a building-block fashion to infer higher-level concepts from robust and independent input signals from different sources.

Video Indexer applies two models to extract topics. The first is a deep neural network that scores and ranks the topics directly from the raw text, based on a large proprietary dataset. This model maps the transcript of the video to the Video Indexer ontology and IPTC. The second model applies spectral graph algorithms to the named entities mentioned in the video. The algorithm combines structured input signals, such as the Wikipedia IDs of the celebrities recognized in the video, with signals that are unstructured by nature, such as OCR and transcript text. To extract the entities mentioned in the text, we use the Entity Linking Intelligent Service (ELIS). ELIS recognizes named entities in free-form text, so that from this point on we can work with structured data to get the topics. We then build a graph based on the similarity of the entities’ Wikipedia pages and cluster it to capture the different concepts within the video. The final phase ranks the Wikipedia categories according to their posterior probability of being a good topic, and two examples are selected per cluster. The flow is illustrated in Figure 5.
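As a toy illustration of that clustering step (plain Python with invented similarity scores; this is not the actual Video Indexer model), one can build a graph whose nodes are entities, connect pairs whose Wikipedia-based similarity passes a threshold, and treat each connected component as a candidate topic cluster:

```python
def cluster_entities(entities, similarity, threshold=0.5):
    """Group entities into clusters: nodes are entities, edges join pairs whose
    similarity score reaches the threshold, clusters are connected components."""
    # Build adjacency lists for the similarity graph.
    adjacency = {entity: set() for entity in entities}
    for (a, b), score in similarity.items():
        if score >= threshold:
            adjacency[a].add(b)
            adjacency[b].add(a)
    # Find connected components with a simple graph traversal.
    seen, clusters = set(), []
    for start in entities:
        if start in seen:
            continue
        component, queue = set(), [start]
        while queue:
            node = queue.pop()
            if node in component:
                continue
            component.add(node)
            queue.extend(adjacency[node] - component)
        seen |= component
        clusters.append(sorted(component))
    return clusters
```

Here entities with a strong pairwise similarity end up in the same cluster, while unrelated entities form clusters of their own; a ranking step would then pick representative topics per cluster.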

Ontologies

Wikipedia categories – Categories are tags that can be used as topics. They are well edited, and with 1.7 million categories, the value of this high-resolution ontology lies both in its specificity and in its graph-like connections, with links to articles as well as to other categories.

Video Indexer Ontology – The Video Indexer Ontology is a proprietary hierarchical ontology with over 20,000 entries and a maximum depth of three layers.

IPTC – The IPTC ontology is popular among media companies. This hierarchically structured ontology can be explored on IPTC's NewsCodes site. For most Video Indexer ontology topics, Video Indexer also provides a corresponding topic from the first-level layer of IPTC.

The bottom line

Video Indexer’s topic model empowers media users to categorize their content using an intuitive methodology and optimize their content discovery. Multi-modality is a key ingredient for recognizing high-level concepts in video. Using a supervised deep learning-based model along with an unsupervised Wikipedia knowledge graph, Video Indexer can understand the inner relations within media files, and therefore provide a solution that is accurate, efficient, and less expensive than manual categorization.

If you want to convert your media content into business value, check out Video Indexer. If you’ve indexed videos in the past, we encourage you to re-index your files to experience this exciting new feature.

Have questions or feedback? Using a different media ontology and want it in Video Indexer? We would love to hear from you!

Visit our UserVoice to help us prioritize features, or email VISupport@Microsoft.com with any questions.

Streamlined development experience with Azure Blockchain Workbench 1.6.0

We’re happy to announce the release of Azure Blockchain Workbench 1.6.0. It includes new features such as application versioning, updated messaging, and streamlined smart contract development. You can deploy a new instance of Workbench through the Azure portal or upgrade existing deployments to 1.6.0 using our upgrade script.

Please note the breaking changes section, as the removal of the WorkbenchBase base class and the changes to the outbound messaging format will require modifications to your existing applications.

This update includes the following improvements:

Application versioning

One of the most popular feature requests has been an easy way to manage and version your Workbench applications, instead of having to manually change and update them during development.

We’ve continued to improve the Workbench development story: 1.6.0 adds support for application versioning via the web app as well as the REST API. You can upload new versions directly from the web application by clicking “Add version.” Note that if an application role name changes, the role assignment will not be carried over to the new version.

You can also view the application version history. To view and access older versions, select the application and click “version history” in the command bar. Note that, by default, older versions are read-only. If you would like to interact with an older version, you can explicitly enable it.

New egress messaging API

Workbench provides many integration and extension points, including a REST API and a messaging API. The REST API provides developers a way to integrate with blockchain applications. The messaging API is designed for system-to-system integrations.

In our previous release, we enabled more scenarios with a new input messaging API. In 1.6.0, we have implemented an enhanced and updated output messaging API, which publishes blockchain events via Azure Event Grid and Azure Service Bus. This enables downstream consumers to take actions based on these events and messages, such as sending email notifications when there are updates on relevant contracts on the blockchain, or triggering events in existing enterprise resource planning (ERP) systems.

Here is an example of a contract information message with the new output messaging API. You’ll get information about the block, a list of modifying transactions for the contract, and information about the contract itself, such as the contract ID and contract properties. You also get information on whether the contract was newly created or updated.

{
  "blockId": 123,
  "blockhash": "0x03a39411e25e25b47d0ec6433b73b488554a4a5f6b1a253e0ac8a200d13f70e3",
  "modifyingTransactions": [
    {
      "transactionId": 234,
      "transactionHash": "0x5c1fddea83bf19d719e52a935ec8620437a0a6bdaa00ecb7c3d852cf92e18bdd",
      "from": "0xd85e7262dd96f3b8a48a8aaf3dcdda90f60dadb1",
      "to": "0xf8559473b3c7197d59212b401f5a9f07b4299e29"
    },
    {
      "transactionId": 235,
      "transactionHash": "0xa4d9c95b581f299e41b8cc193dd742ef5a1d3a4ddf97bd11b80d123fec27506e",
      "from": "0xd85e7262dd96f3b8a48a8aaf3dcdda90f60dadb1",
      "to": "0xf8559473b3c7197d59212b401f5a9f07b4299e29"
    }
  ],
  "contractId": 111,
  "contractLedgerIdentifier": "0xf8559473b3c7197d59212b401f5a9f07b4299e29",
  "contractProperties": [
    {
      "workflowPropertyId": 1,
      "name": "State",
      "value": "0"
    },
    {
      "workflowPropertyId": 2,
      "name": "Description",
      "value": "1969 Dodge Charger"
    },
    {
      "workflowPropertyId": 3,
      "name": "AskingPrice",
      "value": "30000"
    },
    {
      "workflowPropertyId": 4,
      "name": "OfferPrice",
      "value": "0"
    },
    {
      "workflowPropertyId": 5,
      "name": "InstanceOwner",
      "value": "0x9a8DDaCa9B7488683A4d62d0817E965E8f248398"
    }
  ],
  "isNewContract": false,
  "connectionId": 1,
  "messageSchemaVersion": "1.0.0",
  "messageName": "ContractMessage",
  "additionalInformation": {}
}
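A downstream consumer of these messages might start by flattening one into a convenient summary before acting on it. The sketch below (plain Python; it follows the sample shape above, but the summarization itself is our own illustration) shows the idea:

```python
import json

def summarize_contract_message(message_json):
    """Extract the contract ID, update type, transaction count, and a
    name -> value dict of properties from a Workbench ContractMessage payload."""
    message = json.loads(message_json)
    properties = {p["name"]: p["value"] for p in message.get("contractProperties", [])}
    return {
        "contractId": message["contractId"],
        "isNewContract": message["isNewContract"],
        "transactionCount": len(message.get("modifyingTransactions", [])),
        "properties": properties,
    }
```

Feeding it the sample above would give, for example, a properties dict where "AskingPrice" maps to "30000", which a consumer could use to decide whether to send a notification.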

Read more about the newly designed messaging API in our documentation “Azure Blockchain Workbench messaging integration.” Note that this redesign of the output messaging model will impact existing integrations you have done.

WorkbenchBase class is no longer needed in contract code

If you have been using Workbench, you will know that there is a specific class, called WorkbenchBase, that you need to include in your contract code. This class enabled Workbench to create and update your specified contract. When developing custom Workbench applications, you would also have to call functions defined in the WorkbenchBase class to notify Workbench that a contract had been created or updated.

With 1.6.0, the code serving the same purpose as WorkbenchBase is autogenerated for you when you upload your contract code. You now have a simpler experience when developing custom Workbench applications and will no longer hit bugs or validation errors related to using WorkbenchBase. See our updated samples, which have WorkbenchBase removed.

This means that you no longer need to include the WorkbenchBase class nor any of the contract update and contract created functions defined in the class. To update your older Workbench applications to support this new version, you will need to change a few items in your contract code files:

Remove the WorkbenchBase class.
Remove calls to functions defined in the WorkbenchBase class (ContractCreated and ContractUpdated).

If you upload an application with WorkbenchBase included, you will get a validation error and will not be able to successfully upload until it is removed. For customers upgrading to 1.6.0 from an earlier version, your existing Workbench applications will be upgraded automatically for you. Once you start uploading new versions, they will need to be in the 1.6.0 format.
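Since uploads containing WorkbenchBase are now rejected, it can be handy to scan your Solidity files for leftover references before uploading. The helper below is a simple illustrative sketch of our own, not part of Workbench:

```python
def find_workbenchbase_references(solidity_source):
    """Return the 1-based line numbers that still mention WorkbenchBase,
    so the contract file can be cleaned up before uploading to Workbench 1.6.0."""
    return [
        lineno
        for lineno, line in enumerate(solidity_source.splitlines(), start=1)
        if "WorkbenchBase" in line
    ]
```

Running it over each .sol file and fixing any reported lines (the class declaration, inheritance, and the ContractCreated/ContractUpdated calls) avoids a round-trip through the upload validation error.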

Get available updates directly from within Workbench

Whenever a Workbench update is released, we announce the updates via the Azure blog and post release notes in our GitHub. If you’re not actively monitoring these announcements, it can be difficult to figure out whether or not you are on the latest version of Workbench. You might be running into issues while developing which have already been fixed by our team with the latest release.

We have now added the capability to view information for the latest updates directly within the Workbench UI. If there is an update available, you will be able to view the changes available in the newest release and update directly from the UI.

Breaking changes in 1.6.0

WorkbenchBase related code generation: Before 1.6.0, the WorkbenchBase class was needed because it defined events indicating creation and update of Blockchain Workbench contracts. With this change, you no longer need to include it in your contract code file, as Workbench will automatically generate the code for you. Note that contracts containing WorkbenchBase in the Solidity code will be rejected when uploaded.

Updated outbound messaging API: Workbench has a messaging API for system-to-system integrations. The outbound messaging API has been redesigned. The new schema will impact the existing integration work you have done with the current messaging API. If you want to use the new messaging API, you will need to update your integration-specific code.

The names of the Service Bus queues and topics have been changed in this release. Any code that points to the Service Bus will need to be updated to work with Workbench version 1.6.0.

ingressQueue – the input queue on which request messages arrive.
egressTopic – the output topic on which update and information messages are sent.
The messages delivered in version 1.6.0 are in a different format. Existing code that interrogates the messages from the messaging API and takes action based on their content will need to be updated. You can read more about the newly designed messaging API in our documentation “Azure Blockchain Workbench messaging integration.”

Workbench application sample updates: All Workbench application sample code has been updated, since the WorkbenchBase class is no longer needed in contract code. If you are on an older version of Workbench and use the latest samples on GitHub, or vice versa, you will see errors. Upgrade to the latest version of Workbench if you want to use the samples.

You can stay up to date on Azure Blockchain by following us on Twitter @MSFTBlockchain. Please use our Blockchain User Voice to provide feedback and suggest features/ideas for Workbench. Your input is helping make this a great service. We look forward to hearing from you.

New Azure Migrate and Azure Site Recovery enhancements for cloud migration

We are continuously enhancing our offerings to help you in your digital transformation journey to the cloud. You can read more about these offerings in the blog, “Three reasons why Windows Server and SQL Server customers continue to choose Azure.” In this blog, we will go over some of the new features added to Microsoft Azure Migrate and Azure Site Recovery that will help you in your lift and shift migration journey to Azure.

Azure Migrate

Azure Migrate allows you to discover your on-premises environment and plan your migration to Azure. Based on popular demand, we have now enabled Azure Migrate in two new geographies, Azure Government and Europe. Support for other Azure geographies will be enabled in the future.

Below is the list of regions, by Azure geography, where the discovery and assessment metadata is stored.

United States – West Central US, East US
Europe – North Europe, West Europe
Azure Government – US Gov Virginia

When you create a migration project in the Azure portal, the region for metadata storage is selected automatically. For example, if you create a project in the United States, we will select either West Central US or East US as the region. If you need to store the metadata in a specific region within the geography, you can use our REST APIs to create the migration project and specify the region in the API request.
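As an illustration of what such an API call looks like, the sketch below assembles the request URL and body for creating a project pinned to an explicit region (plain Python; the resource path and api-version shown are illustrative assumptions, so consult the Azure Migrate REST API reference for the current values):

```python
def build_create_project_request(subscription_id, resource_group, project_name, location):
    """Assemble the PUT URL and JSON body for creating an Azure Migrate project
    in a specific metadata region."""
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Migrate/projects"
        f"/{project_name}?api-version=2018-02-02"  # api-version is an assumption
    )
    body = {"location": location, "properties": {}}
    return url, body
```

You would then send the resulting PUT request with your preferred HTTP client and an Azure AD bearer token.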

Note that the geography selection does not restrict you from planning your migration for other Azure target regions. Azure Migrate allows you to specify more than 30 Azure target regions for migration planning. You can learn more by visiting our documentation, “Customize an assessment.”

Azure Site Recovery

Azure Site Recovery (ASR) helps you migrate your on-premises virtual machines (VMs) to IaaS VMs in Azure; this is the lift-and-shift migration. We are listening to your feedback and have recently made enhancements in ASR to make your migration journey even smoother. The recent enhancements include:

Support for physical servers with UEFI boot type: VMs with UEFI boot type are not supported in Azure. However, ASR allows you to migrate such on-premises Windows servers to Azure by converting their boot type to BIOS during migration. Previously, ASR supported boot-type conversion only for virtual machines. With the latest update, ASR now also supports migration of physical servers with UEFI boot type. The support is restricted to Windows machines only (Windows Server 2012 R2 and above).
Linux disk support: Previously, ASR had certain restrictions regarding directories on Linux machines: it required directories such as /(root), /boot, /usr, and others to be on the same OS disk of the VM in order to migrate it. Additionally, it did not support VMs that had /boot on an LVM volume rather than on a disk partition. With the latest update, ASR supports directories on different disks as well as /boot on an LVM volume. This essentially means ASR allows migration of Linux VMs with LVM-managed OS and data disks, and with directories spread across multiple disks. You can learn more by visiting our documentation, “Support matrix for disaster recovery of VMware VMs and physical servers to Azure.”
Migration from anywhere: ASR helps you migrate any kind of server to Azure, no matter where it runs, whether in a private or public cloud. We are happy to announce that the guest OS coverage for AWS has expanded, and ASR now supports the following operating systems for migration of AWS VMs to Azure.

Supported operating systems for AWS VMs:

RHEL 6.5+ (new)
RHEL 7.0+ (new)
CentOS 6.5+ (new)
CentOS 7.0+ (new)
Windows Server 2016
Windows Server 2012 R2
Windows Server 2012
64-bit version of Windows Server 2008 R2 SP1 or later

Learn more about how you can migrate from AWS to Azure in our documentation, “Migrate Amazon Web Services (AWS) VMs to Azure.”

VMware and physical servers: Get more details on the supported OS versions by reading our documentation, “Support matrix for disaster recovery of VMware VMs and physical servers to Azure.”

Hyper-V: Guest OS agnostic.

We are listening and continuously enhancing these services. If you have any feedback or have any ideas, do use our UserVoice forums for Azure Migrate and ASR and let us know.

If you are new to these tools, get started at the Azure Migration Center. Make sure you also start your journey right by taking the free Assessing and Planning for Azure Migration course offered by Microsoft.

Gain insight into your Azure Cosmos DB data with QlikView and Qlik Sense

Connecting data from various sources in a unified view can produce valuable insights that are otherwise invisible to the human eye and brain. As Azure Cosmos DB allows for collecting the data from various sources in various formats, the ability to mix and match this data becomes even more important for empowering your businesses with additional knowledge and intelligence.

This is what Qlik’s analytics and visualization products, QlikView and Qlik Sense, have been able to do for years, and now they support Azure Cosmos DB as a first-class data source. The following summarizes the connectivity options you have for getting Azure Cosmos DB data into QlikView and Qlik Sense.

Core (SQL) API via REST – Qlik detailed instruction: “Connecting to Azure CosmosDB SQL API from Qlik Sense using the built-in REST Connector.”

Core (SQL) API via the ODBC driver – Qlik detailed instruction: “Connecting to Azure Cosmos DB SQL API from Qlik Sense using the Azure Cosmos DB ODBC Connector”; Qlik live demo: “Azure Cosmos DB ODBC – Video Game Sales.”

MongoDB API via the MongoDB Wire Protocol – Qlik detailed instruction: “Connecting to Azure Cosmos DB Mongo API from Qlik Sense using the Qlik MongoDB Connector”; Qlik live demo: “Azure Cosmos DB via Mongo DB API using Qlik Connector.”

MongoDB API via the Qlik gRPC connector – same as the MongoDB Wire Protocol.

Qlik Sense and QlikView are data visualization tools that combine the data from different sources into a single view. Qlik Sense indexes every possible relationship between entities in the data so that you can gain immediate insights into it without making the connections manually. You can visualize Azure Cosmos DB data by using Qlik Sense.

Here is a step-by-step guide on how to set up an Azure Cosmos DB account and configure the ODBC connection to it in Qlik Sense.

1. Create a Core (SQL) API account in Azure Cosmos DB.

2. Create a database with a collection in it. Keep in mind that Azure Cosmos DB allows you to provision throughput for your databases and collections, as described in the article “Request units in Azure Cosmos DB.”

3. Import the data. There are many ways to load data into an Azure Cosmos DB collection; the simplest is to use the Azure Cosmos DB Data Migration tool. You can find the connection string on the Keys page in the portal.

4. Next, in Qlik Sense, install an ODBC driver for Azure Cosmos DB and configure it following the instructions provided in our documentation, “Connect to Azure Cosmos DB using BI analytics tools with the ODBC driver.”

5. Open your app in Qlik Sense and click Add data from files and other sources. Select ODBC and configure the ODBC connection you created in the previous step.

6. Next, choose the database and the collection with the imported data.

7. Add data to your app and configure your data insight visualizations. The following picture shows an example of the resulting view.
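Step 3 above mentions the connection string from the portal’s Keys page. A Cosmos DB connection string is a set of key=value pairs separated by semicolons; the helper below (plain Python, purely illustrative) splits one into its parts so you can hand AccountEndpoint and AccountKey to the migration tool or a driver configuration:

```python
def parse_connection_string(connection_string):
    """Split 'Key1=Value1;Key2=Value2;...' into a dict, tolerating '=' inside
    values (account keys are base64-encoded and often end with '==')."""
    parts = {}
    for segment in connection_string.split(";"):
        if not segment:
            continue  # skip the empty segment left by a trailing semicolon
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts
```

Using partition rather than split ensures only the first "=" separates the key from the value, which matters for base64 account keys.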

To learn more about Qlik tools and how to use them with Azure Cosmos DB please see the following resources.

Connect Qlik Sense to Azure Cosmos DB using our documentation, “Connect Qlik Sense to Azure Cosmos DB and visualize your data” to help guide you.

Please note that the above instructions and screenshots apply to Qlik Sense, but QlikView can be connected to Azure Cosmos DB in a similar way. For more information, visit the product pages for Qlik Sense, QlikView, and Qlik Desktop.

Cognitive Services Speech SDK 1.2 – December update – Python, Node.js/NPM and other improvements

Developers can now access the latest improvements to Cognitive Services Speech Service including a new Python API and more. Details below.

Read the updated Speech Services documentation to get started today.

What’s new

Python API for Speech Service

Python 3.5 and later versions on the Windows and Linux operating systems are supported.
Python is the first language that the Speech Service supports on macOS (version 10.12 and later).
Python modules can be conveniently installed from PyPI.

Node.js support

Support for Node.js is now available, in addition to support for JavaScript in the browser. Through the npm package manager, developers can install the Speech Service module and its prerequisites.

The JavaScript version of the Speech Service is now also available as an open-source project on GitHub.

Linux support

Support for Ubuntu 18.04 is now available in addition to pre-existing support for Ubuntu 16.04.

New features by popular demand

Lightweight SDK for greater performance

By reducing the number of required concurrent threads, mutexes, and locks, Speech Services now offers a more lightweight SDK with enhanced error reporting.

Control of server connectivity and connection status

A newly added connection object enables control over when the SDK connects to the Speech Service. You can also now subscribe to receive connection notifications that report the exact time of server connection and termination.

Unlimited audio session length support

For JavaScript, length restrictions for recorded audio sessions have been lifted. The SDK buffers the audio file and then automatically reconnects and retransmits audio data to the service.

Support for ProGuard during Android APK generation is also now available.

For more details and examples for how your business can benefit from the new functionality for Speech Services, check out release notes and samples in the GitHub sample repository for Speech Services.

Teradata to Azure SQL Data Warehouse migration guide

With the increasing benefits of cloud-based data warehouses, there has been a surge in the number of customers migrating from traditional on-premises data warehouses to the cloud. Microsoft Azure SQL Data Warehouse (SQL DW) offers the best price-performance when compared to its cloud-based data warehouse competitors. Teradata, a relational database management system, is one of the legacy on-premises systems that customers are looking to migrate from.

A Teradata to SQL DW migration involves multiple steps. These steps include analyzing the existing workload, generating the relevant schema models, and performing the ETL operation. The whitepaper discussed here provides guidance for these migrations, with emphasis on the migration workflow, the architecture, technical design considerations, and best practices.

Migration Phases

The Teradata migration should pivot on the following six phases. A proof of concept is optional, though recommended. With the benefit of Azure, you can quickly provision Azure SQL Data Warehouse instances for your development team to start business object migration before the data is migrated, and so speed up the migration process.

Phase one – Fact finding

Through a question-and-answer session, you define the inputs and outputs for the migration project.

Phase two – Defining success criteria for proof of concept (POC)

Taking the answers from phase one, you identify a workload for running a POC to validate the outputs required and run the following phases as a POC.

Phase three – Data layer mapping options

This phase is about mapping the data you have in Teradata to the data layout you will create in Azure SQL Data Warehouse. Common scenarios include data type mapping, date and time formats, and more.
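Such a mapping is usually captured as a simple lookup table. The sketch below shows the shape of one (plain Python; the entries are a small illustrative subset, not an authoritative mapping, so verify each against the whitepaper):

```python
# Illustrative subset of Teradata -> Azure SQL Data Warehouse type mappings.
TYPE_MAP = {
    "BYTEINT": "SMALLINT",      # SQL DW has no 1-byte signed integer type
    "INTEGER": "INT",
    "DECIMAL": "DECIMAL",
    "VARCHAR": "VARCHAR",
    "TIMESTAMP": "DATETIME2",
    "CLOB": "VARCHAR(8000)",    # large objects need an explicit sizing decision
}

def map_column_type(teradata_type):
    """Return the target SQL DW type for a Teradata type, or None when the
    mapping needs a manual decision."""
    return TYPE_MAP.get(teradata_type.upper())
```

Types that return None (for example, Teradata PERIOD types) flag columns where the schema migration needs a case-by-case design choice.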

Phase four – Data modeling

Once you’ve defined the data mappings, phase four concentrates on how to tune Azure SQL Data Warehouse to provide the best performance for the data you will land in it.

Phase five – Identify migration paths

What is the path of least resistance? What is the quickest path given your cloud maturity? Phase five helps describe the options open to you and then for you to decide on the path you wish to take.

Phase six – Execution of migration

Migrating your Teradata data to SQL Data Warehouse involves a series of steps. These steps are executed in three logical stages: preparation, metadata migration, and data migration.

Migration solution

To ingest data, you need a basic cloud data warehouse setup for moving data from your on-premises solution to Azure SQL Data Warehouse, and to enable the development team to build Azure Analysis Services cubes once the majority of the data is loaded.

Azure Data Factory pipelines are used to ingest and move data through the store, prep, and train stages.
Extract and load files via PolyBase into the staging schema on Azure SQL DW.
Transform data through the staging, source (ODS), EDW, and semantic schemas on Azure SQL DW.
Azure Analysis Services is used as the semantic layer to serve thousands of end users and to scale out Azure SQL DW concurrency.
Build operational reports and analytical dashboards on top of Azure Analysis Services to serve thousands of end users via Power BI.

For more insight into how to approach a Teradata to Azure SQL Data Warehouse migration check the following whitepaper, “Migrating from Teradata to Azure SQL Data Warehouse.”

This whitepaper is broken into sections which detail the migration phases, the preparation required for data migration including schema migration, migration of the business logic, the actual data migration approach, and testing strategy.

The scripts that would be useful for the migration are available on GitHub under Teradata to Azure SQL DW Scripts.
Source: Azure

To infinity and beyond: The definitive guide to scaling 10k VMs on Azure

Every platform has limits: workstations and physical servers have resource boundaries, APIs may be rate-limited, and even the perceived endlessness of the public cloud enforces limitations that protect the platform from overuse or misuse. You can learn more about these limitations by visiting our documentation, “Azure subscription and service limits, quotas, and constraints.” When working on scenarios that take platforms to their extreme, those limits become real, and therefore thought should be put into overcoming them.

The following post includes essential notes from my work with Mike Kiernan, Mayur Dhondekar, and Idan Shahar. It also covers several iterations in which we try to reach 10K virtual machines running on Microsoft Azure, and explores the pros and cons of the different implementations.

Load tests at cloud scale

Load and stress tests before moving a new version to production are critical on the one hand, but pose a real challenge for IT on the other. This is because they require a considerable amount of resources to be available for only a short time every release cycle. Infrastructure purchased for this purpose doesn’t justify its cost over extended periods, making this a perfect use case for a public cloud platform where you pay only for what you use.

This post is in fact based on a customer we’ve been working with, and discusses the challenges we encountered. However, the provided solution is general enough to be used for other use cases involving large clusters of VMs in Azure, such as:

Scaling beyond the limits of a single virtual machine scale set (VMSS), where the cluster is static in size once provisioned (for example, HPC clusters).
DDoS simulation – please note, in this case ethics must be practiced and you should own the targeted endpoint; otherwise you risk liability for damages.

The process

At a high level, to provision and initialize a cluster of x VMs that “do something” the following steps should be taken:

Start from a base image.
Provision x VMs from the base image.
Download and install required software and data to each VM.
Start the “do-something” process on each VM.

However, given the targeted hyper-scale there are a number of critical elements that must be taken into account. It quickly becomes clear that the concerns of implementing such scenarios are as much about management, cost optimization, and avoiding platform limits as they are about infrastructure and the provisioning process.

How do you manage 10K VMs? How do you even count them?
What is the origin of the data, and can it handle the load of 10K concurrent downloads?
How would you know that the process completes?
Can the cloud provide 10K VMs in a single region, and if so, which region?
How long would it take to provision the cluster and reach full scale?
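
One of the first constraints you hit is that a single virtual machine scale set caps out well below 10K instances. The sketch below splits the target across multiple scale sets; the per-scale-set capacity used here (1,000 VMs) is an assumption to verify against the Azure limits documentation:

```python
# Sketch: splitting a 10K-VM target across multiple scale sets.
# The per-scale-set capacity is an assumption (1,000 VMs at the time of
# writing) -- verify it against the Azure subscription limits docs.

def plan_scale_sets(total_vms: int, per_vmss: int = 1000) -> list:
    """Return the instance count for each scale set needed."""
    if total_vms <= 0:
        return []
    full, remainder = divmod(total_vms, per_vmss)
    plan = [per_vmss] * full
    if remainder:
        plan.append(remainder)
    return plan

print(plan_scale_sets(10_000))       # ten full scale sets of 1,000
print(plan_scale_sets(10_500, 600))  # 17 full sets of 600 plus one of 300
```

Counting running VMs then becomes a matter of summing instance counts across the scale sets, rather than enumerating 10K individual machines.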

The next section describes a load-test scenario implemented using different services and tackling the questions raised previously with the following goals:

Generate stress on a backend service located in some other datacenter using client machines (VMs) in Azure.
Trigger the process using HTTP POST.
Avoid manual steps, pre-requisites, and custom images which may be outdated over time.
Minimal time to reach a full-scale cluster.

The solution outline

Read more about all the details of the solution in the blog post, “To Infinity and Beyond (or: The Definitive Guide to Scaling 10k VMs on Azure).” You can also see the solution code and deployment scripts on GitHub.

Azure.Source – Volume 64

Updates

Azure Migrate is now available in Azure Government

The Azure Migrate service assesses on-premises workloads for migration to Azure. The service assesses the migration suitability of on-premises machines, performs performance-based sizing, and provides cost estimations for running on-premises machines in Azure. If you're contemplating lift-and-shift migrations, or are in the early assessment stages of migration, this service is for you. Azure Migrate now supports Azure Government as a migration project location. This means that you can store your discovered metadata in an Azure Government region (US Gov Virginia). In addition to Azure Government, Azure Migrate supports storing the metadata in United States and Europe geographies. Support for other Azure geographies is planned for the future.

Python 2.7 Now Available for App Service on Linux

Last month, built-in Python images for Azure App Service on Linux became available in public preview for Python 3.7 and 3.6. Python 2.7 is now available in the public preview of Python on Azure App Service (Linux). When you use the official images for Python on App Service on Linux, the platform automatically installs the dependencies specified in the requirements.txt file.
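
For context, App Service reads that dependency list from a requirements.txt file at the root of the app. A minimal, hypothetical example (the packages and version ranges are illustrative only, not recommendations):

```
# requirements.txt -- App Service on Linux installs these automatically.
# The packages and version ranges below are illustrative examples only.
Django>=1.11,<2.0
gunicorn
```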

If you’re interested in building with Python on Azure, be sure to check out the four-part Python on Azure series with Nina Zakharenko and Carlton Gibson to get an introduction to building and running Django apps with Visual Studio Code and Azure Web Apps, setting up CI/CD pipelines with Azure Pipelines, and running serverless Django apps with Azure Functions.

News

Microsoft Certified Azure Developer Associate

Microsoft Azure Developers design, build, test, and maintain cloud solutions, such as applications and services, partnering with cloud solution architects, cloud DBAs, cloud administrators, and clients to implement these solutions. Based on feedback received about the Azure Developer Associate certification beta exams, AZ-200: Microsoft Azure Developer Core Solutions and AZ-201: Microsoft Azure Developer Advanced Solutions, the decision was taken to simplify the path and transition to a single exam, AZ-203: Developing Solutions for Microsoft Azure. By the way, Exam AZ-900: Microsoft Azure Fundamentals is an optional first step in learning about cloud services and how those concepts are exemplified by Microsoft Azure. You can take AZ-900 as a precursor to AZ-203, but it is not a prerequisite for it.

Technical content

Introduction to Cloud Storage for Developers

This introductory level post covers data storage options in a platform-agnostic way, with a focus on Azure Storage examples, to help developers understand that traditional NoSQL and SQL databases aren't the only option. Jeremy Likness shares when and why cloud storage is a better option, definitions for various storage terms and concepts, simple ways to get started, and resources to learn more.

KubeCon 2018: Tutorial – Deploying Windows Apps with Kubernetes, Draft, and Helm

Curious about deploying Windows apps to Kubernetes? Would you like to use Draft and Helm, just as you would if you were deploying Linux apps or containers? Check out this blog post from Jessica Deen, which includes her session from KubeCon 2018.

Apache Spark: Tips and Tricks for Better Performance

Building on her "Apache Spark Deep Dive" exploration, Adi Polak shares her top five tips for improving Spark performance and writing better queries — from why you should avoid custom user defined functions to understanding and optimizing your cloud configuration. In her next post, she’ll dive into how to use Apache Spark on Azure, including real life use cases.

Using Object Detection for Complex Image Classification Scenarios Part 1: The AI Computer Vision Revolution

AI and ML are theoretically as easy as consuming a few APIs, but how do you apply them to real business scenarios? In this series, you’ll walk through how a major Central Eastern European candy company uses computer vision, AI, and ML to solve a problem: automatically validating that store shelves are properly stocked, eliminating costly audits and manual processes. By the end of the series, you’ll understand how to compare and contrast different Machine Learning approaches and technologies, understand available services and tools, and build, train, and deploy your own custom models to the cloud and remote clusters.

Azure shows

Episode 260 – Azure Sphere | The Azure Podcast

In addition to the usual updates, Cale, Russell and Sujit break down the Azure Sphere offering from Microsoft and what it means for the future of IoT development.


Interning in Azure Engineering and the Visual Studio Code extension for ACR Build | Azure Friday

What is it like to intern at Microsoft? Scott Hanselman meets with three interns from the Microsoft Explorer Program (a cross-discipline internship designed for college freshmen and sophomores) to talk about their experience working on the Azure Container Registry and their contribution of ACR Build and Task capabilities to the Visual Studio Code Docker Extension.

Visual Azure Provisioning From a Whiteboard | The Xamarin Show

On this week's Xamarin Show, James is joined by good friend Christos Matskas, who shows off a beautiful Xamarin application that is infused with AI to generate a full Azure backend just by drawing pictures on a whiteboard. You don't want to miss this mind-blowing demo and walkthrough of the code.

How the Azure DevOps teams plan with Aaron Bjork | DevOps Interviews

In this interview, Donovan Brown interviews Group Program Manager Aaron Bjork about Agile Planning.

IPFS in Azure | Block Talk

This episode introduces the use of IPFS (InterPlanetary File System) in a consortium setting. It shows how this technology can help remove the centralization of storage that is not part of the blocks in the blockchain, along with a short demonstration of how the Azure Marketplace offering for IPFS can make creating these storage networks simple.

Live demo of BeSense, an application built by Winvision on Azure Digital Twins | Internet of Things Show

Winvision has leveraged the spatial intelligence capabilities of Azure Digital Twins to build BeSense, a smart building application that provides real-time data to optimize space utilization and occupant experience. Remco Ploeg, a Solution Architect at Winvision, demos the application.

How to add logic to your Testing in Production sites with PowerShell | Azure Tips and Tricks

Learn how to add additional logic by using PowerShell to automatically distribute the load between your production and deployment slot sites with the Testing in Production feature.

Gopinath Chigakkagari on Key Optimizations for Azure Pipelines | The Azure DevOps Podcast

In this episode, Jeffrey Palermo is joined by his guest, Gopinath Chigakkagari. Gopinath hits on some fascinating points and topics about Azure Pipelines, including (but not limited to): what listeners should be looking forward to, some highlights of the new optimizations on the platform, key Azure-specific offerings, as well as his recommendations on what listeners should follow up on for more information!


Events

Microsoft Ignite | The Tour

Learn new ways to code, optimize your cloud infrastructure, and modernize your organization with deep technical training. Join us at the place where developers and tech professionals continue learning alongside experts. Explore the latest developer tools and cloud technologies and learn how to put your skills to work in new areas. Connect with our community to gain practical insights and best practices on the future of cloud development, data, IT, and business intelligence. Find a city near you and register today.

SolarWinds Lab #72: Two Geeks and a Goddess II: Azure the Easy Way

Wednesday, January 16 – 1:00-2:00 PM Central (UTC/GMT -6)

If there’s one takeaway from 2018, it’s that most organizations now run at least some production workloads in somebody else’s data center, especially Azure — and we're here to show you how to monitor those cloud resources with the tools you already have. Join us – Phoummala Schmitt (Microsoft Cloud Advocate), Thomas LaRock (Head Geek and 10-year Microsoft MVP), and Patrick Hubbard (Head Geek) – for a special hybrid IT/cloud operations episode. We'll have live chat and experts on hand, so come with your Azure operations questions. You'll learn how to break down remote monitoring barriers, get a telemetry plan in place before migrating your apps, manage cloud costs, and throttle dev sprawl. We'll also cover the new Azure and Office 365 Server & Application Monitor (SAM) templates and account activation.

Time Series Forecasting: Build and Deploy your Machine Learning Models to Forecast the Future

Wednesday, January 23 – 8:00-11:00 AM Pacific (UTC/GMT -8)

In this O'Reilly three-hour live training course, Francesca Lazzeri walks you through the core steps for building, training, and deploying your time series forecasting models. First, you’ll learn common time series forecast methods, like simple exponential smoothing and recurrent neural networks (RNN), then get hands-on experience, using machine learning components like Keras, TensorFlow, and other open source Python packages to apply models to a real-world scenario.
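
Simple exponential smoothing, one of the methods the course covers, is easy to sketch from scratch: each smoothed value is a weighted average of the current observation and the previous smoothed value, s_t = α·x_t + (1 − α)·s_{t−1}. The implementation below is a minimal illustration, not course material:

```python
# Minimal simple exponential smoothing. Each smoothed value is a weighted
# average of the current observation and the previous smoothed value:
#   s_t = alpha * x_t + (1 - alpha) * s_{t-1}

def exponential_smoothing(series: list, alpha: float) -> list:
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    if not series:
        return []
    smoothed = [series[0]]  # initialize with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

def forecast_next(series: list, alpha: float) -> float:
    """One-step-ahead forecast: the last smoothed value."""
    return exponential_smoothing(series, alpha)[-1]

print(exponential_smoothing([10, 12, 11, 13], 0.5))  # [10, 11.0, 11.0, 12.0]
```

Higher α weights recent observations more heavily; the course then moves on to models like RNNs that can capture structure this simple recurrence cannot.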

A Cloud Guru | Azure This Week – 4 January 2019

This time on Azure This Week, Lars talks about 2019 predictions for Azure, changes and new certificates for Azure, and a new version of the Bot Framework SDK.


Azure.Source – Volume 63

Now in preview

Transparent Data Encryption (TDE) with customer managed keys for Managed Instance

Announces the public preview of Transparent Data Encryption (TDE) with Bring Your Own Key (BYOK) support for Microsoft Azure SQL Database Managed Instance. Azure SQL Database Managed Instance is a new deployment option in SQL Database that combines the best of on-premises SQL Server with the operational and financial benefits of an intelligent, fully-managed relational database service. TDE with BYOK support has been generally available for single databases and elastic pools since April 2018. TDE with BYOK support is offered in addition to TDE with service managed keys which is enabled on all new Azure SQL Databases, single databases, pools, and managed instances by default.

Transforming your data in Azure SQL Database to columnstore format

Announces a public preview of a new feature in Azure SQL Database, both in logical server and Managed Instance, called CLUSTERED COLUMNSTORE ONLINE INDEX build. This operation enables you to migrate data stored in row-store format to columnstore format and to maintain your columnstore data structures with minimal downtime for your workload. Learn why this format is valuable and how it can compress data and boost the performance of your analytical queries. This feature is currently in preview in all flavors of Azure SQL Database, including logical servers, elastic pools, and Managed Instances.

Also in preview

Cognitive Services Speech Services neural text-to-speech capability is in preview

Now generally available

Virtual Network Service Endpoints for serverless messaging and big data

Virtual Networks and Firewall rules for both Azure Event Hubs and Azure Service Bus are now generally available. This feature adds to the security and control you have over your cloud environments. Now, traffic from your virtual network to your Azure Service Bus Premium namespaces and Standard and Dedicated Azure Event Hubs namespaces can be kept secure from public Internet access and completely private on the Azure backbone network. Customers dealing with PII (Financial Services, Insurance, etc.) or looking to further secure access to their cloud visible resources will benefit the most from this feature.

Also now available

Migrating to the Az.ApiManagement PowerShell module
Azure Monitor for Containers agent updates
Azure IoT Edge 1.0.5 release
Azure Cosmos DB emulator support for Cassandra API
Dev/test pricing for Azure SQL Database Managed Instance is now available
Support for SQL to Azure SQL DB Managed Instance online migrations
Premium tier now available for the Azure Database Migration Service
SQL Data Warehouse integration with Informatica iPaaS on Azure
Azure DevTest Labs: CIS Windows Server 2016 Benchmark L2 available in your lab
Power BI service December update
Power BI Desktop December Update
Power BI Embedded new workspace experience creation API
Power BI Embedded zero-downtime capacity scale
Azure Resource Health monitoring for Power BI Embedded
Power BI Embedded capacity metrics to monitor workloads
Self-service big data prep (dataflows) available in Power BI Embedded

News and announcements

Microsoft open sources Trill to deliver insights on a trillion events a day

An internal Microsoft project known as Trill, for processing “a trillion events per day,” is now being open sourced on GitHub, addressing the increasingly common business requirement of processing massive amounts of data each millisecond. Trill started as a research project at Microsoft Research in 2012 and has since been extensively described in research papers. The roots of Trill’s language lie in Microsoft’s former StreamInsight service, a powerful platform that allowed developers to develop and deploy complex event processing applications. Both systems are based on a query and data model that extends the relational model with a time component. By open sourcing Trill, we want to offer the power of the IStreamable abstraction to all customers in the same way that IEnumerable and IObservable are available. Trill powers internal applications and external services, reaching thousands of developers. A number of powerful streaming services are already powered by Trill, such as Bing Ads, Azure Stream Analytics, and Halo.

Conversational AI updates – December 2018

Bot Framework SDK version 4.2 is now available. The team used this opportunity to provide additional updates on Conversational-AI releases from Microsoft. In the SDK 4.2 release, the team focused on enhancing monitoring, telemetry, and analytics capabilities of the SDK by improving the integration with Azure App Insights. As with any release, we fixed a number of bugs, continued to improve Language Understanding (LUIS) and QnA integration, and enhanced our engineering practices. There were additional updates across the other areas like language, prompt and dialogs, and connectors and adapters.

Azure PowerShell ‘Az’ Module version 1.0

Az is a new Azure PowerShell module that is built to harness the power of PowerShell Core and Cloud Shell and maintain compatibility with Windows PowerShell 5.1. Az ensures that Windows PowerShell and PowerShell Core users can get the latest Azure tooling in every PowerShell on every platform. Az also simplifies and normalizes Azure PowerShell cmdlet and module names. Az is open source and ships in Azure Cloud Shell and is available from the PowerShell Gallery. The Az module version 1.0 was released on December 18, 2018, and will be updated on a two-week cadence in 2019, starting with a January 15, 2019 release.

Participate in the 16th Developer Economics Survey

The Developer Economics Q4 2018 survey is an independent survey from SlashData, an analyst firm in the developer economy that tracks global software developer trends. Every year more than 40,000 developers around the world participate in this survey, so this is a chance to be part of something big, voice your thoughts, and make your contribution to the developer community. The Developer Economics Q4 2018 survey is for all developers (professionals, hobbyists, and students) engaging in the following software development areas: web, mobile, desktop, backend services, IoT, AR/VR, machine learning and data science, and gaming.

The biggest IoT stories of 2018

As 2018 draws to a close, the IoT Team took a look back at the topics that drove the most interest and excitement here on the Azure blog—offering a window into what’s coming for this technology in the near future, covering everything from smart spaces, to the intelligent edge, to open standards and interoperability. We’re seeing new ecosystems and solutions emerge that unify data and insights from multiple places to enable new possibilities. As smart cities, vehicles, buildings, spaces, energy, and more converge, the opportunities grow—and so do needs for end-to-end manageability and security. We are committed (in April, we announced our intention to invest $5 billion in IoT over the next five years) to solving these challenges with built-in connectivity, real-time performance, and security innovation at the intelligent edge.

The year in review: Hybrid applications for developers

Ricardo Mendes, Principal Program Manager for Azure Stack, takes a look at the technology landscape supporting hybrid scenarios and does a retrospective of the myriad announcements throughout 2018 that enabled developers to focus more on building apps and worry less about infrastructure. This year has been amazing for developers who design, develop, and maintain cloud-based apps. Azure Stack has improved support for DevOps practices. You can use Kubernetes containers. You can use API Profiles with Azure Resource Manager and the code of your choice.

Additional news and updates

Azure Log Analytics is available in West US 2
Retirement of Media Hyperlapse (in preview) on March 29, 2019
Azure Scheduler will retire on September 30, 2019

Technical content

Fine-tune natural language processing models using Azure Machine Learning service

Learn how you can fine-tune Bidirectional Encoder Representations from Transformers (BERT) easily using the Azure Machine Learning service, as well as topics such as using distributed settings and tuning hyperparameters for the corresponding dataset. In this post, you’ll see some preliminary results to demonstrate how to use Azure Machine Learning service to fine-tune NLP models. After BERT is trained on a large corpus (for example, English Wikipedia), the assumption is that because the dataset is huge, the model can inherit a lot of knowledge about the English language. In addition to tuning different hyperparameters for various use cases, Azure Machine Learning service can be used to manage the entire lifecycle of these kinds of experiments. Azure Machine Learning service provides an end-to-end cloud-based machine learning environment, so customers can develop, train, test, deploy, manage, and track machine learning models. All the code is available in the GitHub repository.

Anatomy of a secured MCU

Azure Sphere is an end-to-end solution containing three complementary components that provide a secured IoT platform. They include an Azure Sphere microcontroller unit (MCU), an operating system optimized for IoT scenarios that is managed by Microsoft, and a suite of secured, scalable online services. Broadly, any MCU-based device belongs in one of two categories – devices that may connect to the Internet and devices designed to never connect to the Internet. Connecting an MCU-based device to the Internet is a watershed moment because any MCU can become a potential general-purpose digital weapon in the hands of an attacker. Learn how Azure Sphere-certified MCUs go beyond a typical hardware root of trust used in an MCU. This post discusses what puts the “secured” in a secured Azure Sphere MCU. Specifically, the Pluton Security Subsystem design details, as well as some other general silicon security improvements.

How to migrate from AzureRM to Az in Azure PowerShell

As noted above, the Azure PowerShell team released Az, a new cross-platform PowerShell module that will replace AzureRM. You can install this module by running Install-Module Az in an elevated PowerShell prompt. With the introduction of PowerShell Core, PowerShell is a cross-platform product. Therefore, it became a priority for Azure PowerShell to have cross-platform support. Because of the changes required to support running Azure PowerShell cross-platform, we created a new module rather than modifying the existing AzureRM module. Moving forward, all new functionality will be added to the Az module, while AzureRM will only be updated with bug fixes. In this post, you’ll learn how to migrate from AzureRM to Az in Azure PowerShell.

Top 3 free resources developers need for learning Azure

I wrote this post, which covers three free resources every developer needs for learning Azure. Dan Fernandez leads the team responsible for bringing our technical documentation and learning resources into a more modern experience that supports new capabilities that were impossible to deliver via MSDN. Recently, I invited Dan to record a few episodes of Azure Friday with Donovan Brown and spend some time showing off the work his team is doing to provide the best doc and learning experience.

Best practices for queries used in log alerts rules

There are several "Dos and Don'ts" you can follow to make you query run faster. Yossi Yossifon, Senior Program Manager on Microsoft Azure, provides some best practices for Log alerts rules queries in Log Analytics and Application Insights. Check his post for a few tips and a link to the query best practices in the Azure documentation.

Connect Azure Data Explorer to Power BI for visual depiction of data

Azure Data Explorer (ADX) is a lightning-fast indexing and querying service that helps you build near real-time and complex analytics solutions for vast amounts of data. ADX can connect to Power BI, a business analytics solution that lets you visualize your data and share the results across your organization. The various methods of connection to Power BI enable interactive analysis of organizational data such as tracking and presentation of trends. Learn the various ways to query data from Azure Data Explorer in Power BI. Additional connectors and plugins to analytics tools and services will be added in the weeks to come.

Azure shows

Episode 258 – Live from KubeCon 2018 | The Azure Podcast

We are live at KubeCon + CloudNativeCon in Seattle, where Microsoft, together with the who's who of the tech world, is talking about Kubernetes. We are very fortunate to get Lachie Evenson, Principal PM on the Azure team; Tommy Falgout, a Cloud Solution Architect; and Daniel Selman, a Kubernetes Consultant, together in a room to discuss the current state of Kubernetes and AKS.


Pix2Story – Neural AI Storyteller | AI Show

Storytelling is at the heart of human nature, and Natural Language Processing is a field that is driving a revolution in computer-human interaction. That is why we decided to explore Pix2Story to see if we could teach an AI to be creative, be inspired by a picture, and take it to another level.

Building a Pet Detector in 30 minutes or less! | AI Show

Connect devices from other IoT clouds to Azure IoT Central | Internet of Things Show

Learn how to connect other IoT clouds like Sigfox, Particle, and The Things Network to IoT Central with the IoT Central device bridge open-source solution. We'll talk about what the device bridge is and how it works, and demo a device connected to The Things Network using the device bridge to connect to your IoT Central app.

Running your First Docker Container in Azure | The DevOps Lab

Damian catches up with fellow Cloud Advocate Jay Gordon at Microsoft Ignite | The Tour in Berlin. Containers are still new for a lot of people and with the huge list of buzzwords, it's hard to know where to get started. Jay shows how easy it is to get started running your first container in Azure, right from scratch.

Introduction to Multi-Signature Wallets | Block Talk

This video provides an overview of multi-signature wallets (smart contract) along with a walkthrough of simple multi-signature wallet written in Solidity language. The topics covered in this video include adding owners to the wallet and the workflow that takes place in order to capture multiple signatures from owners before the transfer of value can be completed.

How to run an app inside a container image with Docker | Azure Tips and Tricks

Learn how to create a container based on an image, and then create a running app inside of it. Once you get set up with Docker on your local dev machine by installing the Docker desktop application for your operating system, you can easily run an app.

Chris Patterson on the Future of Azure Pipelines – Episode 015 | The Azure DevOps Podcast

Jeffrey Palermo and Chris Patterson, Principal Program Manager at Microsoft, discuss how the infrastructure of Azure Pipelines is changing, what a build will mean in the future, the goal of Azure Pipelines evolution, and more.


Customers, industries, and partners

A fintech startup pivots to Azure Cosmos DB

Fintech start-up and Microsoft partner clearTREND Research had a plan to commercialize a financial trend engine and provide a subscription investment service to individuals and professionals. Learn why they chose Azure Cosmos DB as the solution best able to adapt, evolve, and enable their business to innovate faster in order to turn opportunities into strategic advantages. You’ll also get some tips from the clearTREND team to consider when designing and implementing a solution with Azure Cosmos DB. The team that designed and implemented the clearTREND solution consists of architects and developers from Skyline Technologies.

IoT in Action: New insights for retail

For in-depth insights around the latest developments in IoT for retail, including how customer expectations are changing and how IoT investments can impact store profitability, you can register for a live IoT in Action event in New York (co-located with NRF 2019) on January 14, 2019, or sign up for our industry-specific retail webinar on January 8, 2019. You will get insights into how IoT can help you delight customers, improve the effectiveness of your associates, and increase the efficiency of your operations. You can also take a deep dive into building retail IoT solutions at our upcoming 2-day Virtual Bootcamp in late January and early February.

Azure Marketplace new offers – Volume 28

The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. You can also connect with Gold and Silver Microsoft Cloud Competency partners to help your adoption of Azure. In the second half of November we published 80 new offers.

A Cloud Guru's Azure This Week – 21 December 2018 (Christmas special!)

In this Christmas special edition of Azure This Week, Lars talks about static websites on Azure Storage now being generally available, the preview of neural network text-to-speech with Jessa and Guy, and some more serverless Azure news with Azure Functions.
