Unlock new features in the MT3620 MCU with the Azure Sphere 19.05 release

Each quarter, the Azure Sphere team works to open new scenarios to customers through new features on-chip and in the cloud.  The Azure Sphere 19.05 release continues this theme by unlocking the real-time capable cores that reside on the MT3620. Co-locating these cores within the same SOC enables new, real-time scenarios on the M4 cores while continuing to support connectivity scenarios on the high-level core. This release also introduces support for DHCP-based Ethernet connections to the cloud.

We are also pleased to announce that the Azure Sphere hardware ecosystem continues to expand with new modules available for mass production and new, less expensive development boards. Finally, new Azure Sphere reference solutions are available to accelerate your solution’s time to market.

To build applications that take advantage of this new functionality, please download and install the latest Azure Sphere SDK Preview for Visual Studio. All Wi-Fi connected devices will automatically receive an updated Azure Sphere operating system that contains support for these new features.

Enabling new MT3620-based features

Real-time core preview—The OS and SDK support development, deployment, and debugging of real-time capable apps that use SPI, I2C, GPIO, UART, and ADC on the MT3620’s two M4 cores. GitHub sample apps demonstrate GPIO, UART, and communication between the real-time cores and the high-level core.
ADC sample—This real-time core sample app demonstrates how to use the MT3620’s analog-to-digital converters to sample voltages. See the ADC GitHub sample for more details.

Tools and libraries

Improved CMake support—Visual Studio now supports one-touch deploy and debug for applications that use CMake.
Application runtime version—Application properties specify the required application runtime version (ARV), and azsphere commands detect conflicts. See the online documentation for details.
Random number generation (RNG)—The POSIX base API supports random number generation from Pluton's RNG.
Easy hardware targeting—Hardware-specific JSON and header files are provided in the GitHub sample apps repository. You can now easily target a particular hardware product by changing an application property.

New connectivity options

Ethernet internet interface—This release supports an Ethernet connection as an alternative to a Wi-Fi connection for communicating with the Azure Sphere Security Service and your own services. Our GitHub samples now demonstrate how to wire the supported Microchip part, bring up the Ethernet interface, and use it to connect to Azure IoT or your own web services.
Local device discovery—The Azure Sphere OS offers new network firewall and multicast capabilities that enable apps to run mDNS and DNS-SD for device discovery on local networks. Look for more documentation in the coming weeks on this feature.

Support for additional hardware platforms

Several hardware ecosystem partners have recently announced new Azure Sphere-enabled products:

SEEED MT3620 Mini Development Board—This less-expensive development board with single-band Wi-Fi is designed for size-constrained prototypes. It uses the AI-Link module for a quick path from prototype to commercialization.
AI-Link WF-M620-RSA1 Wi-Fi Module—This single-band Wi-Fi module is designed for cost-sensitive applications.
USI Azure Sphere Combo Module—This module supports both dual-band Wi-Fi and Bluetooth. The on-board Bluetooth chipset supports BLE and Bluetooth 5 Mesh. The chipset can also work as an NFC tag to support non-contact Bluetooth pairing and device provisioning scenarios.
Avnet Guardian module—This module enables the secure connection of existing equipment to the internet. It attaches to the equipment through Ethernet and connects to the cloud via dual-band Wi-Fi.
Avnet MT3620 Starter Kit—This development board with dual-band Wi-Fi connectivity features modular connectors that support a range of MikroE Click and Grove modules.
Avnet Wi-Fi Module—This dual-band Wi-Fi module with stamp hole (castellated) pin design allows for easy assembly and simpler quality assurance.

There has never been a better time to begin developing on Azure Sphere. With highly customizable offerings available, you can choose the development kit or module that best fits your needs, or those of your customers.

Get started using the Azure Sphere SDK Preview for Visual Studio.
Need help? Connect with experts through the Azure Sphere forum or on Stack Overflow.
Share product feedback and requests.
Stay current with the latest Azure Updates.

Email us at nextinfo@microsoft.com to kick off an Azure Sphere engagement with your Microsoft representative.
Source: Azure

Azure Cost Management updates – May 2019

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Cost Management can help you better understand how and where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Expanded general availability (GA): Pay-as-you-go and Azure Government
New preview: Manage AWS and Azure costs together in the Azure portal
New getting started videos
Monitor costs based on your pay-as-you-go billing period
More comprehensive scheduled exports
Extended date picker
Share link to customized views
Documentation updates

Let's dig into the details…

 

Expanded general availability (GA): Pay-as-you-go and Azure Government

Azure Cost Management is now generally available for the following account types:

Public cloud

Enterprise Agreements (EA)
Microsoft Customer Agreements (MCA)
Pay-as-you-go (PAYG) and dev/test subscriptions

Azure Government

Enterprise Agreements

Stay tuned for more information about preview support for additional account types and clouds, like Cloud Solution Providers (CSP) and Sponsorship subscriptions. We know how critical it is for you to have a rich set of cost management tools for every account across every cloud, and we hear you loud and clear.

 

New preview: Manage AWS and Azure costs together in the Azure portal

Many organizations are adopting multi-cloud strategies for additional flexibility, but with increased flexibility comes increased complexity. From different cost models and billing cycles to underlying cloud architectures, having a single cross-cloud cost management solution is no longer a luxury, but a fundamental requirement to efficiently and effectively monitor, control, and optimize costs. This is where Azure Cost Management can help.

Start by creating a new AWS cloud connector from the Azure portal. From the home page of the Azure portal select the Cost Management tile. Then, select Cloud connectors (preview) and click the "Add" command. Simply specify a name, pick the management group you want AWS costs to be rolled up to, and configure the AWS connection details.

Cost Management will start ingesting AWS costs as soon as the AWS cost and usage report is available. If you created a new cost and usage report, AWS may take up to 24 hours to start exporting data. You can check the latest status from the cloud connectors list.

Once available, open cost analysis and change the scope to the management group you selected when creating the connector. Group by provider to see a breakdown of AWS and Azure costs. If you connected multiple AWS accounts or have multiple Azure billing accounts, group by billing account to see a breakdown by account.

In addition to seeing AWS and Azure costs together, you can also change the scope to your AWS consolidated or linked accounts to drill into AWS costs specifically. Create budgets for your AWS scopes to get notified as costs hit important thresholds.

Managing AWS costs is free during the preview. If you would like to automatically upgrade when AWS support becomes generally available, navigate to the connector, select the Automatically charge the 1 percent at general availability option, then select the desired subscription to charge.

For more information about managing AWS costs, see the documentation "Manage AWS costs and usage in Azure."

 

New getting started videos

Learning a new service can take time. Reading through documentation is great, but you've told us that sometimes you just want a quick video to get you started. Well, here are eight:

Azure Cost Management overview (4m)
Azure Cost Management and Cloudyn (4m)
How to manage and control your cloud costs (4m)
How to analyze spending in Power BI (3m)
How to create a budget to monitor your spending (5m)
How to schedule exports to storage (2m)
How to assign access (5m)
How to review tag policies (4m)

If you're looking for something a little more in-depth, try these:

Azure Cost Management technical overview (34m)
How to transition from Cloudyn to Azure Cost Management (31m)

 

Monitor costs based on your pay-as-you-go billing period

As you know, your pay-as-you-go and dev/test subscriptions are billed based on the day you signed up for Azure. They don’t map to calendar months the way EA and MCA billing accounts do. This has made reporting on and controlling costs for each bill a little harder, but now you have the tools you need to effectively manage costs based on your specific billing cycle.

When you open cost analysis for a PAYG subscription, it defaults to the current billing period. From there, you can switch to a previous billing period or select multiple billing periods. More on the extended date picker options later.

If you want to get notified before your bill hits a specific amount, create a budget for the billing month. You can also specify if you want to track a quarterly or yearly budget by billing period.

Sometimes you need to export data and integrate it with your own datasets. Cost Management offers the ability to automatically push data to a storage account on a daily, weekly, or monthly basis. Now you can export your data as it is aligned to the billing period, instead of the calendar month.

We love hearing your suggestions, so let us know if there's anything else that would help you better manage costs during your personalized billing period.

 

More comprehensive scheduled exports

Scheduled exports enable you to react to new data being pushed to you instead of periodically polling for updates. As an example, a daily export of month-to-date data will push a new CSV file every day from January 1-31. These daily month-to-date exports have been updated to continue to push data on the configured schedule until they include the full dataset for the period. For example, the same daily month-to-date export would continue to push new January data on February first and February second to account for any data which may have been delayed. The update guarantees you will receive a full export for every period, starting April 2019.

For more information about how cost data is processed, see the documentation "Understand Cost Management data."

 

Extended date picker in cost analysis

You've told us that analyzing cost trends and investigating spending anomalies sometimes requires a broad set of date ranges. You may want to look at the current billing period to keep an eye on your next bill or maybe you need to look at the last 30 days in a monthly status meeting. Some teams are even looking at the last 7 days on a weekly or even daily basis to identify spending anomalies and react as quickly as possible. Not to mention the need for longer-term trend analysis and fiscal planning.

Based on all the great feedback you've shared around needing a rich set of one-click date options, cost analysis now offers an extended date picker with more options to make it easier than ever for you to get the data you need quickly.

We also noticed trends in how you navigate between periods. To simplify this, you can now quickly navigate backward and forward in time using the < PREVIOUS and NEXT > links at the top of the date picker. Try it yourself and let us know what you think.

 

Share links to customized views

We've heard you loud and clear about how important it is to save and share customized views in cost analysis. You already know you can pin a customized view to the Azure portal dashboard, and you already know you can share dashboards with others. Now you can share a direct link to that same customized view. If somebody who doesn't have access to the scope opens the link, they'll get an access-denied message, but they can change the scope to keep the customizations and apply them to their own scope.

You can also customize the scope to share a targeted URL. Here's the format of the URL:

https://portal.azure.com#[@{domain}]/blade/Microsoft_Azure_CostManagement/Menu/open/CostAnalysis[/scope/{url-encoded-scope}]/view/{view-config}

The domain is optional. If you remove that, the user's preferred domain will be used.

The scope is also optional. If you remove that, the user's default scope will be the first billing account, management group, or subscription found. If you specify a custom scope, remember to URL-encode (e.g. "/" → "%2F") the scope, otherwise cost analysis will not load correctly.
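Any standard URL-encoding routine can encode the scope; as a sketch in Python (the subscription GUID here is a placeholder):

```python
from urllib.parse import quote

# A hypothetical subscription scope; passing safe="" forces "/" to be
# encoded as "%2F" as well, which is what cost analysis expects.
scope = "/subscriptions/00000000-0000-0000-0000-000000000000"
encoded = quote(scope, safe="")
print(encoded)  # %2Fsubscriptions%2F00000000-0000-0000-0000-000000000000
```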

The view configuration is a gzipped, URL-encoded JSON object. As an example, here's how you can decode a customized view:

Copy URL from the portal:

https://portal.azure.com#@domain.onmicrosoft.com/blade/Microsoft_Azure_CostManagement/Menu/open/CostAnalysis/scope/%2Fsubscriptions%2F00000000-0000-0000-0000-000000000000/view/H4sIAAAAAAAA%2F41QS0sDMRD%2BL3Peha4oam%2FSgnhQilYvpYchOxuDu8k6mVRL2f%2FupC8FofSYL99zNrAiji54GMPFqLotR5flqCp7ppWjLyjgMxGv305zOhJ2Rn%2FvjCRsJyHKQQnjDci6J%2F18jWhJcXEdNYxdxiYpSuj24IzYhTorGlbwN6y6yYxwRK7K6hqGAmoUjCRZYRl9apGdaCRM0bVr1aC1TBZl212LBNm304ffNZgxzfF7X7lJ3uzI8JI6GDTDcki98xbGi%2BOSWsv67UGKg80zxZDY0H2mP2VsWKqfa4U4p6HXYYvlkC3NO7LkCEHzQfUktKnLVmhM6nSEkHKhwTbmc%2FVuFST%2BjZ%2F%2Bj3%2BknDMUpziHivPMOI%2F6UOuM4QcE8nHtJAIAAA%3D%3D

Trim down to the view configuration after "/view/":

H4sIAAAAAAAA%2F41QS0sDMRD%2BL3Peha4oam%2FSgnhQilYvpYchOxuDu8k6mVRL2f%2FupC8FofSYL99zNrAiji54GMPFqLotR5flqCp7ppWjLyjgMxGv305zOhJ2Rn%2FvjCRsJyHKQQnjDci6J%2F18jWhJcXEdNYxdxiYpSuj24IzYhTorGlbwN6y6yYxwRK7K6hqGAmoUjCRZYRl9apGdaCRM0bVr1aC1TBZl212LBNm304ffNZgxzfF7X7lJ3uzI8JI6GDTDcki98xbGi%2BOSWsv67UGKg80zxZDY0H2mP2VsWKqfa4U4p6HXYYvlkC3NO7LkCEHzQfUktKnLVmhM6nSEkHKhwTbmc%2FVuFST%2BjZ%2F%2Bj3%2BknDMUpziHivPMOI%2F6UOuM4QcE8nHtJAIAAA%3D%3D

URL decode the view configuration:

H4sIAAAAAAAA/41QS0sDMRD+L3Peha4oam/SgnhQilYvpYchOxuDu8k6mVRL2f/upC8FofSYL99zNrAiji54GMPFqLotR5flqCp7ppWjLyjgMxGv305zOhJ2Rn/vjCRsJyHKQQnjDci6J/18jWhJcXEdNYxdxiYpSuj24IzYhTorGlbwN6y6yYxwRK7K6hqGAmoUjCRZYRl9apGdaCRM0bVr1aC1TBZl212LBNm304ffNZgxzfF7X7lJ3uzI8JI6GDTDcki98xbGi+OSWsv67UGKg80zxZDY0H2mP2VsWKqfa4U4p6HXYYvlkC3NO7LkCEHzQfUktKnLVmhM6nSEkHKhwTbmc/VuFST+jZ/+j3+knDMUpziHivPMOI/6UOuM4QcE8nHtJAIAAA==

Gzip decompress the decoded string to get the customized view (note that the URL-decoded string is base64-encoded, so some tools may also require a base64 decode before decompressing):

{
  "version": "2019-04-01-preview",
  "queryVersion": "2019-04-01-preview",
  "metric": "ActualCost",
  "query": {
    "type": "Usage",
    "timeframe": "Custom",
    "timePeriod": {"from": "2019-04-18", "to": "2019-05-17"},
    "dataset": {
      "granularity": "Daily",
      "aggregation": {"totalCost": {"name": "PreTaxCost", "function": "Sum"}},
      "grouping": [{"type": "dimension", "name": "ResourceGroupName"}],
      "filter": {"and": []}
    }
  },
  "chart": "StackedColumn",
  "accumulated": false,
  "pivots": [
    {"type": "Dimension", "name": "Meter"},
    {"type": "Dimension", "name": "ResourceType"},
    {"type": "Dimension", "name": "ResourceId"}
  ]
}
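The decoding steps above can be sketched in Python. The URL-decoded string is base64-encoded gzip, so a base64 decode sits between the URL decode and the decompression; `decode_view_config` and `encode_view_config` are illustrative helper names, not part of any Azure SDK:

```python
import base64
import gzip
import json
from urllib.parse import quote, unquote

def decode_view_config(url: str) -> dict:
    """Recover the customized-view JSON from a cost analysis share link."""
    encoded = url.split("/view/", 1)[1]       # 1. trim down to the part after "/view/"
    b64 = unquote(encoded)                    # 2. URL-decode (%2F -> /, %2B -> +, ...)
    blob = base64.b64decode(b64)              # 3. base64-decode to get the gzip payload
    return json.loads(gzip.decompress(blob))  # 4. gzip-decompress and parse the JSON

def encode_view_config(view: dict) -> str:
    """Inverse operation: build the /view/{view-config} segment for a share link."""
    blob = gzip.compress(json.dumps(view).encode("utf-8"))
    return quote(base64.b64encode(blob).decode("ascii"), safe="")
```

Round-tripping a small view through `encode_view_config` and back through `decode_view_config` is an easy way to confirm the pipeline before pointing it at a real portal URL.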

Understanding how the view configuration works means you can:

Link to cost analysis from your own apps
Build out and automate the creation of custom dashboards via ARM deployment templates
Copy the query property and use it to get the same data used to render the main chart (or table, if using the table view)

You'll hear more about the view configuration soon, so keep an eye out.

 

Documentation updates

Lots of documentation updates! Here are a few you might be interested in:

Numerous updates to "Understanding Cost Management data"
Added pay-as-you-go billing period support to budgets and exports tutorials
Added note about supported scopes for exports
Added view picker and updated date picker in Cost Analysis tutorial
Added new videos to overview, Cost Analysis, budgets, exports, assigning access, and Cloudyn
And, in case you missed it, also check out the documentation "Understand and work with scopes"

Want to keep an eye on all documentation updates? Check out the Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select "Edit" at the top of the doc and submit a quick pull request.

What's next?

These are just a few of the big updates from the last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter for updates, tips, and tricks throughout the week!
Source: Azure

Integrating Azure CNI and Calico: A technical deep dive

This post was co-authored by Andy Randall, VP of Business Development, Kinvolk GmbH

We are pleased to share the availability of Calico network policies in Azure Kubernetes Service (AKS). Calico policies let you define filtering rules to control the flow of traffic to and from Kubernetes pods. In this blog post, we will explore in more technical detail the engineering work that went into enabling Azure Kubernetes Service to work with a combination of Azure CNI for networking and Calico for network policy.

First, some background. Simplifying somewhat, there are three parts to container networking:

Allocating an IP address to each container as it’s created, this is IP address management or IPAM.

Routing the packets between container endpoints, which in turn splits into:

Routing from host to host (inter-node routing).

Routing within the host between the external network interface and the container, as well as routing between containers on the same host (intra-node routing).

Ensuring that packets that should not be allowed are blocked (network policy).

Typically, a single network plug-in technology addresses all these aspects. However, the open API used by Kubernetes, the Container Network Interface (CNI), actually allows you to combine different implementations.

The choice of configurations brings you opportunities, but also calls for a plan to make sure that the mechanisms you choose are compatible and enable you to achieve your networking goals. Let’s look a bit more closely into those details.

Networking: Azure CNI

Cloud networks, like Azure, were originally built for virtual machines with typically just one or a small number of relatively static IP addresses. Containers change all that, and introduce a host of new challenges for the cloud networking layer, as dozens or even hundreds of workloads are rapidly created and destroyed on a regular basis, each of which is its own IP endpoint on the underlying network.

The first approach at enabling container networking in the cloud leveraged overlays, like VXLAN, to ensure only the host IP was exposed to the underlying network. Overlay network solutions like flannel, or AKS’s kubenet (basic) networking mode, do a great job of hiding the underlying network from the containers. Unfortunately, that is also the downside: the containers are not actually running in the underlying VNET, meaning they cannot be addressed like regular endpoints and can only communicate outside of the cluster via network address translation (NAT).

With Azure CNI, which is enabled with advanced mode networking in AKS, we added the ability for each container to get its own real IP address within the same VNET as the host. When a container is created, the Azure CNI IPAM component assigns it an IP address from the VNET, and ensures that the address is configured on the underlying network through the magic of the Azure software-defined network layer, taking care of the inter-node routing piece.

So with IPAM and inter-node routing taken care of, we now need to consider intra-node routing. How do we do intra-node routing, i.e. get a packet between two containers, or between the host’s network interface (typically eth0) and the virtual ethernet (veth) interface of the container?

It turns out the Linux kernel is rich in networking capabilities, and there are many different ways to achieve this goal. One of the simplest and easiest is with a virtual bridge device. With this approach, all the containers are connected on a local layer two segment, just like physical machines that are connected via an ethernet switch.

Packets from the ‘real’ network are switched through the bridge to the appropriate container via standard layer two techniques (ARP and address learning).
Packets to the real network are passed through the bridge, to the NIC, where they are routed to the remote node.
Packets from one container to another also flow through the bridge, just like two PCs connected on an ethernet switch.

This approach, which is illustrated in Figure 1, has the advantage of being high performance and requiring little control plane logic to maintain, helping to ensure robustness.

Figure 1: Azure CNI networking

Network policy with Azure

Kubernetes has a rich policy model for defining which containers are allowed to talk to which other ones, as defined in the Kubernetes Network Policy API. As we demonstrated recently at Ignite, we have now implemented this API and it works in conjunction with Azure CNI in AKS or in your own self-managed Kubernetes clusters in Azure, with or without AKS-Engine.
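As a concrete illustration of that API, a minimal NetworkPolicy that admits ingress traffic to one set of pods only from another might look like the following (the policy name and the frontend/backend labels are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                  # policy applies to pods labeled app=backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only pods labeled app=frontend may connect
      ports:
        - protocol: TCP
          port: 8080                # and only on TCP port 8080
```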

We translate the Kubernetes network policy model to a set of allowed IP address pairs, which are then programmed as rules in the Linux kernel iptables module. These rules are applied to all packets going through the bridge. This is shown in Figure 2.

Figure 2: Azure CNI with Azure Policy Manager

Network policy with Calico

Kubernetes is also an open ecosystem, and Tigera’s Calico is well known as the first, and most widely deployed, implementation of Network Policy across cloud and on-premise environments. In addition to the base Kubernetes API, it also has a powerful extended policy model which supports a range of features such as global network policies, network sets, more flexible rule specification, the ability to run the policy enforcement agent on non-Kubernetes nodes, and application layer policy via integration with Istio. Furthermore, Tigera offers a commercial offering built on Calico, Tigera Secure, that adds a host of enterprise management, controls, and compliance features.

Given Kubernetes’ aforementioned modular networking model, you might think you could just deploy Calico for network policy along with Azure CNI, and it should all just work. Unfortunately, it is not this simple.

 

While Calico uses iptables for policy, it does so in a subtly different way. It expects containers to be established with separate kernel routes, and it enforces the policies that apply to each container on that specific container’s virtual ethernet interface. This has the advantage that all container-to-container communications are identical (always a layer 3 routed hop, whether internal to the host or across the underlying network), and security policies are more narrowly applied to the specific container’s context.

To make Azure CNI compatible with the way Calico works, we added a new intra-node routing capability to the CNI, which we call ‘transparent’ mode. When configured to run in this mode, Azure CNI sets up local routes for containers instead of creating a virtual bridge device. This is shown in Figure 3.

Figure 3: Azure CNI with Calico Network Policy

Onward and upstream

A Kubernetes cluster with the enhanced Azure CNI and Calico policies can be created using AKS-Engine by specifying the following configuration in the cluster definition file.

"properties": {
  "orchestratorProfile": {
    "orchestratorType": "Kubernetes",
    "kubernetesConfig": {
      "networkPolicy": "calico",
      "networkPlugin": "azure"
    }
  }
}

These options have also been integrated into AKS itself, enabling you to provision a cluster with Azure networking and Calico network policy by simply specifying the options --network-plugin azure --network-policy calico at cluster create time.

Find more information by visiting our documentation, “Azure Kubernetes network policies overview.”
Source: Azure

Azure Marketplace new offers – Volume 37

We continue to expand the Azure Marketplace ecosystem. For this volume, 163 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Applications

Accela Civic Platform and Civic Applications: Accela's fast-to-implement civic applications and robust and extensible solutions platform help agencies respond to the rapid modernization of technology with SaaS solutions that offer high degrees of security, flexibility, and usability.

Actifile Guardrail-Secure Data on 0 Trust Devices: Actifile's Guardrail unique low-footprint technology enables secure usage of corporate data taken from any application or data source.

Adrenalin HCM: Human resource function is the quintessential force that enables an organization’s strongest asset to perform better and benefit themselves and the company. Reimagine your HR function with Adrenalin HCM.

Advanced Threat Protection for OneDrive: BitDam helps enterprises take full advantage of all OneDrive has to offer while delivering advanced threat protection against content-borne attacks.

AGR – Advanced Demand Planning: This modular AGR solution allows you to make more consistent planning decisions and more accurate buying decisions and helps ensure you have the right product in the right place at the right time.

agroNET – Digital Farming Management Platform: agroNET is a turnkey digital farming solution that enables smart agriculture service providers and system integrators to rapidly deploy the service tailored to the needs of farmers.

AIMSCO Azure MES/QM Platform for SME Manufacturers: With embedded navigation dashboards, displays, alerts, APIs, and BI interfaces, AIMSCO Azure MES/QM Platform users from the shop floor to the boardroom have real-time access to critical decision-making tools.

AIRA Robotics as a Service: Transform the installation of new equipment from CAPEX to OPEX as a part of a digital transformation using the AIRA digitalization system for long-term service relationships with suppliers.

Apex Portal: Use Apex Portal for supplier registration, self-service inquiry of invoice and payment status, dynamic discounting and early payments, and automated statement audits.

AppStudio: AppStudio is a suite of offerings for managing apps using a standardized methodology to ensure you are up to date and ready for the next challenge.

ArcBlock ABT Blockchain Node: ABT Blockchain Node is fully decentralized and uses ArcBlock's blockchain development platform to easily build, run, and use DApps and blockchain-ready services.

ArcGIS Enterprise 10.7: Manage, map, analyze, and share geographic information systems (GIS) data with ArcGIS Enterprise, the complete geospatial system that powers your data-driven decisions.

Area 1 Horizon Anti-Phishing Service for Office 365: Area 1 Security closes the phishing gap with a preemptive, comprehensive, and accountable anti-phishing service that seamlessly integrates with and fortifies Microsoft Office 365 security defenses.

Arquivar-GED: ArqGED is document management software that allows users to dynamically solve problems with location and traceability of information in any format (paper, digital, microfilm, etc.).

Aruba Virtual Gateway (SD-WAN): Aruba's software-defined WAN (SD-WAN) technology simplifies wide area network operations and improves application QoS to lower your total cost of ownership.

Arundo Analytics: Arundo delivers enterprise-scale machine learning and advanced analytics applications to improve operations in heavy asset industries.

Assurity Suite: The Assurity Suite platform provides assurance and control over your organization's documents, communications, investigations, compliance, information, and processes.

Atilekt.NET: Website-building platform Atilekt.NET is a friendly, flexible, and fast-growing content management system based on ASP.NET.

Axians myOperations Patch Management: Axians myOperations Server Patch Management integrates a complete management solution to simplify the rollout, monitoring, and reporting of Windows updates.

Axioma Risk: Axioma Risk is an enterprise-wide risk-management system that enables clients to obtain timely, consistent, and comparable views of risk across an entire organization and all asset classes.

Azure Analytics System Solution: BrainPad's Azure Analytics System Solution is designed for enterprises using clouds for the first time as well as companies considering sophisticated usage. This application is available only in Japanese.

Beam Communications: Communications are a fundamental element in institutional development, and Beam Communications boosts internal and external communications. This application is available only in Spanish.

Betty Blocks Platform: From mobile apps to customer portals to back-office management and everything in between, the Betty Blocks platform supports every app size and complexity.

BI-Clinical: BI-Clinical is CitiusTech’s ONC- and NCQA-certified BI and analytics platform designed to address the healthcare organization’s most critical quality reporting and decision support needs.

Bizagi Digital Business Platform: The Bizagi platform helps enterprises embrace change by improving operational efficiencies, time to market, and compliance.

Bluefish Editor on Windows Server 2019: The Bluefish software editor supports a plethora of programming languages including HTML, XHTML, CSS, XML, PHP, C, C++, JavaScript, Java, Google Go, Vala, Ada, D, SQL, Perl, ColdFusion, JSP, Python, Ruby, and Shell.

BotCore – Enterprise Chatbot Builder: BotCore is an accelerator that enables organizations to build customized conversational bots powered by artificial intelligence. It is fully deployable to Microsoft Azure and leverages many of the features available in it.

Brackets: With focused visual tools and preprocessor support, Brackets is a modern text editor that makes it easy to design in the browser. It's crafted for web designers and front-end developers.

Brackets on Windows Server 2019: With focused visual tools and preprocessor support, Brackets is a modern text editor that makes it easy to design in the browser. It's crafted for web designers and front-end developers.

bugong: The bugong platform combines leading algorithm technology with intelligent manufacturing management. This application is available only in Chinese.

Busit Application Enablement Platform: Busit Application Enablement Platform (AEP) enables fast and efficient handling of all your devices and services, regardless of the brand, manufacturer, or communication protocol.

ByCAV: ByCAV provides biometric identity validation through non-traditional channels for companies in diverse industries that require identity verification. This application is available in Spanish only in Colombia.

Camel Straw: Camel Straw is a cloud-based load testing platform that helps teams load test and analyze and improve the way their applications scale.

Celo: Celo connects healthcare professionals. From big hospitals to small clinics, Celo helps healthcare professionals communicate better.

Cirkled In – College Recruitment Platform: Cirkled In is a revolutionary, award-winning recruitment platform that helps colleges match with best-fit high school students based on students’ holistic portfolio.

Cirkled In – Student Profile & Portfolio Platform: Cirkled In is a secure, award-winning electronic portfolio platform for students designed to compile students’ achievements in seven categories from academics to sports to volunteering and more.

Cleafy Fraud Manager for Azure: Cleafy combines deterministic malware detection with passive behavioral and transactional risk analysis to protect online services against targeted attacks from compromised endpoints without affecting your users and business.

Cloud Desktop: Cloud Desktops on Microsoft Azure offers continuity and integration with the tools and applications that you already use.

Cloud iQ – Cloud Management Portal: Crayon Cloud-iQ is a self-service platform that enables you to manage cloud products (Azure, Office 365, etc.), services, and economics across multiple vendors through a single pane portal view.

Cloudneeti – Continuous Assurance SaaS: Cloudneeti SaaS enables instant visibility into security, compliance, and data privacy posture and enforces industry standards through continuous and integrated assurance aligned with the cloud-native operating model.

Collaboro – Digital Asset Management: Collaboro partners with brands, institutions, government, and advertising agencies to solve their specific digital asset management needs in a fragmented marketing and media space.

Connected Drone: Targeting power and utilities, eSmart Systems Connected Drone software utilizes deep learning to dramatically reduce utility maintenance costs and failure rates and extend asset life.

CyberVadis: By pooling and sharing analyst-validated cybersecurity audits, CyberVadis allows you to scale up your third-party risk assessment program while controlling your costs.

Data Quality Management Platform: BaseCap Analytics’ Data Quality Management Platform helps you make better business decisions by measurably increasing the quality of your greatest asset: data.

DatabeatOMNI: DatabeatOMNI provides you with everything you need to display great content, on as many screens as you want to – without complex interfaces, specialist training, or additional procurement costs.

dataDiver: dataDiver is an extended analytics tool for gaining insights into research design that is neither traditional BI nor BA. This application is available only in Japanese.

dataFerry: dataFerry is a data preparation tool that allows you to easily process data from various sources into the desired form. This application is available only in Japanese.

Dataprius Cloud: Dataprius offers a different way to work with files in the cloud, allowing you to work with company files without synchronizing, without conflicts, and with multiple users connected at the same time.

Denodo Platform 7.0 14-day Free Trial (BYOL): Denodo integrates all of your Azure data sources and your SaaS applications to deliver a standards-based data gateway, making it quick and easy for users of all skill levels to access and use your cloud-hosted data.

Descartes MacroPoint: Descartes MacroPoint consolidates logistics tracking data from carriers into a single integrated platform to meet two growing challenges: real-time freight visibility and automated capacity matching.

Digital Asset Management (DAM) Managed Application: Digital Asset Management delivers a secured and centralized repository to manage videos. It offers capabilities for advanced embed, review, approval, publishing, and distribution of videos.

Digital Fingerprints: Digital Fingerprints is a continuous authentication system based on behavioral biometrics.

DM REVOLVE – Dynamics Data Migration: DM REVOLVE is a dedicated Azure-based Dynamics end-to-end data migration solution that incorporates "Dyn-O-Matic," our specialized Dynamics automated load adaptor.

Docker Community Edition Ubuntu Bionic Beaver: Deploy Docker Community Edition with Ubuntu on Azure with this free, community-supported, DIY version of Docker on Ubuntu.

Docker Community Edition Ubuntu Xenial: Deploy Docker Community Edition with Ubuntu on Azure with this community-supported, DIY version of Docker on Ubuntu.

Dom Rock AI for Business Platform: The Dom Rock AI for business platform empowers people to make better, faster decisions informed by data. This application is available only in Portuguese.

Done.pro: Done.pro delivers "Uber for X" cloud platforms, customized and tuned for your business, so you can provide customers with exceptional service.

eComFax: Secure Advanced Messaging Platform: Comunycarse Network Consultants eComFax is a secure, advanced messaging platform designed for compliance and mobility.

EDGE: The Edge system allows seamless operations across the UK – in both the established Scottish market and the new English market.

eJustice: The eJustice solution provides information and communication technology enablement for courts.

ekoNET – Air Quality Monitoring: ekoNET combines portable devices and cloud-based functionality to enable granular air quality monitoring indoors and outdoors.

Element AssetHub: AssetHub is a data hub connecting time series, IT, and OT to manage operational asset models.

Equinix Cloud Exchange Fabric: This software-defined interconnection solution allows you to directly, securely, and dynamically connect distributed infrastructure and digital ecosystems to your cloud service providers.

ERP Beam Education: ERP Beam Education efficiently integrates all the processes that are part of managing an educational center. This application is available only in Spanish.

Essatto Data Analytics Platform: Essatto enables more informed decision making by providing timely insights into your financial and business operations in a flexible, cost-effective application.

Event Monitor: Event Monitor is a user-friendly solution meant for security teams that are responsible for safety.

Firewall as a Service: Firewall as a Service delivers a next-generation managed internet gateway from Microsoft Azure including 24/7 support, self-service, and unlimited changes by our security engineers.

GDPR++ for Data Protection & Security: GDPR++ is an Azure-based tool that helps companies keep data protection and cyber security under control.

GEODI: GEODI helps you focus on your business by letting you share information, documents, notes, and notifications with contacts and stakeholders via mobile app or browser.

GeoServer: Make your spatial information accessible to all with this free, community-supported open source server based on Java for sharing geospatial data.

GeoServer on Windows Server 2019: Make your spatial information accessible to all with this free, community-supported open source server based on Java for sharing geospatial data.

Ghost Helm Chart: Ghost is a modern blog platform that makes publishing beautiful content to all platforms easy and fun. Built on Node.js, it comes with a simple markdown editor with preview, theming, and SEO built in.

Grafana Multi-Tier with Azure Managed DB: Grafana is an open source analytics and monitoring dashboard for over 40 data sources, including Graphite, Elasticsearch, Prometheus, MariaDB/MySQL, PostgreSQL, InfluxDB, OpenTSDB, and more.

HashiCorp Consul Helm Chart: HashiCorp Consul is a tool for discovering and configuring services in your infrastructure.

HPCBOX: HPC Cluster for STAR-CCM+: HPCBOX combines cloud infrastructure, applications, and managed services to bring supercomputer technology to your personal computer.

H-Scale: H-Scale is a modular, configurable, and scalable data integration platform that helps organizations build confidence in their data and accelerate their data strategies.

Integrated Cloud Suite: CitiusTech’s Integrated Cloud Suite is a one-stop solution that enables healthcare organizations to reduce complexity and drive a multi-cloud strategy optimally and cost-effectively.

JasperReports Helm Chart: JasperReports Server is a standalone and embeddable reporting server. It is a central information hub, with reporting and analytics that can be embedded into web and mobile applications.

Jenkins Helm Chart: Jenkins is a leading open source continuous integration and continuous delivery (CI/CD) server that enables the automation of building, testing, and shipping software projects.

Jenkins On Ubuntu Bionic Beaver: Jenkins is a simple, straightforward continuous integration tool that effortlessly distributes work across multiple devices and helps drive builds, tests, and deployments.

Jenkins-Docker CE on Ubuntu Bionic Beaver: This solution takes away the hassles of setting up the installation process of Jenkins and Docker. The ready-made image integrates Jenkins-Docker to make continuous integration jobs smooth, effective, and glitch-free.

Join2ship: Join2ship is a collaborative supply chain platform designed to digitalize your receipts and deliveries.

Kafka Helm Chart: Tested to work on the EKS platform, Apache Kafka is publish-subscribe messaging rethought as a distributed commit log.

Kaleido Enterprise Blockchain SaaS: Kaleido simplifies the process of creating and operating permissioned blockchains with a seamless experience across cloud properties and geographies for all network participants.

Kubeapps Helm Chart: Kubeapps is a web-based application deployment and management tool for Kubernetes clusters.

LOOGUE FAQ: LOOGUE FAQ is an AI virtual agent that creates chatbots that support queries by creating and uploading two columns of questions and answers in Excel. This application is available only in Japanese.

Magento Helm Chart: Magento is a powerful open source e-commerce platform. Its rich feature set includes loyalty programs, product categorization, shopper filtering, promotion rules, and much more.

MariaDB Helm Chart: MariaDB is an open source, community-developed SQL database server that is widely used around the world due to its enterprise features, flexibility, and collaboration with leading tech firms.

Metrics Server Helm Chart: Metrics Server aggregates resource usage data, such as container CPU and memory usage, in a Kubernetes cluster and makes it available via the Metrics API.

MNSpro Cloud Basic: MNSpro Cloud combines the management of your school network with a learning management system, whether you use Windows, iOS, or Android devices.

MongoDB Helm Chart: MongoDB is a scalable, high-performance, open source NoSQL database written in C++.

MySQL 5.6 Secured Ubuntu Container with Antivirus: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.

MySQL 8.0 Secured Ubuntu Container with Antivirus: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.

MySQL Helm Chart: MySQL is a fast, reliable, scalable, and easy-to-use open source relational database system. MySQL Server is designed to handle mission-critical, heavy-load production applications.

NATS Helm Chart: NATS is an open source, lightweight, and high-performance messaging system. It is ideal for distributed systems and supports modern cloud architectures and pub-sub, request-reply, and queuing models.

NetApp Cloud Volumes ONTAP: NetApp Cloud Volumes ONTAP, a leading enterprise-grade storage management solution, delivers secure, proven storage management services and supports up to a capacity of 368 TB.

Node.js Helm Chart: Node.js is a runtime environment built on the V8 JavaScript engine. Its event-driven, non-blocking I/O model enables the development of fast, scalable, and data-intensive server applications.

Node 6 Secured Jessie Container with Antivirus: Node.js is an open source, cross-platform JavaScript runtime environment for developing a diverse variety of tools and applications.

Odoo Helm Chart: Odoo is an open source ERP and CRM platform that can connect a wide variety of business operations such as sales, supply chain, finance, and project management.

On-Demand Mobility Services Platform: Deploy this intelligent, on-demand transportation operating system for automotive OEMs that need to run professional mobility services to embrace the new automotive era and manage the decline of vehicle ownership.

OpenCart Helm Chart: OpenCart is a free, open source e-commerce platform for online merchants. OpenCart provides a professional and reliable foundation from which to build a successful online store.

OrangeHRM Helm Chart: OrangeHRM is a feature-rich, intuitive HR management system that offers a wealth of modules to suit the needs of any business. This widely used system provides an essential HR management platform.

Osclass Helm Chart: Osclass allows you to easily create a classifieds site without any technical knowledge. It provides support for presenting general ads or specialized ads and is customizable, extensible, and multilingual.

ownCloud Helm Chart: ownCloud is a file storage and sharing server that is hosted in your own cloud account. Access, update, and sync your photos, files, calendars, and contacts on any device, on a platform that you own.

Paladion MDR powered by AI Platform – AI.saac: Paladion's managed detection and response, powered by our next-generation AI platform, is a managed security service that provides threat intelligence, threat hunting, security monitoring, incident analysis, and incident response.

Parse Server Helm Chart: Parse is a platform that enables users to add a scalable and powerful back end to launch a full-featured app for iOS, Android, JavaScript, Windows, Unity, and more.

Phabricator Helm Chart: Phabricator is a collection of open source web applications that help software companies build better software.


PHP 5.6 Secured Jessie-cli Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.


PHP 5.6 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.0 Secured Jessie Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.0 Secured Jessie-cli Container – Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.0 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.1 Secured Jessie Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.1 Secured Jessie-cli Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.1 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.2 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.3 Rc Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

phpBB Helm Chart: phpBB is a popular bulletin board that features robust messaging capabilities such as flat message structure, subforums, topic split/merge/lock, user groups, full-text search, and attachments.

PostgreSQL Helm Chart: PostgreSQL is an open source object-relational database known for reliability and data integrity. ACID-compliant, it supports foreign keys, joins, views, triggers, and stored procedures.

Project Ares: Project Ares by Circadence is an award-winning, gamified learning and assessment platform that helps cyber professionals of all levels build new skills and stay up to speed on the latest tactics.

Python Secured Jessie-slim Container – Antivirus: This image is for customers who want to deploy a self-managed Community Edition on a hardened kernel rather than a vanilla install.

Quvo: Quvo is a cloud-first, mobile-first working platform designed especially for public sector and enterprise mobile workforces.

RabbitMQ Helm Chart: RabbitMQ is a messaging broker that gives your applications a common platform to send and receive messages, and your messages a safe place to live until received.

Recordia: Smart Recording & Archiving Interactions: Recordia gathers all your valuable customer interactions in a single cloud repository so you know how your sales, marketing, and support staff are doing.

Redis Helm Chart: Redis is an open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets, and sorted sets.

Redmine Helm Chart: Redmine is a popular open source project management and issue tracking platform that covers multiple projects and subprojects, each with its own set of users and tools, from the same place.

Secured MySQL 5.7 on Ubuntu 16.04 LTS: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.

Secured MySQL 5.7 on Ubuntu 18.04 LTS: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.

Smart Planner: Smart Planner is a web platform for the optimization of productive processes, continuous improvement, and integral management of the supply chain. This application is available only in Spanish.

SmartVM API – Improve your vendor master file: The SmartVM API vendor master cleansing, enriching, and continuous monitoring technology automates vendor master management to help you mitigate risks, eliminate costly information gaps, and improve your supplier records.

SuiteCRM Helm Chart: SuiteCRM is an open source, enterprise-grade customer relationship management (CRM) application that is a fork of the popular SugarCRM application.

Talend Cloud: Remote Engine for Azure: Talend Cloud is a unified, comprehensive, and highly scalable integration platform as a service (iPaaS) that makes it easy to collect, govern, transform, and share data.

TensorFlow ResNet Helm Chart: TensorFlow ResNet is a client utility for use with TensorFlow Serving and ResNet models.

Terraform on Windows Server 2019: Terraform is used to create, change, and improve your infrastructure via declarative code.

TestLink Helm Chart: TestLink is test management software that facilitates software quality assurance. It supports test cases, test suites, test plans, test projects and user management, and stats reporting.

Tomcat Helm Chart: Tomcat is a widely adopted open source Java application and web server. Created by the Apache Software Foundation, it is lightweight and agile with a large ecosystem of add-ons.

Transfer Center: The comprehensive patient analytics and real-time reporting in Transfer Center help ensure improved care coordination, streamlined patient flow, and full regulatory compliance.

Unity Cloud: Unity is underpinned by Docker, so you can write custom full-code extensions in any language and enjoy fault tolerance, high availability, and scalability.

User Management Pack 365: User Management Pack 365 is a powerful software application that simplifies user lifecycle and identity management across Skype for Business deployments.

Visual Studio Emulator on Windows Server 2016: Visual Studio Emulator plays an important role in the edit-compile-debug cycle of your Android testing.

Webfopag – Online Payroll: Fully process payroll while meeting your business compliance rules. This application is available only in Portuguese.

WordPress Helm Chart: WordPress is one of the world's most popular blogging and content management platforms. It is powerful yet simple, and everyone from students to global corporations uses it to build beautiful, functional websites.

XAMPP: XAMPP is an easy-to-install Apache distribution designed to help developers get into the Apache universe.

XAMPP Windows Server 2019: XAMPP is an easy-to-install Apache distribution designed to help developers get into the Apache universe.

XS VM Lift & Shift with Provisioning & Metering: Modernize migration, provisioning, and automatic metering with the Beacon42 metering tool. This application is available only in Spanish.

ZooKeeper Helm Chart: ZooKeeper provides a reliable, centralized register of configuration data and services for distributed applications.


Consulting Services

360 Degree Security System: 1-Hour Briefing: This 360 Degree Security System briefing will address why antivirus solutions are obsolete, how to automatically track and block brute force attacks, and how to automatically track and block malicious activity.

Application Migration: 3-Day Assessment: Chef consultants will attend your site and assess how to use Chef Habitat to migrate a legacy app from an older platform (such as Windows Server 2008 R2 and SQL Server 2008 R2) to Azure.

Archiving & Backup Essentials: 1-Hr Briefing: Learn how to take advantage of tiered storage in Microsoft Azure to dramatically reduce your storage and backup costs and enhance your resilience.

Azure Cloud Governance 1-Day Workshop: Join this day-long cloud governance learning event designed for IT and senior leadership. Discover cloud governance, understand the main concepts, and learn about what you can do to give your business an advantage.

Azure Data Centre Modernization: 3-Day Assessment: This Azure assessment will provide you with an understanding of what's possible for your business with a business case for migration that includes timing and cost estimates.

Azure Maturity: 4-Week Assessment: The Azure Maturity assessment aims at estimating the maturity of your organization (strengths and weaknesses) and building a roadmap that will allow you to make your cloud journey a success.

Azure: 5-Day Enterprise Scaffold Workshop: This workshop provides training, processes, and security settings to scale up and optimize the adoption of Azure by removing blockers to scale and introducing processes to scale safely and efficiently.

BizTalk to Azure Migration Assessment – 2 Day: This assessment will provide you with detailed guidance on how you can successfully move your BizTalk applications to Azure Integration Services running in the cloud.

Business Continuity System: 1 Hour Briefing: This briefing is for every IT director who wants to minimize downtime with dependable recovery, reduce infrastructure costs, or easily run disaster recovery drills without affecting ongoing replication.

Data Centre Migration Essentials: 1-Hr Briefing: Identify your migration options and uncover the best ROI opportunities in migrating your apps, data, and/or infrastructure to Microsoft Azure.

Data Compliance Monitoring – 3 Week Assessment: The CTO Boost team will work closely with your risk and compliance stakeholders to assess your compliance strategy and build a plan toward compliance automation.

Databricks 5 Day Data Engineering PoC: We will work with your development team to demonstrate the performance, scale, and reduced complexity that Azure Databricks can offer your business.

Email Compliance Essentials: 1-Hr Briefing: Discover how you can use Azure to provide email journaling, retention management, and e-discovery to meet your email compliance needs.

Legacy App Migration – 8-Week Assessment and Design: After investigating your legacy apps, we deliver a roadmap for your Azure cloud journey. Additionally, we design a modern user experience (UX) leveraging the latest usability and distributed workforce techniques.

Modern Data Architecture: 1-Hour Assessment: During this session we will discuss the different components that make up a modern data architecture to assess whether it is right for you and how Data Thirst could help you deliver a successful data platform that uses it.

Win/SQL 2008 EOL to Azure: 5-Day Assessment: This free assessment is focused on applications running on end-of-support Windows and SQL Server 2008 products and provides a detailed upgrade and migration plan to Microsoft Azure.

Windows/SQL 2008 to Azure: 1 Week Implementation: Need an efficient path forward for applications based on Windows or SQL Server 2008? This 1-week implementation provides a data-driven migration of your Windows or SQL workload to Microsoft Azure.

Quelle: Azure

Azure IoT Hub message enrichment simplifies downstream processing of your data

We just released a new capability that lets you enrich messages egressed from Azure IoT Hub to other services. Azure IoT Hub provides an out-of-the-box capability to automatically deliver messages to different services and is built to handle billions of messages from your IoT devices. Messages carry important information that enables various workflows throughout the IoT solution. Message enrichments simplify post-processing of your data and can reduce the cost of calling device twin APIs for information. This capability allows you to stamp information onto your messages, such as details from your device twin, your IoT hub name, or any static property you want to add.

A message enrichment has three key elements: the key name for the enrichment, the value of the enrichment key, and the endpoints that the enrichment applies to. Message enrichments are added to the IoT Hub message as application properties. You can add up to 10 enrichments per IoT hub on the standard and basic tiers, and two enrichments on the free tier. Enrichments can be applied to messages routed to the built-in endpoint or to custom endpoints such as Azure Blob storage, Event Hubs, Service Bus queues, and Service Bus topics. Each enrichment has a key, which can be any string, and a value, which can be a path into the device twin (e.g., $twin.tags.field), the name of the IoT hub sending the message (e.g., $iothubname), or any static value (e.g., myapplicationId).
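Downstream, enrichments arrive as ordinary application properties on each message, so a consumer can read them without calling the device twin APIs. A minimal sketch, assuming the enrichment keys shown in the REST example below; the plain dict stands in for a message's application properties and is not an SDK type:

```python
# Sketch: a downstream consumer reading IoT Hub message enrichments.
# Enrichments are delivered as application properties on each message;
# this dict stands in for those properties (not an SDK type).
def read_enrichments(app_properties: dict) -> dict:
    """Pick out the enrichment keys configured on the hub."""
    keys = ("appId", "Iot-Hub-Name", "Device-Location")
    return {k: app_properties[k] for k in keys if k in app_properties}

props = {
    "appId": "myApp",                 # static value enrichment
    "Iot-Hub-Name": "my-hub",         # resolved from $iothubname
    "Device-Location": "building-9",  # resolved from $twin.tags.location
}
print(read_enrichments(props)["Device-Location"])  # building-9
```

Because the twin value is stamped onto the message itself, the consumer never needs a per-message twin lookup.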

You can also use the IoT Hub Create or Update REST API and add enrichments as part of the RoutingProperties. For example:

"routing": {
"enrichments": [
{
"key": "appId",
"value": "myApp",
"endpointNames": ["events"]
},
{
"key": "Iot-Hub-Name",
"value": "$iothubname",
"endpointNames": ["events"]
},
{
"key": "Device-Location",
"value": "$twin.tags.location",
"endpointNames": ["events"]
}
],
"endpoints": {
"serviceBusQueues": [],
"serviceBusTopics": [],
"eventHubs": [],
"storageContainers": []
},
"routes": [{
"name": myfirstroute",
"source": "DeviceMessages",
"condition": "true",
"endpointNames": [
"events"
],
"isEnabled": true
}],
"fallbackRoute": {
"name": "$fallback",
"source": "DeviceMessages",
"condition": "true",
"endpointNames": [
"events"
],
"isEnabled": true
}
}

This feature is available for preview in all public regions except East US, West US, and West Europe. We are excited for you to try this capability and build more streamlined IoT solutions for your business. Try this tutorial to get started.

We would love to hear more about your experiences with the preview and get your feedback! Are there other capabilities in IoT Hub that you would like to see? Please continue to submit your suggestions through the Azure IoT User Voice forum.
Quelle: Azure

Visual data ops for Apache Kafka on Azure HDInsight, powered by Lenses

This blog was written in collaboration with Andrew Stevenson, CTO at Lenses.

Apache Kafka is one of the most popular open source streaming platforms today. However, deploying and running Kafka remains a challenge for most. Azure HDInsight addresses this challenge by providing:

Ease-of-use: Quickly deploy Kafka clusters in the cloud and integrate simply with other Azure services.
Higher scale and lower total cost of operations (TCO): With managed disks, compute and storage are separated, enabling you to have hundreds of terabytes on a cluster.
Enhanced security: Bring your own key (BYOK) encryption, custom virtual networks, and topic level security with Apache Ranger.

But that’s not all – you can now successfully manage your streaming data operations, from visibility to monitoring, with Lenses, an overlay platform now generally available as part of the Azure HDInsight application ecosystem, right from within the Azure portal!

With Lenses, customers can now:

Easily look inside Kafka topics
Inspect and modify streaming data using SQL
Visualize application landscapes

Look inside Kafka topics

A typical production Kafka cluster has thousands of topics. Imagine you want to get a high-level view of all of these topics. You may want to understand the configuration of the various topics, such as the replication factor or partition distribution. Or you may want to look deeper inside a specific topic, investigating the message throughput and the leader broker.

While many of these insights can be provided through the Kafka CLI, Lenses greatly simplifies the experience by unifying key insights for topics and brokers via a simple to use and intuitive visual interface. With Lenses, inspecting your Kafka cluster is effortless.

Inspect and modify streaming data using SQL

What if you want to inspect the data within a Kafka topic and view the messages sent within a certain time frame? Or perhaps you want to process a subset of that stream and write it back to another Kafka topic? You can achieve both with SQL queries and Processors within the Lenses UI. You can write SQL queries to validate your streaming data and unblock your client organizations faster.

SQL Processors can be deployed and monitored to perform real-time transforms and analytics, supporting all the features you would expect in SQL, like joins and aggregations. You can also configure Lenses to scale out processing with Azure Kubernetes Service (AKS).

Visualize application landscapes

At the end of the day, you’re trying to create a solution that will create business impact. That solution will be composed of various microservices, data producers, and analytical engines. Lenses gives you easy insights into your application landscape, describing the running processes and the lineage of your data platform.

In the Topology view, running applications are added dynamically and recovered at startup, with the topics they use included. For creating end-to-end solutions, Lenses also provides an easy way to deploy connectors from the open source Stream Reactor project, which contains a large collection of Kafka Connect Connectors.

Check out the following resources to get started with Lenses on Azure HDInsight:

Create an HDInsight Kafka cluster
Lenses on Azure HDInsight

Quelle: Azure

Simplify the management of application configurations with Azure App Configuration

We’re excited to announce the public preview of Azure App Configuration, a new service aimed at simplifying the management of application configuration and feature flighting for developers and IT. App Configuration provides a centralized place in Microsoft Azure for users to store all their application settings and feature flags (a.k.a. feature toggles), control access to them, and deliver the configuration data where it is needed.

Eliminate hard-to-troubleshoot errors across distributed applications

Companies across industries are transforming into digital organizations in order to better serve their customers, foster tighter relationships, and respond to competition faster. We have witnessed rapid growth in the number of applications our customers run. Modern applications, particularly those running in a cloud, are typically made up of multiple components and are distributed in nature. Spreading configuration data across these components often leads to hard-to-troubleshoot errors in production. When a company has a large portfolio of applications, these problems multiply quickly.

With App Configuration, you can keep your application settings together so that:

You have a single consolidated view of all configuration data.
You can easily make changes to settings, compare values, and perform rollbacks.
You have numerous options to deliver these settings to your application, including injecting them directly into your compute service (e.g., App Service), embedding in a CI/CD pipeline, or retrieving them on-demand inside your code.

App Configuration allows you to maintain control over the configuration data and handle it with confidence.
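To make the on-demand delivery option concrete, here is a minimal Python sketch of the retrieval pattern: a client pulls settings from a central store and caches them briefly, so every component sees the same consolidated view. The `FakeConfigStore` stand-in and the class names are hypothetical; a real application would call the App Configuration REST APIs or an SDK instead.

```python
import time


class FakeConfigStore:
    """Stand-in for a central store such as Azure App Configuration."""

    def __init__(self, settings):
        self._settings = dict(settings)

    def fetch_all(self):
        return dict(self._settings)


class CachedConfig:
    """Pulls settings from a central store and caches them for a TTL,
    so repeated reads don't hit the store on every call."""

    def __init__(self, store, ttl_seconds=30.0):
        self._store = store
        self._ttl = ttl_seconds
        self._cache = {}
        self._fetched_at = None

    def get(self, key, default=None):
        now = time.monotonic()
        # Refresh the cache on first use or once the TTL has expired.
        if self._fetched_at is None or now - self._fetched_at > self._ttl:
            self._cache = self._store.fetch_all()
            self._fetched_at = now
        return self._cache.get(key, default)


store = FakeConfigStore({"Service:Endpoint": "https://example.test",
                         "Service:Timeout": "30"})
config = CachedConfig(store)
timeout = config.get("Service:Timeout")
```

The TTL controls the trade-off between seeing configuration changes quickly and limiting traffic to the central store.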

Increase release velocity with feature flags

One of the core solutions we provide with App Configuration is feature management. Traditionally, a new application feature needs to go through a series of tests before it can be released, which generally leads to long development cycles. Newer software engineering practices, such as feature management using feature flags, help shorten the cycles by enabling real testing in production while safeguarding application stability. Feature management solves a multitude of developer challenges, especially when building applications for the cloud.

App Configuration provides built-in support for feature management. You can leverage it to remotely control feature availability in your deployed application. While App Configuration can be used with any programming language through its REST APIs, the .NET Core and ASP.NET Core libraries offer a complete end-to-end solution out of the box.
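As a sketch of how a feature flag with a percentage rollout might gate a feature, here is a small Python example. It is illustrative only, not a mirror of the .NET feature management libraries, and the class and flag names are hypothetical.

```python
import hashlib


class FeatureFlag:
    """Illustrative feature flag with a percentage rollout filter."""

    def __init__(self, name, enabled, rollout_percentage=100):
        self.name = name
        self.enabled = enabled
        self.rollout = rollout_percentage

    def is_on_for(self, user_id):
        """Decide whether the feature is on for a given user."""
        if not self.enabled:
            return False
        # Hash the flag name and user id together so each user lands
        # in a stable bucket in [0, 100) for this particular flag.
        digest = hashlib.sha256(f"{self.name}:{user_id}".encode()).digest()
        bucket = int.from_bytes(digest[:2], "big") % 100
        return bucket < self.rollout


# Roll a hypothetical "beta-checkout" feature out to ~25% of users.
beta = FeatureFlag("beta-checkout", enabled=True, rollout_percentage=25)
```

Because the bucket is derived from a hash rather than a random draw, a given user consistently sees the feature on or off between requests.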

Get started now

App Configuration provides a complete turnkey solution for dealing with application settings and feature flags. It’s easy to onboard and use. You can find the complete documentation at “Azure App Configuration Preview documentation.” Please give it a try and let us know what you think!
Source: Azure

Isolate app integrations for stability, scalability, and speed with an integration service environment

Innovation at scale is a common challenge facing large organizations. A key contributor to the challenge is the complexity in coordinating the sheer number of apps and environments.

Integration tools, such as Azure Logic Apps, give you the flexibility to scale and innovate as fast as you want, on-premises or in the cloud. This is a key capability to have in place when migrating to the cloud, or even if you're cloud native. Integration has often been relegated to an afterthought. In the modern enterprise, however, application integration has to happen in conjunction with application development and innovation.

An integration service environment is the ideal solution for organizations that are concerned about noisy neighbor issues or data isolation, or that need more flexibility and configurability than the core Logic Apps service offers.

Building upon the existing set of capabilities, we are releasing a number of new, exciting changes that make integration service environments even better, such as:

Faster deployment times by halving the previous provisioning time

Higher throughput limits for an individual Logic App and connectors

An individual Logic App can now run for up to a year (365 days)

Integration service environment for Logic Apps is the next step for organizations who are pursuing integration as part of their core application development strategy. Here’s what an integration service environment can offer:

Direct, secure access to your virtual network resources. Enables Logic Apps to have secure, direct access to private resources, such as virtual machines, servers, and other services in your virtual network, including Azure services with service endpoints and on-premises resources via Azure ExpressRoute or site-to-site VPN.

Consistent, highly reliable performance. Eliminates the noisy neighbor issue, removing the fear of intermittent slowdowns that can impact business-critical processes, with a dedicated runtime in which only your Logic Apps execute.

Isolated, private storage. Sensitive data subject to regulation is kept private and secure, opening new integration opportunities.

Predictable pricing. Provides a fixed monthly cost for Logic Apps. Each integration service environment includes the free usage of one standard integration account and one enterprise connector. If your Logic Apps exceed 50 million action executions per month, the integration service environment could provide better value.
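A quick way to reason about that pricing trade-off is to compare your consumption-based cost against the fixed monthly price. The sketch below captures the break-even logic implied by the 50-million-executions guideline; the prices are caller-supplied and the figures in the usage example are hypothetical, so check the current Logic Apps pricing page for real numbers.

```python
def better_value(monthly_executions, price_per_million, ise_monthly_price):
    """Return which option looks cheaper for a given month.

    price_per_million and ise_monthly_price are hypothetical,
    caller-supplied figures, not actual Azure prices.
    """
    consumption_cost = monthly_executions / 1_000_000 * price_per_million
    if consumption_cost >= ise_monthly_price:
        return "integration service environment"
    return "consumption plan"


# Hypothetical figures: 100M executions/month at $1 per million actions
# versus a fixed $80/month for the dedicated environment.
choice = better_value(100_000_000, 1.0, 80.0)
```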

New to integration service environments for Logic Apps? Watch this Azure Friday introduction video for assistance.

Get started with an integration service environment for Azure Logic Apps today.
Source: Azure

Key causes of performance differences between SQL managed instance and SQL Server

Migrating to a Microsoft Azure SQL Database managed instance provides a host of operational and financial benefits you can only get from a fully managed and intelligent cloud database service. Some of these benefits come from features that optimize or improve overall database performance. After migration many of our customers are eager to compare workload performance with what they experienced with on-premises SQL Server, and sometimes they're surprised by the results. In many cases, you might get better results on the on-premises SQL Server database because a SQL Database managed instance introduces some overhead for manageability and high availability. In other cases, you might get better results on a SQL Database managed instance because the latest version of the database engine has improved query processing and optimization features compared to older versions of SQL Server.

This article will help you understand the underlying factors that can cause performance differences and the steps you can take to make fair comparisons between SQL Server and SQL Database.

If you're surprised by the comparison results, it's important to understand what factors could influence your workload and how to configure your test environments to ensure you have a fair comparison. Some of the top reasons why you might experience lower performance on a SQL Database managed instance compared to SQL Server are listed below. You can mitigate some of these by increasing and pre-allocating file sizes or adding cores; however, the others are prerequisites for guaranteed high availability and are part of the PaaS service.

Simple or bulk recovery model

The databases placed on a SQL Database managed instance use the full database recovery model to provide high availability and guarantee no data loss. In this scenario, one of the most common reasons for worse performance on a SQL Database managed instance is that your source database uses a simple or bulk-logged recovery model. The drawback of the full recovery model is that it generates more log data than the simple or bulk-logged recovery models, meaning your DML transaction processing in the full recovery model will be slower.

You can use the following query to determine what recovery model is used on your databases:

select name, recovery_model_desc from sys.databases

If you want to compare the workload running on SQL Server and SQL Database managed instances, for a fair comparison make sure the databases on both sides are using the full recovery model.

Resource governance and HA configuration

SQL Database managed instance has built-in resource governance that ensures 99.99% availability, and guarantees that management operations such as automated backups will be completed even under high workloads. If you don’t use similar constraints on your SQL Server, the built-in resource governance on SQL Database managed instance might limit your workload.

For example, there's an instance log throughput limit (up to 22 MB/s on the general purpose tier and up to 48 MB/s on the business critical tier) that ensures you can't load more data than the instance can back up. In this case, you might see higher INSTANCE_LOG_GOVERNOR wait statistics that don’t exist on your SQL Server instance. These resource governance constraints might slow down operations such as bulk load or index rebuild because these operations require higher log rates.
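The log-rate cap gives a simple lower bound on the duration of any fully logged operation. The following Python sketch (a hypothetical helper, using the 22 MB/s and 48 MB/s figures above) shows the arithmetic:

```python
def min_logged_op_seconds(log_bytes, log_rate_mb_per_sec):
    """Lower bound on the duration of an operation that generates
    `log_bytes` of transaction log under an instance log-rate cap."""
    return log_bytes / (log_rate_mb_per_sec * 1024 * 1024)


# A bulk load generating ~10 GB of log on the general purpose tier (22 MB/s cap):
gp_seconds = min_logged_op_seconds(10 * 1024**3, 22)
# The same load on the business critical tier (48 MB/s cap):
bc_seconds = min_logged_op_seconds(10 * 1024**3, 48)
```

However fast the client can push data, the operation cannot complete faster than this bound, which is why bulk loads and index rebuilds are the operations most visibly affected.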

In addition, the secondary replicas in business critical tier instances might slow down the primary database if they can't catch up on the changes and apply them, so you might see additional HADR_DATABASE_FLOW_CONTROL or HADR_THROTTLE_LOG_RATE_SEND_RECV wait statistics.

If you're comparing your SQL Server workload running on local SSD storage to the business critical tier, note that the business critical instance is an Always On availability group cluster with three secondary replicas. Make sure that your source SQL Server has a similar HA implementation using Always On availability groups with at least one synchronous commit replica. Comparing the business critical tier with a single SQL Server instance writing to a local disk is unrealistic because of the absence of HA on the source instance. If you use asynchronous Always On replicas, you get HA with better performance, but you are trading the possibility of data loss for performance, and you will get better results on the SQL Server instance.

Automated backup schedule

One of the main reasons to choose a SQL Database managed instance is that it guarantees you will always have backups of your databases, even under heavy workloads. The databases in a SQL Database managed instance have scheduled full, differential, and log backups. Full backups are taken every seven days, differential backups every twelve hours, and log backups every five to ten minutes. If you have multiple databases on the instance, there's a high chance at least one backup is currently running.

Since the backup operations are using some instance resources (CPU, disk, network), they can affect workload performance. Make sure the databases on the system that you compare with the managed instance have similar backup schedules. Otherwise, you might need to accept that you're getting better results on your SQL Server instance because you're making a trade-off between database recovery and performance, which is not possible on a SQL Database managed instance.
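From the schedule above you can estimate how often backups run. A small Python sketch (a hypothetical helper) tallies the weekly backup counts for a single database:

```python
def backups_per_week(log_backup_minutes=10):
    """Weekly backup counts per database, based on the schedule above:
    full every 7 days, differential every 12 hours, and log backups
    every 5-10 minutes (the interval is passed in; worst case 10)."""
    hours_per_week = 7 * 24
    return {
        "full": 1,
        "differential": hours_per_week // 12,
        "log": hours_per_week * 60 // log_backup_minutes,
    }


# With a 10-minute log backup interval, one database alone produces
# over a thousand backup operations per week.
weekly = backups_per_week()
```

Multiply these counts by the number of databases on the instance and it becomes clear why some backup is almost always running.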

If you're seeing unexpected performance differences, check if there is some ongoing full/differential backup either on the SQL Database managed instance or SQL Server instance that can affect performance of the currently running workload, using the following query:

SELECT r.command, query = a.text, start_time, percent_complete,
eta = dateadd(second,estimated_completion_time/1000, getdate())
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) a
WHERE r.command IN ('BACKUP DATABASE','BACKUP LOG')

If you see a full or differential backup running during a short-running benchmark, you might pause your workload and resume it once the backup finishes.

Connection and App to Database proximity

The application accessing the databases and executing the benchmark queries must have similar network proximity to the SQL Database managed instance and the SQL Server instance. If you place your application and SQL Server database in the local environment (or run an app like HammerDB on the same machine where SQL Server is installed), you will get better results on SQL Server compared to the SQL Database managed instance, which sits in a distributed cloud environment relative to the application. To get valid results, make sure that in both cases you run the benchmark application or query on separate virtual machines in the same region as the SQL Database managed instance. If you're comparing an on-premises environment with the equivalent cloud environment, try to measure bandwidth and latency between the app and the database and ensure they are similar.

A SQL Database managed instance is accessed via proxy gateway nodes that accept client requests and redirect them to the actual database engine nodes. To get results closer to your environment, enable ProxyOverride mode on your instance using the Set-AzSqlInstance PowerShell command to allow direct access from the client to the nodes currently hosting your SQL Database managed instance.

In addition, due to compliance requirements, a SQL Database managed instance enforces SSL/TLS transport encryption, which is always enabled. Encryption can introduce overhead when there is a large number of queries. If your on-premises environment does not enforce SSL encryption, you will see additional network overhead on the SQL Database managed instance.

Transparent data encryption

The databases on a SQL Database managed instance are encrypted by default using Transparent Data Encryption, which encrypts and decrypts every page exchanged with disk storage. This consumes more CPU resources and introduces additional latency when fetching data pages from, and saving them to, disk storage. Make sure the databases on both the SQL Database managed instance and SQL Server have Transparent Data Encryption either turned on or off, and that database encryption/decryption operations have completed before starting performance testing.

You can use the following query to determine whether the databases are encrypted:

select name, is_encrypted from sys.databases

Another important factor that might affect your performance is an encrypted TempDB. TempDB is encrypted if at least one database on your SQL Server or SQL Database managed instance is encrypted. As a result, you might compare two databases that are not encrypted, but because some other database on the SQL Database managed instance is encrypted (although it's not involved in the workload), TempDB will also be encrypted. The unencrypted databases will still use the encrypted TempDB, and any query that creates temporary objects or spills to disk will be slower. Note that TempDB is only decrypted once all user databases on an instance are decrypted and the instance restarts. Scaling a SQL Database managed instance to a new pricing tier and back is one way to restart it.

Database engine settings

Make sure the database engine settings such as database compatibility levels, trace flags, system configurations (‘cost threshold for parallelism’, ’max degree of parallelism’), database scoped configurations (LEGACY_CARDINALITY_ESTIMATOR, PARAMETER_SNIFFING, QUERY_OPTIMIZER_HOTFIXES, etc.), and database settings (AUTO_UPDATE_STATISTICS, DELAYED_DURABILITY) are the same on both the SQL Server and SQL Database managed instance databases.

The following sample queries can help you identify settings on SQL Server and an Azure SQL Database managed instance:

select compatibility_level, snapshot_isolation_state_desc, is_read_committed_snapshot_on,

  is_auto_update_stats_on, is_auto_update_stats_async_on, delayed_durability_desc
from sys.databases;
GO

select * from sys.database_scoped_configurations;
GO

dbcc tracestatus;
GO

select * from sys.configurations;

Compare the results of these queries on the SQL Database managed instance and SQL Server, and try to align any differences you identify.
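If you capture the query results from both instances as name-to-value maps, a short script can surface exactly which settings need aligning. This is a hypothetical helper, not part of any SQL tooling, and the sample values are made up for illustration:

```python
def diff_settings(sql_server, managed_instance):
    """Compare two settings snapshots (name -> value) captured from the
    queries above and report keys whose values differ or are missing."""
    keys = set(sql_server) | set(managed_instance)
    return {
        k: (sql_server.get(k), managed_instance.get(k))
        for k in sorted(keys)
        if sql_server.get(k) != managed_instance.get(k)
    }


# Hypothetical snapshots from the two environments:
on_prem = {"compatibility_level": 110, "delayed_durability_desc": "DISABLED"}
mi = {"compatibility_level": 150, "delayed_durability_desc": "DISABLED"}
differences = diff_settings(on_prem, mi)
```

Each reported key shows the SQL Server value alongside the managed instance value, so missing settings show up as `None` on one side.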

Note: The list of trace flags and configurations might be very long, so we recommend filtering them or looking only at the trace flags you've changed or know affect performance. Some trace flags are pre-configured on a SQL Database managed instance as part of the PaaS configuration and do not affect performance.

You might experiment with changing the compatibility level to a higher value, turning on the legacy cardinality estimator, or enabling the automatic tuning feature on the SQL Database managed instance, which might give you better results than your SQL Server database.

Also note that a SQL Database managed instance might provide better performance even if you align all parameters, because it includes the latest improvements, fixes that are not bound to compatibility level, and features such as forcing the last good plan that might improve your workload.

Hardware and environment specification

SQL Database managed instance runs on standardized hardware with pre-defined technical characteristics that are probably different than your environment. Some of the characteristics you might need to consider when comparing your environment with the environment where the SQL Database managed instance is running are:

Number of cores should be the same on both SQL Server and the SQL Database managed instance. Note that a SQL Database managed instance uses 2.3-2.4 GHz processors, which might differ from your processor speed; it might consume more or less CPU for the same operation due to CPU differences. If possible, check whether hyperthreading is used in the SQL Server environment when comparing to the Gen4 and Gen5 hardware generations on a SQL Database managed instance: an instance on Gen4 hardware does not use hyperthreading, while one on Gen5 does. If you are comparing SQL Server running on a bare-metal machine with a SQL Database managed instance or SQL Server running on a virtual machine, you'll probably get better results on the bare-metal instance.
Amount of memory, including the memory/core ratio (5.1 GB/core on Gen5, 7 GB/core on Gen4). A higher memory/core ratio provides a bigger buffer pool cache and increases the cache hit ratio. If your workload does not perform well on a managed instance with a memory/core ratio of 5.1, then you probably need to choose a virtual machine with the appropriate memory/core ratio instead of a SQL Database managed instance.
IO characteristics – You need to be aware that performance of the storage system might be very different compared to your on-premises environment. A SQL Database managed instance is a cloud database and relies on Azure cloud infrastructure.

The general purpose tier uses remote Azure Premium disks where IO performance depends on the file sizes. If you reach the log limit that depends on the file size, you might notice WRITE_LOG waits and less IOPS in file statistics. This issue might occur on a SQL Database managed instance if the log files are small and not pre-allocated. You might need to increase the size of some files in the general purpose tier to get better performance (see this Tech Community article Storage performance best practices and considerations for Azure SQL Managed Instance General Purpose tier).
A SQL Database managed instance does not use instant file initialization, so you might see additional PREEMPTIVE_OS_WRITEFILEGATHER wait statistics since the data files are filled with zero bytes during file growth.

Local or remote storage types – Make sure you're considering local SSD versus remote storage while doing the comparison. The general purpose tier uses remote storage (Azure Premium Storage) that can't match your on-premises environment if it uses local SSD or a high-performance SAN. In that case you would need to use the business critical tier as a target. The general purpose tier can be compared with other cloud databases like SQL Server on Azure Virtual Machines that also use remote storage (Azure Premium Storage). In addition, be aware that the remote storage used by a general purpose instance is still different from the remote storage used by a SQL Virtual Machine because:

The general purpose tier uses a dedicated IO resource per database file, depending on the size of the individual files, while SQL Server on an Azure Virtual Machine uses shared IO resources for all files, where IO characteristics depend on the size of the disk. If you have many small files, you will get better performance on a SQL Virtual Machine, while you can get better performance on a SQL Database managed instance if the use of files can be parallelized, because there are no noisy neighbors sharing the same IO resources.
SQL Virtual Machines use a read-caching mechanism that improves read speed.

If your hardware specs and resource allocation are different, you might see different performance results that can be resolved only by changing the service tier or increasing file size. If you are comparing a SQL Database managed instance with SQL Server on Azure Virtual Machines, make sure you choose a virtual machine series with a memory/CPU ratio similar to a SQL Database managed instance, such as the DS series.
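To sanity-check the hardware comparison, you can compute your environment's memory/core ratio and see which managed instance hardware generation it resembles. The helper below is a hypothetical sketch using the Gen4/Gen5 ratios quoted above:

```python
def memory_per_core(total_memory_gb, cores):
    """Memory/core ratio of an environment, in GB per core."""
    return total_memory_gb / cores


def closest_generation(ratio_gb_per_core):
    """Match a memory/core ratio to the managed instance hardware
    generations mentioned above (Gen5: 5.1 GB/core, Gen4: 7 GB/core)."""
    generations = {"Gen5": 5.1, "Gen4": 7.0}
    return min(generations, key=lambda g: abs(generations[g] - ratio_gb_per_core))


# An 8-core server with 56 GB of RAM has a 7 GB/core ratio, closest to Gen4.
match = closest_generation(memory_per_core(56, 8))
```

If neither generation's ratio is close to your environment, a virtual machine series with a matching ratio is likely the fairer comparison target.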

Azure SQL Database managed instance provides a powerful set of tools that can help you troubleshoot and improve performance of your databases, in addition to built-in intelligence that could automatically resolve potential issues. Learn more about monitoring and tuning capabilities of Azure SQL Database managed instance in the following article: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-monitoring-tuning-index
Source: Azure