Announcing service monitor partnerships for Azure Deployment Manager

Azure Deployment Manager is a new set of features for Azure Resource Manager that greatly expands your deployment capabilities. If you have a complex service that needs to be deployed to several regions, if you’d like greater control over when your resources are deployed in relation to one another, or if you’d like to limit your customers’ exposure to bad updates by catching them while in progress, then Deployment Manager is for you. Deployment Manager allows you to perform staged rollouts of resources, meaning they are deployed region by region in an ordered fashion.

During Microsoft Build 2019, we announced that Deployment Manager now supports integrated health checks. This means that as your rollout proceeds, Deployment Manager will integrate with your existing service health monitor, and if during deployment unacceptable health signals are reported from your service, the deployment will automatically stop and allow you to troubleshoot.

In order to make health integration as easy as possible, we’ve been working with some of the top service health monitoring companies to provide you with a simple copy/paste solution to integrate health checks with your deployments. If you’re not already using a health monitor, these are great solutions to start with:

Datadog, the leading monitoring and analytics platform for modern cloud environments. See how Datadog integrates with Azure Deployment Manager.
Site24x7, the all-in-one private and public cloud services monitoring solution. See how Site24x7 integrates with Azure Deployment Manager.
Wavefront, the monitoring and analytics platform for multi-cloud application environments. See how Wavefront integrates with Azure Deployment Manager.

These service monitors provide a simple copy/paste solution to integrate with Azure Deployment Manager’s health-integrated rollout feature, allowing you to easily prevent bad updates from having far-reaching impact across your user base. Stay tuned for Azure Monitor integration, which is coming soon.

Additionally, Azure Deployment Manager no longer requires sign-up for use, and is now completely open to the public!

To get started, check out the tutorial “Use Azure Deployment Manager with Resource Manager templates (Public preview)” or the documentation “Enable safe deployment practices with Azure Deployment Manager (Public preview)”. If you want to try out the health integration feature, check out the tutorial “Use health check in Azure Deployment Manager (Public preview)” for an end-to-end walkthrough.

We’re excited to have you give Azure Deployment Manager a try, and, as always, we are listening to your feedback.
Source: Azure

Unlock new features in the MT3620 MCU with the Azure Sphere 19.05 release

Each quarter, the Azure Sphere team works to open new scenarios to customers through new features on-chip and in the cloud.  The Azure Sphere 19.05 release continues this theme by unlocking the real-time capable cores that reside on the MT3620. Co-locating these cores within the same SOC enables new, real-time scenarios on the M4 cores while continuing to support connectivity scenarios on the high-level core. This release also introduces support for DHCP-based Ethernet connections to the cloud.

We are also pleased to announce that the Azure Sphere hardware ecosystem continues to expand with new modules available for mass production and new, less expensive development boards. Finally, new Azure Sphere reference solutions are available to accelerate your solution’s time to market.

To build applications that take advantage of this new functionality, please download and install the latest Azure Sphere SDK Preview for Visual Studio. All Wi-Fi connected devices will automatically receive an updated Azure Sphere operating system that contains support for these new features.

Enabling new MT3620-based features

Real-time core preview—The OS and SDK support development, deployment, and debugging of real-time capable apps that use SPI, I2C, GPIO, UART, and ADC on the MT3620’s two M4 cores. GitHub sample apps demonstrate GPIO, UART, and real-time core to high-level core communication.
ADC sample—This real-time core sample app demonstrates how to use the MT3620’s analog-to-digital converters to sample voltages. See the ADC GitHub sample for more details.

Tools and libraries

Improved CMAKE support—Visual Studio now supports one-touch deploy and debug for applications that use CMake.
Application runtime version—Application properties specify the required application runtime version (ARV), and azsphere commands detect conflicts. See the online documentation for details.
Random number generation (RNG)—The POSIX base API supports random number generation from Pluton's RNG.
Easy hardware targeting—Hardware-specific JSON and header files are provided in the GitHub sample apps repository. You can now easily target a particular hardware product by changing an application property.

New connectivity options

Ethernet internet interface—This release supports an Ethernet connection as an alternative to a Wi-Fi connection for communicating with the Azure Sphere Security Service and your own services. Our GitHub samples now demonstrate how to wire the supported Microchip part, bring up the Ethernet interface, and use it to connect to Azure IoT or your own web services.
Local device discovery—The Azure Sphere OS offers new network firewall and multicast capabilities that enable apps to run mDNS and DNS-SD for device discovery on local networks. Look for more documentation in the coming weeks on this feature.

Support for additional hardware platforms

Several hardware ecosystem partners have recently announced new Azure Sphere-enabled products:

SEEED MT3620 Mini Development Board—This less-expensive development board with single-band Wi-Fi is designed for size-constrained prototypes. It uses the AI-Link module for a quick path from prototype to commercialization.
AI-Link WF-M620-RSA1 Wi-Fi Module—This single-band Wi-Fi module is designed for cost-sensitive applications.
USI Azure Sphere Combo Module—This module supports both dual-band Wi-Fi and Bluetooth. The on-board Bluetooth chipset supports BLE and Bluetooth 5 Mesh. The chipset can also work as an NFC tag to support non-contact Bluetooth pairing and device provisioning scenarios.
Avnet Guardian module—This module enables the secure connection of existing equipment to the internet. It attaches to the equipment through Ethernet and connects to the cloud via dual-band Wi-Fi.
Avnet MT3620 Starter Kit—This development board with dual-band Wi-Fi connectivity features modular connectors that support a range of MikroE Click and Grove modules.
Avnet Wi-Fi Module—This dual-band Wi-Fi module with stamp hole (castellated) pin design allows for easy assembly and simpler quality assurance.

There has never been a better time to begin developing on Azure Sphere, using the development kit or module that best fits your needs, or those of your customers, with highly customizable offerings available.

Get started using the Azure Sphere SDK Preview for Visual Studio.
Need help? Connect with experts through the Azure Sphere forum or on Stack Overflow.
Share product feedback and requests.
Stay current with the latest Azure Updates.

Email us at nextinfo@microsoft.com to kick off an Azure Sphere engagement with your Microsoft representative.
Source: Azure

Azure Cost Management updates – May 2019

Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Cost Management can help you better understand how and where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Expanded general availability (GA): Pay-as-you-go and Azure Government
New preview: Manage AWS and Azure costs together in the Azure portal
New getting started videos
Monitor costs based on your pay-as-you-go billing period
More comprehensive scheduled exports
Extended date picker
Share link to customized views
Documentation updates

Let's dig into the details…

 

Expanded general availability (GA): Pay-as-you-go and Azure Government

Azure Cost Management is now generally available for the following account types:

Public cloud

Enterprise Agreements (EA)
Microsoft Customer Agreements (MCA)
Pay-as-you-go (PAYG) and dev/test subscriptions

Azure Government

Enterprise Agreements

Stay tuned for more information about preview support for additional account types and clouds, like Cloud Solution Providers (CSP) and Sponsorship subscriptions. We know how critical it is for you to have a rich set of cost management tools for every account across every cloud, and we hear you loud and clear.

 

New preview: Manage AWS and Azure costs together in the Azure portal

Many organizations are adopting multi-cloud strategies for additional flexibility, but with increased flexibility comes increased complexity. From different cost models and billing cycles to underlying cloud architectures, having a single cross-cloud cost management solution is no longer a luxury, but a fundamental requirement to efficiently and effectively monitor, control, and optimize costs. This is where Azure Cost Management can help.

Start by creating a new AWS cloud connector from the Azure portal. From the home page of the Azure portal, select the Cost Management tile. Then, select Cloud connectors (preview) and click the "Add" command. Simply specify a name, pick the management group you want AWS costs to be rolled up to, and configure the AWS connection details.

Cost Management will start ingesting AWS costs as soon as the AWS cost and usage report is available. If you created a new cost and usage report, AWS may take up to 24 hours to start exporting data. You can check the latest status from the cloud connectors list.

Once available, open cost analysis and change the scope to the management group you selected when creating the connector. Group by provider to see a breakdown of AWS and Azure costs. If you connected multiple AWS accounts or have multiple Azure billing accounts, group by billing account to see a breakdown by account.

In addition to seeing AWS and Azure costs together, you can also change the scope to your AWS consolidated or linked accounts to drill into AWS costs specifically. Create budgets for your AWS scopes to get notified as costs hit important thresholds.

Managing AWS costs is free during the preview; you will not be charged. If you would like to automatically upgrade when AWS support is generally available, navigate to the connector, select the Automatically charge the 1 percent at general availability option, and then select the desired subscription to charge.

For more information about managing AWS costs, see the documentation "Manage AWS costs and usage in Azure."

 

New getting started videos

Learning a new service can take time. Reading through documentation is great, but you've told us that sometimes you just want a quick video to get you started. Well, here are eight:

Azure Cost Management overview (4m)
Azure Cost Management and Cloudyn (4m)
How to manage and control your cloud costs (4m)
How to analyze spending in Power BI (3m)
How to create a budget to monitor your spending (5m)
How to schedule exports to storage (2m)
How to assign access (5m)
How to review tag policies (4m)

If you're looking for something a little more in-depth, try these:

Azure Cost Management technical overview (34m)
How to transition from Cloudyn to Azure Cost Management (31m)

 

Monitor costs based on your pay-as-you-go billing period

As you know, your pay-as-you-go and dev/test subscriptions are billed based on the day you signed up for Azure. They don’t map to calendar months, like EA and MCA billing accounts. This has made reporting on and controlling costs for each bill a little harder, but now you have the tools you need to effectively manage costs based on your specific billing cycle.

When you open cost analysis for a PAYG subscription, it defaults to the current billing period. From there, you can switch to a previous billing period or select multiple billing periods. More on the extended date picker options later.

If you want to get notified before your bill hits a specific amount, create a budget for the billing month. You can also specify if you want to track a quarterly or yearly budget by billing period.

Sometimes you need to export data and integrate it with your own datasets. Cost Management offers the ability to automatically push data to a storage account on a daily, weekly, or monthly basis. Now you can export your data as it is aligned to the billing period, instead of the calendar month.

We love hearing your suggestions, so let us know if there's anything else that would help you better manage costs during your personalized billing period.

 

More comprehensive scheduled exports

Scheduled exports enable you to react to new data being pushed to you instead of periodically polling for updates. As an example, a daily export of month-to-date data pushes a new CSV file every day from January 1–31. These daily month-to-date exports have been updated to continue pushing data on the configured schedule until they include the full dataset for the period. For example, the same daily month-to-date export would continue to push new January data on February 1 and February 2 to account for any data that may have been delayed. This update guarantees you will receive a full export for every period, starting April 2019.

For more information about how cost data is processed, see the documentation "Understand Cost Management data."

 

Extended date picker in cost analysis

You've told us that analyzing cost trends and investigating spending anomalies sometimes requires a broad set of date ranges. You may want to look at the current billing period to keep an eye on your next bill or maybe you need to look at the last 30 days in a monthly status meeting. Some teams are even looking at the last 7 days on a weekly or even daily basis to identify spending anomalies and react as quickly as possible. Not to mention the need for longer-term trend analysis and fiscal planning.

Based on all the great feedback you've shared around needing a rich set of one-click date options, cost analysis now offers an extended date picker with more options to make it easier than ever for you to get the data you need quickly.

We also noticed trends in how you navigate between periods. To simplify this, you can now quickly navigate backward and forward in time using the < PREVIOUS and NEXT > links at the top of the date picker. Try it yourself and let us know what you think.

 

Share links to customized views

We've heard you loud and clear about how important it is to save and share customized views in cost analysis. You already know you can pin a customized view to the Azure portal dashboard, and you already know you can share dashboards with others. Now you can share a direct link to that same customized view. If somebody who doesn't have access to the scope opens the link, they'll get an access-denied message, but they can change the scope to keep the customizations and apply them to their own scope.

You can also customize the scope to share a targeted URL. Here's the format of the URL:

https://portal.azure.com#[@{domain}]/blade/Microsoft_Azure_CostManagement/Menu/open/CostAnalysis[/scope/{url-encoded-scope}]/view/{view-config}

The domain is optional. If you remove that, the user's preferred domain will be used.

The scope is also optional. If you remove that, the user's default scope will be the first billing account, management group, or subscription found. If you specify a custom scope, remember to URL-encode (e.g. "/" → "%2F") the scope, otherwise cost analysis will not load correctly.
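For example, Python's standard library can produce the encoded scope (the subscription ID below is a placeholder):

```python
from urllib.parse import quote

# Placeholder scope -- substitute your own subscription ID.
scope = "/subscriptions/00000000-0000-0000-0000-000000000000"

# safe="" forces "/" to be encoded as %2F rather than left intact.
encoded_scope = quote(scope, safe="")
print(encoded_scope)
# -> %2Fsubscriptions%2F00000000-0000-0000-0000-000000000000
```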

The view configuration is a gzipped, URL-encoded JSON object. As an example, here's how you can decode a customized view:

Copy URL from the portal:

https://portal.azure.com#@domain.onmicrosoft.com/blade/Microsoft_Azure_CostManagement/Menu/open/CostAnalysis/scope/%2Fsubscriptions%2F00000000-0000-0000-0000-000000000000/view/H4sIAAAAAAAA%2F41QS0sDMRD%2BL3Peha4oam%2FSgnhQilYvpYchOxuDu8k6mVRL2f%2FupC8FofSYL99zNrAiji54GMPFqLotR5flqCp7ppWjLyjgMxGv305zOhJ2Rn%2FvjCRsJyHKQQnjDci6J%2F18jWhJcXEdNYxdxiYpSuj24IzYhTorGlbwN6y6yYxwRK7K6hqGAmoUjCRZYRl9apGdaCRM0bVr1aC1TBZl212LBNm304ffNZgxzfF7X7lJ3uzI8JI6GDTDcki98xbGi%2BOSWsv67UGKg80zxZDY0H2mP2VsWKqfa4U4p6HXYYvlkC3NO7LkCEHzQfUktKnLVmhM6nSEkHKhwTbmc%2FVuFST%2BjZ%2F%2Bj3%2BknDMUpziHivPMOI%2F6UOuM4QcE8nHtJAIAAA%3D%3D

Trim down to the view configuration after "/view/":

H4sIAAAAAAAA%2F41QS0sDMRD%2BL3Peha4oam%2FSgnhQilYvpYchOxuDu8k6mVRL2f%2FupC8FofSYL99zNrAiji54GMPFqLotR5flqCp7ppWjLyjgMxGv305zOhJ2Rn%2FvjCRsJyHKQQnjDci6J%2F18jWhJcXEdNYxdxiYpSuj24IzYhTorGlbwN6y6yYxwRK7K6hqGAmoUjCRZYRl9apGdaCRM0bVr1aC1TBZl212LBNm304ffNZgxzfF7X7lJ3uzI8JI6GDTDcki98xbGi%2BOSWsv67UGKg80zxZDY0H2mP2VsWKqfa4U4p6HXYYvlkC3NO7LkCEHzQfUktKnLVmhM6nSEkHKhwTbmc%2FVuFST%2BjZ%2F%2Bj3%2BknDMUpziHivPMOI%2F6UOuM4QcE8nHtJAIAAA%3D%3D

URL decode the view configuration:

H4sIAAAAAAAA/41QS0sDMRD+L3Peha4oam/SgnhQilYvpYchOxuDu8k6mVRL2f/upC8FofSYL99zNrAiji54GMPFqLotR5flqCp7ppWjLyjgMxGv305zOhJ2Rn/vjCRsJyHKQQnjDci6J/18jWhJcXEdNYxdxiYpSuj24IzYhTorGlbwN6y6yYxwRK7K6hqGAmoUjCRZYRl9apGdaCRM0bVr1aC1TBZl212LBNm304ffNZgxzfF7X7lJ3uzI8JI6GDTDcki98xbGi+OSWsv67UGKg80zxZDY0H2mP2VsWKqfa4U4p6HXYYvlkC3NO7LkCEHzQfUktKnLVmhM6nSEkHKhwTbmc/VuFST+jZ/+j3+knDMUpziHivPMOI/6UOuM4QcE8nHtJAIAAA==

Gzip-decompress the decoded string to get the customized view (note that some tools may require base64-decoding the URL-decoded string as well):

{
  "version": "2019-04-01-preview",
  "queryVersion": "2019-04-01-preview",
  "metric": "ActualCost",
  "query": {
    "type": "Usage",
    "timeframe": "Custom",
    "timePeriod": { "from": "2019-04-18", "to": "2019-05-17" },
    "dataset": {
      "granularity": "Daily",
      "aggregation": { "totalCost": { "name": "PreTaxCost", "function": "Sum" } },
      "grouping": [ { "type": "dimension", "name": "ResourceGroupName" } ],
      "filter": { "and": [] }
    }
  },
  "chart": "StackedColumn",
  "accumulated": false,
  "pivots": [
    { "type": "Dimension", "name": "Meter" },
    { "type": "Dimension", "name": "ResourceType" },
    { "type": "Dimension", "name": "ResourceId" }
  ]
}
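The decode steps above (URL-decode, base64-decode, gzip-decompress) can be scripted with Python's standard library. Here's a minimal sketch; the round trip at the end uses a tiny stand-in configuration rather than the full example above:

```python
import base64
import gzip
import json
from urllib.parse import quote, unquote

def decode_view_config(encoded):
    """URL-decode, base64-decode, then gzip-decompress a /view/ value."""
    url_decoded = unquote(encoded)           # step 1: URL-decode
    raw = base64.b64decode(url_decoded)      # step 2: base64-decode to gzip bytes
    return json.loads(gzip.decompress(raw))  # step 3: decompress and parse JSON

# Round trip with a stand-in configuration:
config = {"metric": "ActualCost", "chart": "StackedColumn"}
encoded = quote(
    base64.b64encode(gzip.compress(json.dumps(config).encode())).decode(),
    safe="",
)
assert decode_view_config(encoded) == config
```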

Understanding how the view configuration works means you can:

Link to cost analysis from your own apps
Build out and automate the creation of custom dashboards via ARM deployment templates
Copy the query property and use it to get the same data used to render the main chart (or table, if using the table view)

You'll hear more about the view configuration soon, so keep an eye out.

 

Documentation updates

Lots of documentation updates! Here are a few you might be interested in:

Numerous updates to "Understanding Cost Management data"
Added pay-as-you-go billing period support to budgets and exports tutorials
Added note about supported scopes for exports
Added view picker and updated date picker in Cost Analysis tutorial
Added new videos to overview, Cost Analysis, budgets, exports, assigning access, and Cloudyn
And, in case you missed it, also check out the documentation "Understand and work with scopes"

Want to keep an eye on all documentation updates? Check out the Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select "Edit" at the top of the doc and submit a quick pull request.

What's next?

These are just a few of the big updates from the last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter for updates, tips, and tricks throughout the week!
Source: Azure

Testing Cloud Pub/Sub clients to maximize streaming performance

Cloud Pub/Sub, part of Google Cloud Platform (GCP), lets you ingest event streams from a wide variety of data sources, at nearly any scale, and allows horizontal application scaling without additional configuration. This model allows customers to sidestep complexities in operations, scaling, compliance, security, and more, leading to simpler pipelines for analytics and machine learning. However, Cloud Pub/Sub’s enablement of horizontal scaling adds the additional requirement to orchestrate multiple machines (instances or Cloud Pub/Sub clients). So to verify that Cloud Pub/Sub client libraries can handle high-throughput single-machine workloads, we must first understand the performance characteristics of a single, larger machine.

With that in mind, we’ve developed an open-source load test framework, now available on GitHub. In this post, you’ll see single-machine application benchmarks showing how Cloud Pub/Sub can be expected to scale for various programming languages and scenarios. These details should also help you understand how a single Cloud Pub/Sub client is expected to scale using different client libraries, as well as how to tune the settings of these libraries to achieve maximum throughput. Note that the Cloud Pub/Sub service is designed to scale seamlessly with the traffic you send to it from one or more clients; the aggregate throughput of the Cloud Pub/Sub system is not being measured here.

Here’s how we designed and tested this framework.

Setting up the test parameters

We will publish and subscribe from single, distinct Compute Engine instances of various sizes running Ubuntu 16.04 LTS. The test creates a topic and publishes to it from a single machine as fast as it can. The test also creates a single subscription to that topic, and a different machine reads as many messages as possible from that subscription. We’ll run the primary tests with 1KB-sized messages, typical of real-world Cloud Pub/Sub usage.
Tests will be run for a 10-minute burn-in period, followed by a 10-minute measurement period. The code we used for testing is publicly available on GitHub, and you can find results in their raw form here.

Using vertical scaling

As the number of cores in the machine increases, the corresponding publisher throughput should be able to increase to process the higher number of events being generated. To do so, you’ll want to choose a strategy depending on the language’s ability for thread parallelism.

For thread-parallelizable languages such as Java, Go, and C#, you can increase publisher throughput by having more threads generating load for a single publisher client. In the test, we set the number of threads to five times the number of physical cores. Because we are publishing in a tight loop, we used a rate limiter to prevent running out of memory or network resources (though this would probably not be needed for a normal workflow). We tuned the number of threads per core on the subscribe side on a per-language basis, and ran both Java and Go tests at the approximate optimum of eight threads/goroutines per core.

For Python, Node, Ruby, and PHP, which use a process parallelism approach, it’s best to use one publisher client per hardware core to enable maximum throughput. This is because gRPC, upon which the client libraries tested here are built, requires tricky initialization after all processes have been forked to operate properly.

Getting the test results

The following are results from running the load test framework under various conditions in Go, Java, Node, and Python, the four most popular languages used for Cloud Pub/Sub. These results should be representative of the best-case performance of similar languages when only minimal processing is done per message.
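The process-parallel publishing pattern described above can be sketched in Python. The function names here are illustrative, and the publish call is a stand-in; a real publisher would create a google-cloud-pubsub client inside each child process, after the fork:

```python
import multiprocessing as mp
import os

def publish_worker(messages):
    # In real code, create the gRPC-based publisher client HERE, inside the
    # child process -- clients created before forking misbehave under gRPC.
    # client = pubsub_v1.PublisherClient()   # hypothetical; not created here
    published = 0
    for _ in messages:
        published += 1  # stand-in for client.publish(topic, data=...)
    return published

def publish_across_cores(messages):
    """One publisher worker per hardware core, as recommended for
    process-parallel languages (Python, Node, Ruby, PHP)."""
    procs = os.cpu_count() or 1
    chunks = [messages[i::procs] for i in range(procs)]
    # "fork" keeps this demo simple; a real deployment would also guard the
    # entry point with `if __name__ == "__main__"`.
    with mp.get_context("fork").Pool(processes=procs) as pool:
        return sum(pool.map(publish_worker, chunks))
```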
C# performance should be similar to Java and Go, whereas Ruby and PHP would likely exhibit performance on par with Python.

To achieve maximum publisher throughput, we set the batching settings to create the largest batches allowed by the service: the maximum of either 1,000 messages or 10MB per batch, whichever is smaller. Note that these may not be the optimal settings for all use cases. Larger batch settings and longer wait times can delay message persistence and increase per-message latency.

In testing, we found that publisher throughput effectively scales with an increase in available machine cores. Compiled/JIT language throughput from Java, Go, and Node is significantly better than that of Python. For high-throughput publish use cases such as basic data transfer, you should choose one of these languages. Note that Java performs the best among the three. You can see here how each performed:

The subscriber throughput should also be able to scale to take full advantage of the available resources for handling messages. Similar to the publisher model, thread-parallel languages should use the parallelism options in the subscriber client to achieve optimal throughput. Subscriber flow control settings, by default, are set to 1,000 messages and 1GB outstanding for most languages. It’s best to relax those for the small-message use case, since they will hit the 1,000-message limit with only 1MB of outstanding messages, limiting their throughput. For our load testing purposes, we used no message limit and a 100MB-per-worker-thread size limit. Process-parallel languages should use one worker per process, similar to the publisher case.

Subscriber throughput is much higher in Java or Go than in Node or Python. For high-throughput subscribing use cases, the difference between Java and Go performance is negligible, and either would work well.
You can see this in graph form here:

Considering other scaling modes

There are other considerations when looking at how the system scales beyond just the number of allocated CPU cores. For example, the number of workers per CPU core has a great impact on the throughput of subscriber clients. High-throughput users should change parallelPullCount in Java and numGoroutines in Go from their default values, which are set to one to prevent small subscribers from running out of memory. The default client library settings, because they exist as a safety mechanism, are tuned away from high-throughput use cases. For both Java and Go, the performance peak in our tests occurred at around 128 workers on a 16-core machine, or eight workers per core. Here’s what that looked like:

Message size can also have an effect on throughput. Cloud Pub/Sub can more efficiently process larger messages than a batch of small messages, since less work is required to package them for publishing or unpackage them for subscribing. This is important for data transfer use cases: if you have control over the size of your data blocks, larger sizes will yield higher throughput. You still should not expect to see large gains beyond 10KB message sizes using these settings. If you increase the thread parallelism settings, you may see higher throughput for larger message sizes.

A few notes on these particular results: if you want to replicate these results, you’ll need to publish at higher throughputs. To do so, apply for an exemption to the standard limits in the Google Cloud Console limits menu. In addition, an outstanding Node.js memory bug prevented us from collecting throughput results for 9MB message sizes. Here’s a look at throughput results on 16-CPU machines:

The Cloud Pub/Sub client libraries are set up to be useful out of the box for the vast majority of use cases.
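The flow control settings mentioned earlier boil down to a cap on outstanding (delivered-but-unacked) messages. Conceptually, the message-count half of that cap works like the stand-alone sketch below; this is an illustration of the idea, not the client libraries' actual implementation:

```python
import threading

class FlowController:
    """Caps outstanding messages, mirroring the message-count half of the
    subscriber flow control settings (the byte limit works analogously)."""

    def __init__(self, max_outstanding_messages):
        self._sem = threading.BoundedSemaphore(max_outstanding_messages)

    def lease(self, blocking=True):
        # The streaming pull blocks here once the cap is reached.
        return self._sem.acquire(blocking=blocking)

    def release(self):
        # Called when the application acks (or nacks) the message.
        self._sem.release()

fc = FlowController(max_outstanding_messages=2)
assert fc.lease() and fc.lease()     # two messages outstanding
assert not fc.lease(blocking=False)  # cap reached: delivery pauses
fc.release()                         # one message acked
assert fc.lease(blocking=False)      # delivery resumes
```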
If you want to maximize throughput, you can easily modify the batching and flow control settings in Go, Java, and C# to achieve throughput that vertically scales with machine size. There are language limitations on pursuing a shared-memory threading model, so you should scale purely horizontally to more machines if that is feasible in order to reach maximum throughput while using Python or Node.js; it can be hard to get single client instances to scale beyond one core. Try Compute Engine for horizontal autoscaling for this purpose, or Cloud Functions to deploy clients in a fully managed way. Learn more about Cloud Pub/Sub here.
Source: Google Cloud Platform

Integrating Azure CNI and Calico: A technical deep dive

This post was co-authored by Andy Randall, VP of Business Development, Kinvolk Gmbh

We are pleased to share the availability of Calico Network Policies in Azure Kubernetes Service (AKS). Calico policies let you define filtering rules to control the flow of traffic to and from Kubernetes pods. In this blog post, we will explore in more technical detail the engineering work that went into enabling Azure Kubernetes Service to work with a combination of Azure CNI for networking and Calico for network policy.

First, some background. Simplifying somewhat, there are three parts to container networking:

Allocating an IP address to each container as it’s created; this is IP address management, or IPAM.

Routing the packets between container endpoints, which in turn splits into:

Routing from host to host (inter-node routing).

Routing within the host between the external network interface and the container, as well as routing between containers on the same host (intra-node routing).

Ensuring that packets that should not be allowed are blocked (network policy).

Typically, a single network plug-in technology addresses all these aspects. However, the open API used by Kubernetes Container Network Interface (CNI), actually allows you to combine different implementations.

The choice of configurations brings you opportunities, but also calls for a plan to make sure that the mechanisms you choose are compatible and enable you to achieve your networking goals. Let’s look a bit more closely into those details.

Networking: Azure CNI

Cloud networks, like Azure, were originally built for virtual machines with typically just one or a small number of relatively static IP addresses. Containers change all that, and introduce a host of new challenges for the cloud networking layer, as dozens or even hundreds of workloads are rapidly created and destroyed on a regular basis, each of which is its own IP endpoint on the underlying network.

The first approach to enabling container networking in the cloud leveraged overlays, like VXLAN, to ensure only the host IP was exposed to the underlying network. Overlay network solutions like flannel, or AKS’s kubenet (basic) networking mode, do a great job of hiding the underlying network from the containers. Unfortunately, that is also the downside: the containers are not actually running in the underlying VNET, meaning they cannot be addressed like a regular endpoint and can only communicate outside of the cluster via network address translation (NAT).

With Azure CNI, which is enabled with advanced mode networking in AKS, we added the ability for each container to get its own real IP address within the same VNET as the host. When a container is created, the Azure CNI IPAM component assigns it an IP address from the VNET, and ensures that the address is configured on the underlying network through the magic of the Azure software-defined network layer, taking care of the inter-node routing piece.

So with IPAM and inter-node routing taken care of, we now need to consider intra-node routing. How do we do intra-node routing, i.e. get a packet between two containers, or between the host’s network interface (typically eth0) and the virtual ethernet (veth) interface of the container?

It turns out the Linux kernel is rich in networking capabilities, and there are many different ways to achieve this goal. One of the simplest and easiest is with a virtual bridge device. With this approach, all the containers are connected on a local layer two segment, just like physical machines that are connected via an ethernet switch.

Packets from the ‘real’ network are switched through the bridge to the appropriate container via standard layer two techniques (ARP and address learning).
Packets to the real network are passed through the bridge, to the NIC, where they are routed to the remote node.
Packets from one container to another also flow through the bridge, just like two PCs connected on an ethernet switch.

This approach, which is illustrated in figure one, has the advantage of being high performance and requiring little control plane logic to maintain, helping to ensure robustness.

Figure 1: Azure CNI networking

Network policy with Azure

Kubernetes has a rich policy model for defining which containers are allowed to talk to which other ones, as defined in the Kubernetes Network Policy API. As we demonstrated recently at Ignite, we have now implemented this API and it works in conjunction with Azure CNI in AKS or in your own self-managed Kubernetes clusters in Azure, with or without AKS-Engine.
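For reference, a minimal policy in this API might look like the following (an illustrative sketch; the pod labels app=frontend and app=backend are assumed for the example):

```yaml
# Illustrative NetworkPolicy: pods labeled app=backend accept ingress
# traffic only from pods labeled app=frontend (labels are hypothetical).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
```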

We translate the Kubernetes network policy model to a set of allowed IP address pairs, which are then programmed as rules in the Linux kernel iptables module. These rules are applied to all packets going through the bridge. This is shown in figure two.
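Conceptually, that translation can be sketched as follows. This is an illustrative toy, not the actual Azure policy manager implementation: it expands a label-selector policy into the concrete (source IP, destination IP) pairs that could then be programmed as iptables rules.

```python
# Illustrative sketch (NOT the actual Azure policy manager code): expand a
# label-selector policy into the allowed (source, destination) IP pairs.

pods = {
    "frontend-1": {"labels": {"app": "frontend"}, "ip": "10.240.0.10"},
    "frontend-2": {"labels": {"app": "frontend"}, "ip": "10.240.0.11"},
    "backend-1":  {"labels": {"app": "backend"},  "ip": "10.240.0.20"},
}

# Toy policy: pods labeled app=backend accept traffic only from app=frontend.
policy = {"target": {"app": "backend"}, "allow_from": {"app": "frontend"}}

def matches(labels, selector):
    # A pod matches when every key/value in the selector is present.
    return all(labels.get(k) == v for k, v in selector.items())

def allowed_pairs(pods, policy):
    sources = [p["ip"] for p in pods.values()
               if matches(p["labels"], policy["allow_from"])]
    targets = [p["ip"] for p in pods.values()
               if matches(p["labels"], policy["target"])]
    return sorted((s, t) for s in sources for t in targets)

for src, dst in allowed_pairs(pods, policy):
    # Each pair would become a rule along the lines of:
    #   iptables -A <chain> -s <src> -d <dst> -j ACCEPT
    print(src, "->", dst)
```

In practice the real implementation also handles namespaces, port ranges, and rule updates as pods come and go, but the label-to-IP expansion above is the core idea.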

Figure 2: Azure CNI with Azure Policy Manager

Network policy with Calico

Kubernetes is also an open ecosystem, and Tigera’s Calico is well known as the first, and most widely deployed, implementation of Network Policy across cloud and on-premise environments. In addition to the base Kubernetes API, it also has a powerful extended policy model which supports a range of features such as global network policies, network sets, more flexible rule specification, the ability to run the policy enforcement agent on non-Kubernetes nodes, and application layer policy via integration with Istio. Furthermore, Tigera offers a commercial offering built on Calico, Tigera Secure, that adds a host of enterprise management, controls, and compliance features.

Given Kubernetes’ aforementioned modular networking model, you might think you could just deploy Calico for network policy along with Azure CNI, and it should all just work. Unfortunately, it is not this simple.


While Calico uses iptables for policy, it does so in a subtly different way. It expects containers to be established with separate kernel routes, and it enforces the policies that apply to each container on that specific container’s virtual ethernet interface. This has the advantage that all container-to-container communications are identical (always a layer 3 routed hop, whether internal to the host or across the underlying network), and security policies are more narrowly applied to the specific container’s context.

To make Azure CNI compatible with the way Calico works, we added a new intra-node routing capability to the CNI, which we call ‘transparent’ mode. When configured to run in this mode, Azure CNI sets up local routes for containers instead of creating a virtual bridge device. This is shown in Figure 3.
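The effect is visible in the node’s routing table: in transparent mode each container is reached via its own host route rather than a bridge port. The addresses and interface names below are hypothetical:

```
$ ip route
10.240.0.10 dev veth1a2b3c scope link   # container A
10.240.0.20 dev veth4d5e6f scope link   # container B
```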

Figure 3: Azure CNI with Calico Network Policy

Onward and upstream

A Kubernetes cluster with the enhanced Azure CNI and Calico policies can be created using AKS-Engine by specifying the following configuration in the cluster definition file:

"properties": {
  "orchestratorProfile": {
    "orchestratorType": "Kubernetes",
    "kubernetesConfig": {
      "networkPolicy": "calico",
      "networkPlugin": "azure"
    }
  }
}

These options have also been integrated into AKS itself, enabling you to provision a cluster with Azure networking and Calico network policy by simply specifying the options --network-plugin azure --network-policy calico at cluster create time.
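For example, a full create command might look like the following (the resource group and cluster names here are placeholders):

```
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --network-policy calico
```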

Find more information by visiting our documentation, “Azure Kubernetes network policies overview.”
Quelle: Azure

A First Look at Docker Desktop Enterprise

Delivered as part of Docker Enterprise 3.0, Docker Desktop Enterprise is a new developer tool that extends the Docker Enterprise Platform to developers’ desktops, improving developer productivity while accelerating time-to-market for new applications.
It is the only enterprise-ready desktop platform that enables IT organizations to automate the delivery of legacy and modern applications using an agile operating model with integrated security. With work performed locally, developers can leverage a rapid feedback loop before pushing code or Docker images to shared servers or continuous integration infrastructure.

Imagine you are a developer and your organization has a production-ready environment running Docker Enterprise. To ensure that you don’t use any APIs or incompatible features that will break when you push an application to production, you would like to be certain your working environment exactly matches what’s running in Docker Enterprise production systems. This is where Docker Enterprise 3.0 and Docker Desktop Enterprise come in. It is essentially a cohesive extension of the Docker Enterprise container platform that runs right on developers’ systems. Developers code and test locally using the same tools they use today, and Docker Desktop Enterprise helps them quickly iterate and then produce a containerized service that is ready for their production Docker Enterprise clusters.
The Enterprise-Ready Solution for Dev & Ops
Docker Desktop Enterprise is a perfect devbed for enterprise developers. It allows developers to select from their favourite frameworks, languages, and IDEs. Because of those options, it can also help organizations target every platform. So basically, your organization can provide application templates that include production-approved application configurations, and developers can take those templates and quickly replicate them right from their desktop and begin coding. With the Docker Desktop Enterprise graphical user interface (GUI), developers are no longer required to know lower-level Docker commands and can auto-generate Docker artifacts.

With Docker Desktop Enterprise, IT organizations can easily distribute and manage Docker Desktop Enterprise across teams of developers using their current  third-party endpoint management solution.
A Flawless Integration with 3rd Party Developer Tools

Docker Desktop Enterprise is designed to integrate with existing development environments (IDEs) such as Visual Studio and IntelliJ. And with support for defined application templates, Docker Desktop Enterprise allows organizations to specify the look and feel of their applications.
Exclusive features of Docker Desktop Enterprise

Let us talk about the various features of Docker Desktop Enterprise 2.0, discussed below:

Version selection: Configurable version packs ensure the local instance of Docker Desktop Enterprise is a precise copy of the production environment where applications are deployed, and developers can switch between versions of Docker and Kubernetes with a single click.

Docker and Kubernetes versions match UCP cluster versions.
Administrator command line tool simplifies version pack installation.

Application Designer: Application Designer provides a library of application and service templates to help developers quickly create new container-based applications. Application templates allow you to choose a technology stack and focus on the business logic and code, and require only minimal Docker syntax knowledge.

Template support includes .NET, Java Spring, and more.
Single service and multi-services applications are supported.
Deployable to Kubernetes or Swarm orchestrators
Supports Docker App format for multi-environment, parameterized application deployments and application bundling

Device management:

The Docker Desktop Enterprise installer is available as standard MSI (Win) and PKG (Mac) downloads, which allows administrators to script an installation across many developer workstations.

Administrative control:

IT organizations can specify and lock configuration parameters for creation of a standardized development environment, including disabling drive sharing and limiting version pack installations. Developers can then run commands using the command line without worrying about configuration settings.

In this blog post, we will look at two of the promising features of Docker Desktop Enterprise 2.0:

Application Designer 
Version packs

Installing Docker Desktop Enterprise
Docker Desktop Enterprise is available for both Microsoft Windows and macOS. You can download it via the links below:

Windows
Mac

The above installer includes:

Docker Engine,
Docker CLI client, and
Docker Compose

Please note that you will have to clean up Docker Desktop Community Edition before you install the Enterprise edition. Also, the Enterprise version requires a separate license key, which you need to buy from Docker, Inc.
To install Docker Desktop Enterprise, double-click the .msi or .pkg file and initiate the Setup wizard:

Click “Next” to proceed further and accept the End-User license agreement as shown below:

Click “Next” to proceed with the installation.

Once installed, you will see Docker Desktop icon on the Windows Desktop as shown below:

License file
As stated earlier, to use Docker Desktop Enterprise, you must purchase a Docker Desktop Enterprise license file from Docker, Inc.
The license file must be installed and placed under the following location: C:\Users\Docker\AppData\Roaming\Docker\docker_subscription.lic
If the license file is missing, you will be asked to provide it when you try to run Docker Desktop Enterprise. Once the license file is supplied, Docker Desktop Enterprise should come up flawlessly.

What’s New in Docker Desktop UI?
Docker Desktop Enterprise provides you with additional features compared to the Community edition. Right-click the whale icon in the taskbar and select “About Docker Desktop” to bring up the window below.

You can open up PowerShell to verify the Docker version that is up and running. Click on the “Settings” option to get a list of various sections like shared drives, advanced settings, network, proxies, Docker daemon, and Kubernetes.

One of the new features introduced with Docker Desktop Enterprise is the ability to start Docker Desktop automatically whenever you log in. This feature can be enabled by selecting “Start Desktop when you login” under the General tab. You can also enable automatic checking for updates here.
Docker Desktop Enterprise gives you the flexibility to pre-select the resource limits made available to Docker Engine, as shown below. Based on your system configuration and the type of application you are planning to host, you can increase or decrease the resource limit.

Docker Desktop Enterprise includes a standalone Kubernetes server that runs on your Windows laptop, so that you can test deploying your Docker workloads on Kubernetes.

Kubectl is a command-line interface for running commands against Kubernetes clusters. It comes with Docker Desktop by default, and you can verify it by running the below command:
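The exact command appears only in the original screenshot; a typical check of this kind (a sketch, assuming kubectl is on your PATH) looks like:

```
kubectl version
kubectl get nodes
```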

Running Your First Web Application
Let us try running a custom-built web application using the below command:
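The original command appears only in a screenshot; a typical invocation of this kind (the image name and port mapping here are hypothetical) would be:

```
docker run -d -p 80:80 mycustomwebapp:latest
```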

Open up the browser to verify that web page is up and running as shown below:

Application Designer

Application Designer provides a library of application and service templates to help Docker developers quickly create new Docker applications. Application templates allow you to choose a technology stack and focus on the business logic and code, and require only minimal Docker syntax knowledge.
Building a Linux-based Application Using Application Designer
In this section, I will show you how to get started with the newly introduced Application Designer feature.

Right click on whale-icon in the Taskbar and choose “Design New Application”. Once you click on it, it will open the below window:

Let us first try using the set of preconfigured applications by clicking on “Choose a template”.

Let us test drive a Linux-based application. Click on the “Linux” option and proceed further. This opens up a variety of ready-made templates as shown below:

A Spring application is also included as part of Docker Desktop Enterprise. It is basically a sample Java application with the Spring framework and a Postgres database, as shown below:

Let us go ahead and try out a sample Python/Flask application with an Nginx proxy and a MySQL database. Select the desired application template, then choose your preferred Python version and an accessible port. You can also select your preferred MySQL version and Nginx proxy. For this example, I chose Python version 3.6, MySQL 5.7, and an Nginx proxy exposed on port 80.

Click on “Continue” to build up this application stack.
Once done, click on “Run Application” to bring up your web application stack. You can see the output right there on the screen as shown below:

As shown above, you can open up the code repository in Visual Studio Code or Windows Explorer. You get options to start, stop, and restart your application stack.

To verify its functionality, let us try to open up the web application as shown below:

Cool, isn’t it?
Building Windows-based Application using Application Designer
In this section, we will see how to build a Windows-based application using the same Application Designer tool.
Before you proceed, choose “Switch to Windows containers” as shown below to allow Windows-based containers to run on your Desktop.

Right click on whale-icon in the Taskbar and choose “Design New Application”. Once you click on it, it will open the below window:

Click on “Choose a template” and select Windows this time as shown below:

Once you click on Windows, it will open up a sample ASP.NET and MS-SQL application.

Once clicked, it will show frontend and backend with option to set up desired port for your application.

I will go ahead and choose port 82 for this example. Click on “Continue” and supply your desired application name. I named it as “mywinapp2”. Next, click on “Scaffold” to build up your application stack.

While the application stack is coming up, you can open up Visual Studio to view files like Docker Compose, Dockerfile as shown below:

You can view logs to see what’s going on in the backend. Under Application Designer, select the “Debug” option and then “View Logs” to view the real-time logs.

By now, you should be able to access your application via web browser.
Version Packs
Docker Desktop Enterprise 2.0 is bundled with the default version pack Enterprise 2.1, which includes Docker Engine 18.09 and Kubernetes 1.11.5. You can download it via this link.

If you want to use a different version of Docker Engine and Kubernetes for development work, you can install version pack Enterprise 2.0, available for download via this link.
Version packs are installed manually or, for administrators, by using the command line tool. Once installed, version packs can be selected for use in the Docker Desktop Enterprise menu.
Installing Additional Version Packs
When you install Docker Desktop Enterprise, the tool is installed under C:\Program Files\Docker\Desktop. Version packs can be installed by double-clicking a .ddvp file; ensure that Docker Desktop is stopped before installing a version pack. The easiest way to add a version pack, however, is through the CLI. Open up Windows PowerShell via “Run as Administrator” and run the below command:
'dockerdesktop-admin.exe' -InstallVersionPack='C:\Program Files\Docker\Docker\enterprise-2.0.ddvp'
Uninstalling Version Packs
Uninstalling a version pack is a matter of a single-line command as shown below:
'dockerdesktop-admin.exe' -UninstallVersionPack <VersionPack>
In my next blog post, I will show you how to leverage the Application Designer tool to build a custom application.
References:

https://goto.docker.com/Docker-Desktop-Enterprise.html

https://blog.docker.com/2018/12/introducing-desktop-enterprise/

The post A First Look at Docker Desktop Enterprise appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Magazine Luiza: How we transformed our e-commerce platform with Apigee, Firebase, and GCP

Editor's note: Today’s post comes from Andre Fatala, chief technology officer at Brazilian retailer Magazine Luiza. Apigee, Firebase, and Google Cloud Platform (GCP) have helped this 60-year-old company become one of the most successful e-commerce operations in Brazil.

Founded in 1957, Magazine Luiza, or Magalu, is a technology and logistics company focused on the retail sector. In 2018 we posted 60% growth year over year in e-commerce sales, reaching 7 billion Brazilian Real (nearly 2 billion USD), with e-commerce contributing 35.7% of our total sales.

From supply chain issues to economic fluctuations, the retail industry in Brazil is complex to say the least. But adopting mobile e-commerce presented us with an entirely new challenge, one that we had to respond to quickly to remain competitive. Brazilian e-commerce players, along with global internet giants, threatened to make inroads into a market in which we have held a leading position for decades. To help achieve our goals, we employed Google Cloud products like the Apigee API management platform, GCP, Firebase, and G Suite.

In 2013 we had an e-commerce platform, and even a library of APIs, but those APIs were accessing an overstretched backend application built with 150,000 lines of code. Deployment of new APIs was slow, we were burdened with undesirable dependencies, and we faced scalability challenges as well as distributed responsibilities across siloed teams.

Knowing we were under threat from competitors, our then-chief operating officer (and current CEO) Frederico Trajano put me in charge of a small team of developers. The team was walled off from IT governance processes and roadblocks from the greater organization, and given the keys to the company’s entire e-commerce operations. That’s around the time when we started using the Apigee API management platform.

Apigee helped us to decouple our backend systems from the front end so it was easier and faster for my team to iterate on new apps while other teams maintained our legacy systems of record. Our new approach accelerated mobile application development, and Firebase has played a big role in this. We started using Firebase soon after we learned about it at Google I/O in 2016. Firebase helped to reduce the complexity of building the apps that we need to reach our customers. We can quickly publish and test new features, and Firebase Crashlytics helps us keep our apps stable and users happy.

Last year, after GCP launched its region here in Brazil, we began deploying workloads onto GCP. We were pleased with the latency of GCP: there just wasn’t any. The speed was notable, and this is critical in e-commerce applications. We’d already been using Kubernetes for some time as it’s especially helpful with our multi-cloud strategy. Migrating all our data onto GCP was simple because we used Kubernetes along with our own open-source PaaS.

Believing that we’d have better performance and stability to handle Black Friday traffic running on GCP, we made the decision to migrate 113 apps in less than 60 days before Black Friday. It was a move that paid off: 2018 was our biggest Black Friday ever, and we saw levels of API traffic that were dramatically higher than before. Apigee helped us with our execution, meeting customer demand across all our platforms, with visibility across all of our API and application activity.

As a result of these successes, we now have plans to migrate our entire e-commerce platform onto GCP, and our big data team is moving away from our Hadoop environment to an architecture that uses GCP managed services.

With this newfound ease and speed of spinning up new services and customer experiences and adjusting existing ones, everyone is able to work in small teams of five or six people that take care of segments of an application, whether it’s online checkout, physical store checkout, or order management. We work much more like a software company than a retail company now.

Our approach, powered by Google Cloud technologies, enabled us to expand our e-commerce strategy to third-party sellers and create a new digital marketplace. Other merchants can easily join this ecosystem via our API platform. Today, we support more than 3,300 sellers and offer 4.3 million SKUs (compared to our legacy sales and distribution system, which in 2016 supported 50,000 SKUs).

Our goal has been to transform from a traditional retail company with a digital presence to a digital platform with a physical presence and a “human touch”, and now we’re much closer to that vision. People in Brazil enjoy the convenience of ordering from their computers or smartphones, but still appreciate coming into our stores to pick up their items. In fact, two-thirds of Brazilians buy this way. To make it even easier for our customers, we built 12 in-store apps that our salespeople use to make the once-slow sales process much faster; this helped us grow physical store sales 25.8% in 2018!

Google Cloud has been a great partner for us. It’s no small feat to succeed in Brazil’s constantly evolving retail environment, but we feel like we’re on the right path.

For more on Magazine Luiza, watch the Google Cloud Next ‘19 session “How Retailers Prepare for Black Friday on Google Cloud Platform.”
Quelle: Google Cloud Platform

Azure Marketplace new offers – Volume 37

We continue to expand the Azure Marketplace ecosystem. For this volume, 163 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Applications

Accela Civic Platform and Civic Applications: Accela's fast-to-implement civic applications and robust and extensible solutions platform help agencies respond to the rapid modernization of technology with SaaS solutions that offer high degrees of security, flexibility, and usability.

Actifile Guardrail-Secure Data on 0 Trust Devices: Actifile Guardrail's unique low-footprint technology enables secure usage of corporate data taken from any application or data source.

Adrenalin HCM: Human resource function is the quintessential force that enables an organization’s strongest asset to perform better and benefit themselves and the company. Reimagine your HR function with Adrenalin HCM.

Advanced Threat Protection for OneDrive: BitDam helps enterprises take full advantage of all OneDrive has to offer while delivering advanced threat protection against content-borne attacks.

AGR – Advanced Demand Planning: This modular AGR solution allows you to make more consistent planning decisions and more accurate buying decisions and helps ensure you have the right product in the right place at the right time.

agroNET – Digital Farming Management Platform: agroNET is a turnkey digital farming solution that enables smart agriculture service providers and system integrators to rapidly deploy the service tailored to the needs of farmers.

AIMSCO Azure MES/QM Platform for SME Manufacturers: With embedded navigation dashboards, displays, alerts, APIs, and BI interfaces, AIMSCO Azure MES/QM Platform users from the shop floor to the boardroom have real-time access to critical decision-making tools.

AIRA Robotics as a Service: Transform the installation of new equipment from CAPEX to OPEX as a part of a digital transformation using the AIRA digitalization system for long-term service relationships with suppliers.

Apex Portal: Use Apex Portal for supplier registration, self-service inquiry of invoice and payment status, dynamic discounting and early payments, and automated statement audits.

AppStudio: AppStudio is a suite of offerings for managing apps using a standardized methodology to ensure you are up to date and ready for the next challenge.

ArcBlock ABT Blockchain Node: ABT Blockchain Node is fully decentralized and uses ArcBlock's blockchain development platform to easily build, run, and use DApps and blockchain-ready services.

ArcGIS Enterprise 10.7: Manage, map, analyze, and share geographic information systems (GIS) data with ArcGIS Enterprise, the complete geospatial system that powers your data-driven decisions.

Area 1 Horizon Anti-Phishing Service for Office 365: Area 1 Security closes the phishing gap with a preemptive, comprehensive, and accountable anti-phishing service that seamlessly integrates with and fortifies Microsoft Office 365 security defenses.

Arquivar-GED: ArqGED is document management software that allows users to dynamically solve problems with location and traceability of information in any format (paper, digital, microfilm, etc.).

Aruba Virtual Gateway (SD-WAN): Aruba's software-defined WAN (SD-WAN) technology simplifies wide area network operations and improves application QoS to lower your total cost of ownership.

Arundo Analytics: Arundo delivers enterprise-scale machine learning and advanced analytics applications to improve operations in heavy asset industries.

Assurity Suite: The Assurity Suite platform provides assurance and control over your organization's documents, communications, investigations, compliance, information, and processes.

Atilekt.NET: Website-building platform Atilekt.NET is a friendly, flexible, and fast-growing content management system based on ASP.NET.

Axians myOperations Patch Management: Axians myOperations Server Patch Management integrates a complete management solution to simplify the rollout, monitoring, and reporting of Windows updates.

Axioma Risk: Axioma Risk is an enterprise-wide risk-management system that enables clients to obtain timely, consistent, and comparable views of risk across an entire organization and all asset classes.

Azure Analytics System Solution: BrainPad's Azure Analytics System Solution is designed for enterprises using clouds for the first time as well as companies considering sophisticated usage. This application is available only in Japanese.

Beam Communications: Communications are a fundamental element in institutional development, and Beam Communications boosts internal and external communications. This application is available only in Spanish.

Betty Blocks Platform: From mobile apps to customer portals to back-office management and everything in between, the Betty Blocks platform supports every app size and complexity.

BI-Clinical: BI-Clinical is CitiusTech’s ONC- and NCQA-certified BI and analytics platform designed to address the healthcare organization’s most critical quality reporting and decision support needs.

Bizagi Digital Business Platform: The Bizagi platform helps enterprises embrace change by improving operational efficiencies, time to market, and compliance.

Bluefish Editor on Windows Server 2019: The Bluefish software editor supports a plethora of programming languages including HTML, XHTML, CSS, XML, PHP, C, C++, JavaScript, Java, Google Go, Vala, Ada, D, SQL, Perl, ColdFusion, JSP, Python, Ruby, and Shell.

BotCore – Enterprise Chatbot Builder: BotCore is an accelerator that enables organizations to build customized conversational bots powered by artificial intelligence. It is fully deployable to Microsoft Azure and leverages many of the features available in it.

Brackets: With focused visual tools and preprocessor support, Brackets is a modern text editor that makes it easy to design in the browser. It's crafted for web designers and front-end developers.

Brackets on Windows Server 2019: With focused visual tools and preprocessor support, Brackets is a modern text editor that makes it easy to design in the browser. It's crafted for web designers and front-end developers.

bugong: The bugong platform combines leading algorithm technology with intelligent manufacturing management. This application is available only in Chinese.

Busit Application Enablement Platform: Busit Application Enablement Platform (AEP) enables fast and efficient handling of all your devices and services, regardless of the brand, manufacturer, or communication protocol.

ByCAV: ByCAV provides biometric identity validation through non-traditional channels for companies in diverse industries that require identity verification. This application is available in Spanish only in Colombia.

Camel Straw: Camel Straw is a cloud-based load testing platform that helps teams load test and analyze and improve the way their applications scale.

Celo: Celo connects healthcare professionals. From big hospitals to small clinics, Celo helps healthcare professionals communicate better.

Cirkled In – College Recruitment Platform: Cirkled In is a revolutionary, award-winning recruitment platform that helps colleges match with best-fit high school students based on students’ holistic portfolio.

Cirkled In – Student Profile & Portfolio Platform: Cirkled In is a secure, award-winning electronic portfolio platform for students designed to compile students’ achievements in seven categories from academics to sports to volunteering and more.

Cleafy Fraud Manager for Azure: Cleafy combines deterministic malware detection with passive behavioral and transactional risk analysis to protect online services against targeted attacks from compromised endpoints without affecting your users and business.

Cloud Desktop: Cloud Desktops on Microsoft Azure offers continuity and integration with the tools and applications that you already use.

Cloud iQ – Cloud Management Portal: Crayon Cloud-iQ is a self-service platform that enables you to manage cloud products (Azure, Office 365, etc.), services, and economics across multiple vendors through a single pane portal view.

Cloudneeti – Continuous Assurance SaaS: Cloudneeti SaaS enables instant visibility into security, compliance, and data privacy posture and enforces industry standards through continuous and integrated assurance aligned with the cloud-native operating model.

Collaboro – Digital Asset Management: Collaboro partners with brands, institutions, government, and advertising agencies to solve their specific digital asset management needs in a fragmented marketing and media space.

Connected Drone: Targeting power and utilities, eSmart Systems Connected Drone software utilizes deep learning to dramatically reduce utility maintenance costs and failure rates and extend asset life.

CyberVadis: By pooling and sharing analyst-validated cybersecurity audits, CyberVadis allows you to scale up your third-party risk assessment program while controlling your costs.

Data Quality Management Platform: BaseCap Analytics’ Data Quality Management Platform helps you make better business decisions by measurably increasing the quality of your greatest asset: data.

DatabeatOMNI: DatabeatOMNI provides you with everything you need to display great content, on as many screens as you want to – without complex interfaces, specialist training, or additional procurement costs.

dataDiver: dataDiver is an extended analytics tool for gaining insights into research design that is neither traditional BI nor BA. This application is available only in Japanese.

dataFerry: dataFerry is a data preparation tool that allows you to easily process data from various sources into the desired form. This application is available only in Japanese.

Dataprius Cloud: Dataprius offers a different way to work with files in the cloud, allowing you to work with company files without synchronizing, without conflicts, and with multiple users connected at the same time.

Denodo Platform 7.0 14-day Free Trial (BYOL): Denodo integrates all of your Azure data sources and your SaaS applications to deliver a standards-based data gateway, making it quick and easy for users of all skill levels to access and use your cloud-hosted data.

Descartes MacroPoint: Descartes MacroPoint consolidates logistics tracking data from carriers into a single integrated platform to meet two growing challenges: real-time freight visibility and automated capacity matching.

Digital Asset Management (DAM) Managed Application: Digital Asset Management delivers a secured and centralized repository to manage videos. It offers capabilities for advanced embed, review, approval, publishing, and distribution of videos.

Digital Fingerprints: Digital Fingerprints is a continuous authentication system based on behavioral biometrics.

DM REVOLVE – Dynamics Data Migration: DM REVOLVE is a dedicated Azure-based Dynamics end-to-end data migration solution that incorporates "Dyn-O-Matic," our specialized Dynamics automated load adaptor.

Docker Community Edition Ubuntu Bionic Beaver: Deploy Docker Community Edition with Ubuntu on Azure with this free, community-supported, DIY version of Docker on Ubuntu.

Docker Community Edition Ubuntu Xenial: Deploy Docker Community Edition with Ubuntu on Azure with this community-supported, DIY version of Docker on Ubuntu.

Dom Rock AI for Business Platform: The Dom Rock AI for business platform empowers people to make better, faster decisions informed by data. This application is available only in Portuguese.

Done.pro: Done.pro enables "Uber for X" cloud platforms, customized and tuned for your business, to provide customers with exceptional service.

eComFax: Secure Advanced Messaging Platform: Comunycarse Network Consultants eComFax is a secure, advanced messaging platform designed for compliance and mobility.

EDGE: The Edge system allows seamless operations across the UK – in both the established Scottish market and the new English market.

eJustice: The eJustice solution provides information and communication technology enablement for courts.

ekoNET – Air Quality Monitoring: ekoNET combines portable devices and cloud-based functionality to enable granular air quality monitoring indoors and outdoors.

Element AssetHub: AssetHub is a data hub connecting time series, IT, and OT to manage operational asset models.

Equinix Cloud Exchange Fabric: This software-defined interconnection solution allows you to directly, securely, and dynamically connect distributed infrastructure and digital ecosystems to your cloud service providers.

ERP Beam Education: ERP Beam Education efficiently integrates all the processes that are part of managing an educational center. This application is available only in Spanish.

Essatto Data Analytics Platform: Essatto enables more informed decision making by providing timely insights into your financial and business operations in a flexible, cost-effective application.

Event Monitor: Event Monitor is a user-friendly solution meant for security teams that are responsible for safety.

Firewall as a Service: Firewall as a Service delivers a next-generation managed internet gateway from Microsoft Azure including 24/7 support, self-service, and unlimited changes by our security engineers.

GDPR++ for Data Protection & Security: GDPR++ is an Azure-based tool that helps companies keep data protection and cyber security under control.

GEODI: GEODI helps you focus on your business by letting you share information, documents, notes, and notifications with contacts and stakeholders via mobile app or browser.

GeoServer: Make your spatial information accessible to all with this free, community-supported open source server based on Java for sharing geospatial data.

GeoServer on Windows Server 2019: Make your spatial information accessible to all with this free, community-supported open source server based on Java for sharing geospatial data.

Ghost Helm Chart: Ghost is a modern blog platform that makes publishing beautiful content to all platforms easy and fun. Built on Node.js, it comes with a simple markdown editor with preview, theming, and SEO built in.

Grafana Multi-Tier with Azure Managed DB: Grafana is an open source analytics and monitoring dashboard for over 40 data sources, including Graphite, Elasticsearch, Prometheus, MariaDB/MySQL, PostgreSQL, InfluxDB, OpenTSDB, and more.

HashiCorp Consul Helm Chart: HashiCorp Consul is a tool for discovering and configuring services in your infrastructure.

HPCBOX: HPC Cluster for STAR-CCM+: HPCBOX combines cloud infrastructure, applications, and managed services to bring supercomputer technology to your personal computer.

H-Scale: H-Scale is a modular, configurable, and scalable data integration platform that helps organizations build confidence in their data and accelerate their data strategies.

Integrated Cloud Suite: CitiusTech’s Integrated Cloud Suite is a one-stop solution that enables healthcare organizations to reduce complexity and drive a multi-cloud strategy optimally and cost-effectively.

JasperReports Helm Chart: JasperReports Server is a standalone and embeddable reporting server. It is a central information hub, with reporting and analytics that can be embedded into web and mobile applications.

Jenkins Helm Chart: Jenkins is a leading open source continuous integration and continuous delivery (CI/CD) server that enables the automation of building, testing, and shipping software projects.

Jenkins On Ubuntu Bionic Beaver: Jenkins is a simple, straightforward continuous integration tool that effortlessly distributes work across multiple devices and assists with builds, tests, and deployments.

Jenkins-Docker CE on Ubuntu Bionic Beaver: This solution takes away the hassles of setting up the installation process of Jenkins and Docker. The ready-made image integrates Jenkins-Docker to make continuous integration jobs smooth, effective, and glitch-free.

Join2ship: Join2ship is a collaborative supply chain platform designed to digitalize your receipts and deliveries.

Kafka Helm Chart: Tested to work on the EKS platform, Apache Kafka is publish-subscribe messaging rethought as a distributed commit log.

Kaleido Enterprise Blockchain SaaS: Kaleido simplifies the process of creating and operating permissioned blockchains with a seamless experience across cloud properties and geographies for all network participants.

Kubeapps Helm Chart: Kubeapps is a web-based application deployment and management tool for Kubernetes clusters.

LOOGUE FAQ: LOOGUE FAQ is an AI virtual agent that creates chatbots that support queries by creating and uploading two columns of questions and answers in Excel. This application is available only in Japanese.

Magento Helm Chart: Magento is a powerful open source e-commerce platform. Its rich feature set includes loyalty programs, product categorization, shopper filtering, promotion rules, and much more.

MariaDB Helm Chart: MariaDB is an open source, community-developed SQL database server that is widely used around the world due to its enterprise features, flexibility, and collaboration with leading tech firms.

Metrics Server Helm Chart: Metrics Server aggregates resource usage data, such as container CPU and memory usage, in a Kubernetes cluster and makes it available via the Metrics API.

MNSpro Cloud Basic: MNSpro Cloud combines the management of your school network with a learning management system, whether you use Windows, iOS, or Android devices.

MongoDB Helm Chart: MongoDB is a scalable, high-performance, open source NoSQL database written in C++.

MySQL 5.6 Secured Ubuntu Container with Antivirus: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.

MySQL 8.0 Secured Ubuntu Container with Antivirus: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.

MySQL Helm Chart: MySQL is a fast, reliable, scalable, and easy-to-use open source relational database system. MySQL Server is designed to handle mission-critical, heavy-load production applications.

NATS Helm Chart: NATS is an open source, lightweight, and high-performance messaging system. It is ideal for distributed systems and supports modern cloud architectures and pub-sub, request-reply, and queuing models.

NetApp Cloud Volumes ONTAP: NetApp Cloud Volumes ONTAP, a leading enterprise-grade storage management solution, delivers secure, proven storage management services and supports up to a capacity of 368 TB.

Node.js Helm Chart: Node.js is a runtime environment built on V8 JavaScript engine. Its event-driven, non-blocking I/O model enables the development of fast, scalable, and data-intensive server applications.

Node 6 Secured Jessie Container with Antivirus: Node.js is an open source, cross-platform JavaScript runtime environment for developing a diverse variety of tools and applications.

Odoo Helm Chart: Odoo is an open source ERP and CRM platform that can connect a wide variety of business operations such as sales, supply chain, finance, and project management.

On-Demand Mobility Services Platform: Deploy this intelligent, on-demand transportation operating system for automotive OEMs that need to run professional mobility services to embrace the new automotive era and manage the decline of vehicle ownership.

OpenCart Helm Chart: OpenCart is a free, open source e-commerce platform for online merchants. OpenCart provides a professional and reliable foundation from which to build a successful online store.

OrangeHRM Helm Chart: OrangeHRM is a feature-rich, intuitive HR management system that offers a wealth of modules to suit the needs of any business. This widely used system provides an essential HR management platform.

Osclass Helm Chart: Osclass allows you to easily create a classifieds site without any technical knowledge. It provides support for presenting general ads or specialized ads and is customizable, extensible, and multilingual.

ownCloud Helm Chart: ownCloud is a file storage and sharing server that is hosted in your own cloud account. Access, update, and sync your photos, files, calendars, and contacts on any device, on a platform that you own.

Paladion MDR powered by AI Platform – AI.saac: Paladion's managed detection and response, powered by our next-generation AI platform, is a managed security service that provides threat intelligence, threat hunting, security monitoring, incident analysis, and incident response.

Parse Server Helm Chart: Parse is a platform that enables users to add a scalable and powerful back end to launch a full-featured app for iOS, Android, JavaScript, Windows, Unity, and more.

Phabricator Helm Chart: Phabricator is a collection of open source web applications that help software companies build better software.

PHP 5.6 Secured Jessie-cli Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 5.6 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.0 Secured Jessie Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.0 Secured Jessie-cli Container – Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.0 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.1 Secured Jessie Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.1 Secured Jessie-cli Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.1 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.2 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

PHP 7.3 Rc Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.

phpBB Helm Chart: phpBB is a popular bulletin board that features robust messaging capabilities such as flat message structure, subforums, topic split/merge/lock, user groups, full-text search, and attachments.

PostgreSQL Helm Chart: PostgreSQL is an open source object-relational database known for reliability and data integrity. ACID-compliant, it supports foreign keys, joins, views, triggers, and stored procedures.

Project Ares: Project Ares by Circadence is an award-winning, gamified learning and assessment platform that helps cyber professionals of all levels build new skills and stay up to speed on the latest tactics.

Python Secured Jessie-slim Container – Antivirus: This image is for customers who want to deploy a self-managed Community Edition on a hardened kernel rather than a vanilla install.

Quvo: Quvo is a cloud-first, mobile-first working platform designed especially for public sector and enterprise mobile workforces.

RabbitMQ Helm Chart: RabbitMQ is a messaging broker that gives your applications a common platform to send and receive messages, and your messages a safe place to live until received.

Recordia: Smart Recording & Archiving Interactions: Recordia facilitates gathering all valuable customer interactions under one single repository in the cloud. Know how your sales, marketing, and support staff is doing.

Redis Helm Chart: Redis is an open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets, and sorted sets.

Redmine Helm Chart: Redmine is a popular open source project management and issue tracking platform that covers multiple projects and subprojects, each with its own set of users and tools, from the same place.

Secured MySQL 5.7 on Ubuntu 16.04 LTS: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.

Secured MySQL 5.7 on Ubuntu 18.04 LTS: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.

Smart Planner: Smart Planner is a web platform for the optimization of productive processes, continuous improvement, and integral management of the supply chain. This application is available only in Spanish.

SmartVM API – Improve your vendor master file: The SmartVM API vendor master cleansing, enriching, and continuous monitoring technology automates vendor master management to help you mitigate risks, eliminate costly information gaps, and improve your supplier records.

SuiteCRM Helm Chart: SuiteCRM is an open source, enterprise-grade customer relationship management (CRM) application that is a fork of the popular SugarCRM application.

Talend Cloud: Remote Engine for Azure: Talend Cloud is a unified, comprehensive, and highly scalable integration platform-as-a-service (iPaaS) that makes it easy to collect, govern, transform, and share data.

TensorFlow ResNet Helm Chart: TensorFlow ResNet is a client utility for use with TensorFlow Serving and ResNet models.

Terraform on Windows Server 2019: Terraform is used to create, change, and improve your infrastructure via declarative code.

TestLink Helm Chart: TestLink is test management software that facilitates software quality assurance. It supports test cases, test suites, test plans, test projects and user management, and stats reporting.

Tomcat Helm Chart: Tomcat is a widely adopted open source Java application and web server. Created by the Apache Software Foundation, it is lightweight and agile with a large ecosystem of add-ons.

Transfer Center: The comprehensive patient analytics and real-time reporting in Transfer Center help ensure improved care coordination, streamlined patient flow, and full regulatory compliance.

Unity Cloud: Unity is underpinned by Docker, so you can write custom full-code extensions in any language and enjoy fault tolerance, high availability, and scalability.

User Management Pack 365: User Management Pack 365 is a powerful software application that simplifies user lifecycle and identity management across Skype for Business deployments.

Visual Studio Emulator on Windows Server 2016: Visual Studio Emulator plays an important role in the edit-compile-debug cycle of your Android testing.

Webfopag – Online Payroll: Fully process payroll while meeting your business compliance rules. This application is available only in Portuguese.

WordPress Helm Chart: WordPress is one of the world's most popular blogging and content management platforms. It is powerful yet simple, and everyone from students to global corporations uses it to build beautiful, functional websites.

XAMPP: XAMPP is designed to make it easy for developers to install an Apache distribution and get started with the Apache universe.

XAMPP Windows Server 2019: XAMPP is designed to make it easy for developers to install an Apache distribution and get started with the Apache universe.

XS VM Lift & Shift with Provisioning & Metering: Modernize migration, provisioning, and automatic metering with the Beacon42 metering tool. This application is available only in Spanish.

ZooKeeper Helm Chart: ZooKeeper provides a reliable, centralized register of configuration data and services for distributed applications.

Consulting Services

360 Degree Security System: 1-Hour Briefing: This 360 Degree Security System briefing will address why antivirus solutions are obsolete, how to automatically track and block brute force attacks, and how to automatically track and block malicious activity.

Application Migration: 3-Day Assessment: Chef consultants will attend your site and assess how to use Chef Habitat to migrate a legacy app from an older platform (such as Windows Server 2008 R2 and SQL Server 2008 R2) to Azure.

Archiving & Backup Essentials: 1-Hr Briefing: Learn how to take advantage of tiered storage in Microsoft Azure to dramatically reduce your storage and backup costs and enhance your resilience.

Azure Cloud Governance 1-Day Workshop: Join this day-long cloud governance learning event designed for IT and senior leadership. Discover cloud governance, understand the main concepts, and learn about what you can do to give your business an advantage.

Azure Data Centre Modernization: 3-Day Assessment: This Azure assessment will provide you with an understanding of what's possible for your business with a business case for migration that includes timing and cost estimates.

Azure Maturity: 4-Week Assessment: The Azure Maturity assessment aims at estimating the maturity of your organization (strengths and weaknesses) and building a roadmap that will allow you to make your cloud journey a success.

Azure: 5-Day Enterprise Scaffold Workshop: This workshop provides training, processes, and security settings to scale up and optimize the adoption of Azure by removing blockers to scale and introducing processes to scale safely and efficiently.

BizTalk to Azure Migration Assessment – 2 Day: This assessment will provide you with detailed guidance on how you can successfully move your BizTalk applications to Azure Integration Services running in the cloud.

Business Continuity System: 1 Hour Briefing: This briefing is for every IT director who wants to minimize downtime with dependable recovery, reduce infrastructure costs, or easily run disaster recovery drills without affecting ongoing replication.

Data Centre Migration Essentials: 1-Hr Briefing: Identify your migration options and uncover the best ROI opportunities in migrating your apps, data, and/or infrastructure to Microsoft Azure.

Data Compliance Monitoring – 3 Week Assessment: The CTO Boost team will work closely with your risk and compliance stakeholders to assess your compliance strategy and build a plan toward compliance automation.

Databricks 5 Day Data Engineering PoC: We will work with your development team to demonstrate the performance, scale, and reduced complexity that Azure Databricks can offer your business.

Email Compliance Essentials: 1-Hr Briefing: Discover how you can use Azure to provide email journaling, retention management, and e-discovery to meet your email compliance needs.

Legacy App Migration – 8-Week Assessment and Design: After investigating your legacy apps, we deliver a roadmap for your Azure cloud journey. Additionally, we design a modern user experience (UX) leveraging the latest usability and distributed workforce techniques.

Modern Data Architecture: 1-Hour Assessment: During this session we will discuss the different components that make up a modern data architecture to assess whether it is right for you and how Data Thirst could help you deliver a successful data platform that uses it.

Win/SQL 2008 EOL to Azure: 5-Day Assessment: This free assessment is focused on applications running on end-of-support Windows and SQL Server 2008 products and provides a detailed upgrade and migration plan to Microsoft Azure.

Windows/SQL 2008 to Azure: 1 Week Implementation: Need an efficient path forward for applications based on Windows or SQL Server 2008? This 1-week implementation provides a data-driven migration of your Windows or SQL workload to Microsoft Azure.

Source: Azure

Azure IoT Hub message enrichment simplifies downstream processing of your data

We just released a new capability that lets you enrich messages egressed from Azure IoT Hub to other services. Azure IoT Hub provides an out-of-the-box capability to automatically deliver messages to different services and is built to handle billions of messages from your IoT devices. Messages carry important information that enables various workflows throughout the IoT solution. Message enrichments simplify post-processing of your data and can reduce the cost of calling device twin APIs for information. This capability allows you to stamp information onto your messages, such as details from your device twin, your IoT hub name, or any static property you want to add.

A message enrichment has three key elements: the key name for the enrichment, the value of the enrichment key, and the endpoints that the enrichment applies to. Message enrichments are added to the IoT Hub message as application properties. You can add up to 10 enrichments per IoT hub on the standard and basic tiers, and up to two on the free tier. Enrichments can be applied to messages going to the built-in endpoint or to messages routed to custom endpoints such as Azure Blob storage, Event Hubs, Service Bus queues, and Service Bus topics. Each enrichment has a key that can be any string, and a value that can be a path to a device twin property (for example, $twin.tags.field), the name of the IoT hub sending the message ($iothubname), or any static value (for example, myapplicationId).
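The resolution rules above can be sketched in a few lines of Python. This is a simplified illustration of the behavior described in this section, not the actual IoT Hub implementation; the device twin and message shapes are assumptions for the example:

```python
# Illustrative sketch of enrichment resolution (not the IoT Hub implementation).
# A value of "$iothubname" resolves to the hub name, a value starting with
# "$twin." is looked up in the device twin, and anything else is stamped
# as a static string.

def resolve_enrichment(value, twin, hub_name):
    if value == "$iothubname":
        return hub_name
    if value.startswith("$twin."):
        node = twin
        for part in value[len("$twin."):].split("."):
            node = node[part]  # walk the twin document, e.g. tags -> location
        return node
    return value  # static value, stamped as-is

def apply_enrichments(enrichments, endpoint, message_props, twin, hub_name):
    """Stamp matching enrichments onto a message's application properties."""
    for e in enrichments:
        if endpoint in e["endpointNames"]:
            message_props[e["key"]] = resolve_enrichment(
                e["value"], twin, hub_name
            )
    return message_props

# Example: a device twin with a location tag, enriched on the "events" endpoint.
twin = {"tags": {"location": "Seattle"}}
enrichments = [
    {"key": "Device-Location", "value": "$twin.tags.location",
     "endpointNames": ["events"]},
    {"key": "Iot-Hub-Name", "value": "$iothubname",
     "endpointNames": ["events"]},
]
props = apply_enrichments(enrichments, "events", {}, twin, "myhub")
# props now holds {"Device-Location": "Seattle", "Iot-Hub-Name": "myhub"}
```

Note how an enrichment only applies to endpoints listed in its endpointNames; a message routed to a different endpoint is left unstamped.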

You can also use the IoT Hub Create or Update REST API and add enrichments as part of the RoutingProperties. For example:

"routing": {
  "enrichments": [
    {
      "key": "appId",
      "value": "myApp",
      "endpointNames": ["events"]
    },
    {
      "key": "Iot-Hub-Name",
      "value": "$iothubname",
      "endpointNames": ["events"]
    },
    {
      "key": "Device-Location",
      "value": "$twin.tags.location",
      "endpointNames": ["events"]
    }
  ],
  "endpoints": {
    "serviceBusQueues": [],
    "serviceBusTopics": [],
    "eventHubs": [],
    "storageContainers": []
  },
  "routes": [
    {
      "name": "myfirstroute",
      "source": "DeviceMessages",
      "condition": "true",
      "endpointNames": ["events"],
      "isEnabled": true
    }
  ],
  "fallbackRoute": {
    "name": "$fallback",
    "source": "DeviceMessages",
    "condition": "true",
    "endpointNames": ["events"],
    "isEnabled": true
  }
}
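A payload like the one above can also be assembled programmatically before the Create or Update call. The helper below is a hypothetical sketch, not part of any Azure SDK; it only makes the payload shape and the per-tier enrichment cap from this post explicit:

```python
# Hypothetical helper for building the "routing" fragment of an IoT Hub
# Create or Update request body. Illustrative only; not an Azure SDK API.

MAX_ENRICHMENTS = 10  # standard/basic tier limit (2 on the free tier)

def build_routing(enrichments, routes):
    """Assemble a routing payload, validating each enrichment's shape."""
    if len(enrichments) > MAX_ENRICHMENTS:
        raise ValueError("too many enrichments for this tier")
    for e in enrichments:
        missing = {"key", "value", "endpointNames"} - e.keys()
        if missing:
            raise ValueError(f"enrichment missing fields: {missing}")
    return {
        "enrichments": enrichments,
        "routes": routes,
        "fallbackRoute": {
            "name": "$fallback",
            "source": "DeviceMessages",
            "condition": "true",
            "endpointNames": ["events"],
            "isEnabled": True,
        },
    }

# Example: one enrichment and one route, mirroring the JSON above.
routing = build_routing(
    [{"key": "appId", "value": "myApp", "endpointNames": ["events"]}],
    [{"name": "myfirstroute", "source": "DeviceMessages",
      "condition": "true", "endpointNames": ["events"], "isEnabled": True}],
)
```

Validating the three required fields up front surfaces a malformed enrichment locally instead of as a rejected REST call.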

This feature is available for preview in all public regions except East US, West US, and West Europe. We are excited for you to try this capability and build more streamlined IoT solutions for your business. Try this tutorial to get started.

We would love to hear more about your experiences with the preview and get your feedback! Are there other capabilities in IoT Hub that you would like to see? Please continue to submit your suggestions through the Azure IoT User Voice forum.
Source: Azure