Microsoft Azure available from new cloud regions in Switzerland

UBS Group, Swiss Re Group, Swisscom, and others turn to Microsoft for their digital transformations

Today, we’re announcing the availability of Azure from our new cloud regions in Switzerland. These new regions and our ongoing global expansion are in response to customer demand as more industry leaders choose Microsoft’s cloud services to further their digital transformations. As we enter new markets, we work to address scenarios where data residency is of critical importance, especially for highly regulated industries seeking the compliance standards and extensive security offered by Azure.

Additionally, Office 365—the world’s leading cloud-based productivity solution—and Dynamics 365 and Power Platform—the next generation of intelligent business applications and tools—will be offered from these new cloud regions to advance even more customers on their cloud journeys.  

Trusted Microsoft cloud services

Microsoft cloud services delivered from a given geography, such as our new regions in Switzerland, offer scalable, highly available, and resilient cloud services while helping enterprises and organizations meet their data residency, security, and compliance needs. We have deep expertise protecting data and empowering customers around the globe to meet extensive security and privacy requirements by offering the broadest set of compliance certifications and attestations in the industry.

Accelerating cloud adoption in Switzerland

In Switzerland, where we’ve been operating for 30 years, Azure is now available from new cloud datacenter regions located near Zurich and Geneva. More than 30 customer and partner organizations are already using these Azure services. Companies becoming more efficient, innovative, and productive through their usage of Azure in Switzerland include:

UBS Group, the world’s largest wealth manager, is using Microsoft Azure cloud technology to modernize many critical business applications, to leverage digital channels, and to rethink how its global workforce collaborates.
The Swiss Re Group, one of the world’s leading providers of reinsurance, insurance, and other forms of insurance-based risk transfer, has chosen us as a strategic partner and preferred public cloud provider. Through their use of technology and our partnership, Swiss Re strives to make insurance simpler and more accessible than ever.
Swisscom, the national telecommunications provider, is now offering its customers managed public cloud services delivered via our global infrastructure and new Swiss cloud regions. Swisscom will be the first Swiss telecommunications provider to offer ExpressRoute, a secure, highly available, high-performance, and private connection to Azure services.

Additional customers now taking advantage of Azure services in these new regions include BKW, City of Zug, die Mobiliar, Exploris Health, and Skyguide, to name a few.

These types of investments help us deliver on our continued commitment to serve our customers, reach new ones, and elevate their businesses through the transformative capabilities of the Microsoft Azure cloud platform.

Please contact your Microsoft representative to learn more about opportunities in Switzerland or follow this link to learn about Microsoft Azure.
Source: Azure

Latency is the new currency of the Cloud: Announcing 31 new Azure edge sites

Providing users fast and reliable access to their cloud services, apps, and content is pivotal to a business’ success.

Latency when accessing cloud-based services can be an inhibitor to cloud adoption or migration. In most cases, this is caused by commercial internet connections that aren’t tailored to today’s global cloud needs. By deploying and operating strategically placed edge sites around the globe, Microsoft dramatically accelerates performance when you access apps, content, or services such as Azure and Office 365 on the Microsoft global network.

Edge sites optimize network performance through local access points to and from the vast Microsoft global network, in many cases providing a 10x acceleration when accessing and consuming cloud-based content and services from Microsoft.
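
Which edge a user actually reaches is decided by Microsoft’s network through anycast and DNS steering, but as a simplified illustration of why local access points cut latency, the sketch below picks the geographically nearest edge site by great-circle distance. The site list and coordinates are illustrative assumptions, not the real routing logic.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical subset of edge-site coordinates: city -> (lat, lon).
EDGE_SITES = {
    "Amsterdam": (52.37, 4.90),
    "Frankfurt": (50.11, 8.68),
    "London": (51.51, -0.13),
}

def nearest_edge(lat, lon):
    """Pick the edge site with the shortest great-circle distance to the user."""
    return min(EDGE_SITES, key=lambda city: haversine_km(lat, lon, *EDGE_SITES[city]))

print(nearest_edge(48.86, 2.35))  # a user in Paris -> "London"
```

In practice, distance is only a proxy: peering quality and link congestion matter too, which is why measured latency rather than geography is the metric that counts.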

What is the network edge?

Providing faster network access alone isn’t enough; applications need intelligent services to expedite and simplify how a global audience accesses and experiences their offerings. Edge sites give application development teams increased visibility and higher availability for the services that improve how they deliver global applications.

Edge sites benefit infrastructure and development teams in multiple key areas

Improved optimization for application delivery through Azure Front Door (AFD). Microsoft recently announced AFD, which allows customers to define, manage, accelerate, and monitor global routing for web traffic, with customizations for the best performance and instant global failover for application accessibility.
An enhanced customer experience via high-bandwidth access to Azure Blob storage, web applications, and live video-on-demand streams. Azure Content Delivery Network delivers high-bandwidth content by caching objects to the consumer’s closest point of presence.
Private connectivity and dedicated performance through Azure ExpressRoute. ExpressRoute provides up to 100 gigabits per second of fully redundant bandwidth directly to the Microsoft global network at select peering locations across the globe, making connecting to and through Azure a seamless and integrated experience for customers.

New edge sites

Today, we’re announcing the addition of 31 new edge sites, bringing the total to over 150 across more than 50 countries. We’re also adding 14 new meet-me sites to Azure ExpressRoute to further enable and expand access to dedicated private connections between customers’ on-premises environments and Azure.

More than two decades of building global network infrastructure have given us a keen awareness of globally distributed edge sites and their critical role in a business’ success.

By utilizing the expanding network of edge sites, Microsoft provides more than 80 percent of global GDP with sub-30-millisecond latency. We are adding new edges every week, and our ambition is to provide this level of performance to our entire global audience.

This expansion proves its value further when workloads move to the cloud or when Microsoft cloud services such as Azure, Microsoft 365, and Xbox are used. By operating over a dedicated, premium wide-area network, our customers avoid transferring data over the public internet, which improves security, optimizes traffic, and increases performance.

New edge sites

Colombia: Bogota
Germany: Frankfurt, Munich
India: Hyderabad
Indonesia: Jakarta
Kenya: Nairobi
Netherlands: Amsterdam
New Zealand: Auckland
Nigeria: Lagos
Norway: Stavanger
United Kingdom: London
United States: Boston, Portland
Vietnam: Saigon

Upcoming edge sites

Argentina: Buenos Aires
Egypt: Cairo
Germany: Dusseldorf
Israel: Tel Aviv
Italy: Rome
Japan: Tokyo
Norway: Oslo
Switzerland: Geneva
Turkey: Istanbul
United States: Detroit, Jacksonville, Las Vegas, Minneapolis, Nashville, Phoenix, Quincy (WA), San Diego

New ExpressRoute meet-me sites

Canada: Vancouver
Colombia: Bogota
Germany: Berlin, Dusseldorf
Indonesia: Jakarta
Italy: Milan
Mexico: Queretaro (Mexico City)
Norway: Oslo, Stavanger
Switzerland: Geneva
Thailand: Bangkok
United States: Minneapolis, Phoenix, Quincy (WA)

With this latest announcement, Microsoft continues to offer cloud customers the fastest and most accessible global network, driving a competitive advantage for organizations accessing the global market and increased satisfaction for consumers.

Explore the Microsoft global network to learn about how it can benefit your organization today.
Source: Azure

Harnessing the power of the Location of Things with Azure Maps

The Internet of Things (IoT) is the beginning of accessing planetary-scale insights. With the mass adoption of IoT and the coming explosion of sensors, connectivity, and computing, humanity is on the cusp of a fully connected, intelligent world. We will be part of the generation that realizes a data-rich, algorithmically informed way of living the world has never seen. The value of this interconnectedness lies in helping people and organizations thrive. Bringing all of this information together with spatial intelligence has been challenging, to say the least. Until today.

Today, we’re unveiling a cross-Azure IoT collaboration simplifying the use of location and spatial intelligence used in conjunction with IoT messaging. The result is the means for customers to use Azure IoT services to stay better informed about their “things” in terms of space. Azure IoT customers can now implement IoT spatial analytics using Azure Maps. Providing spatial intelligence to IoT devices means greater insights into not just what’s happening, but where it’s happening.

Azure Maps provides geographic context for information and, as it pertains to IoT, geographic insights based on IoT information. Customers are using Azure Maps and Azure IoT to monitor the movement of assets and cross-reference the “things” with their location. For example, assume a truck is delivering refrigerated goods from New York City to Washington DC. A route is calculated to determine the path and duration the truck should take to deliver the goods. From the route, a geofence can be created and stored in Azure Maps. The black box tracking the vehicle would feed position data to Azure IoT Hub, which can determine whether the truck ever leaves the predetermined path. If it does, this could signal that something is wrong, since a detour could be disastrous for refrigerated goods. Notifications of detours could be set up and communicated through Azure Event Grid and sent over email, text, or a myriad of other communication mediums.

When we talk about Azure IoT, we often talk about data (from sensors) that leads to insights (when computed) that lead to actions (a result of insights). With the Location of Things, we’re now talking about data from sensors that leads to insights, to actions, and to where those actions are needed. Knowing where to take action has massive implications for cost efficiency and time management. When you know where you have issues or opportunities, you can make informed decisions about where to deploy resources or inventory, or where to withdraw them. Run this over time, with enough data, and you have artificial intelligence you could deploy at the edge to help with real-time decision making. With enough data coming in fast enough, you’d be making decisions quickly enough to predict future opportunities and issues, and where to deploy resources before you need them.

Location is a powerful component of providing insights. If you have a means of providing location via your IoT messages, you can start doing so immediately. If you don’t have location natively, you’d be surprised how easily you can associate location with your sensors and devices: reverse IP lookup, Wi-Fi positioning, and cell tower triangulation all provide a means of getting location into your IoT messages. Get that location data into the cloud and start gaining spatial insights today.
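
If your devices can obtain a fix from any of these sources, attaching it to the telemetry payload is straightforward. The sketch below uses a hypothetical message schema (field names are assumptions, not a standard) to show one way of including an optional location block:

```python
import json
from datetime import datetime, timezone

def make_telemetry(device_id, temperature_c, lat=None, lon=None, source=None):
    """Build a telemetry payload; attach location when a positioning source supplied one."""
    msg = {
        "deviceId": device_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "temperature": temperature_c,
    }
    if lat is not None and lon is not None:
        # `source` records how the fix was obtained: GPS, Wi-Fi,
        # cell tower triangulation, reverse IP lookup, and so on.
        msg["location"] = {"lat": lat, "lon": lon, "source": source or "unknown"}
    return json.dumps(msg)

payload = make_telemetry("truck-042", 4.5, lat=40.71, lon=-74.01, source="gps")
print(payload)
```

Once messages carry a location block like this, spatial analytics downstream can key off it without any device-side changes.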
Source: Azure

Azure Load Balancer becomes more efficient

Azure introduced an advanced, more efficient Load Balancer platform in late 2017. This platform adds a whole new set of abilities for customer workloads using the new Standard Load Balancer. One of the key additions the new Load Balancer platform brings is simplified, more predictable, and more efficient outbound connectivity management.

While already integrated with Standard Load Balancer, we are now bringing this advantage to the rest of Azure deployments. In this blog, we will explain what it is and how it makes life better for all our consumers. An important change that we want to focus on is the outbound connectivity behavior before and after platform integration, as this is a very important design point for our customers.

Load Balancer and Source NAT

Azure deployments use one or more of three scenarios for outbound connectivity, depending on the customer’s deployment model and the resources utilized and configured. Azure uses Source Network Address Translation (SNAT) to enable these scenarios. When multiple private IP addresses or roles share the same public IP (a public IP address assigned to the Load Balancer, used for outbound rules, or a public IP address automatically assigned to standalone virtual machines), Azure uses port masquerading SNAT (PAT) to translate private IP addresses to public IP addresses using the ephemeral ports of the public IP address. PAT does not apply when Instance Level Public IP addresses (ILPIP) are assigned.

For cases where multiple instances share a public IP address, each instance behind an Azure Load Balancer VIP is pre-allocated a fixed number of ephemeral ports used for PAT (SNAT ports), needed for masquerading outbound flows. The number of pre-allocated ports per instance is determined by the size of the backend pool; see the SNAT algorithm section for details.
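
As a rough illustration of how pre-allocation scales with pool size, the sketch below encodes the tier table from the Azure Load Balancer documentation as it stood at the time of writing; treat the exact numbers as assumptions and check the current documentation before relying on them.

```python
def default_snat_ports(pool_size):
    """Default SNAT ports pre-allocated per instance for a given backend pool size.

    Tiers follow the Azure Load Balancer documentation at the time of writing;
    verify against the current docs before depending on these numbers.
    """
    tiers = [(50, 1024), (100, 512), (200, 256), (400, 128), (800, 64), (1000, 32)]
    for max_pool_size, ports in tiers:
        if pool_size <= max_pool_size:
            return ports
    raise ValueError("backend pools above 1000 instances are not supported")

print(default_snat_ports(10))   # 1024
print(default_snat_ports(250))  # 128
```

Note the trade-off this encodes: the larger the backend pool, the fewer SNAT ports each instance receives, because the public IP’s ephemeral port range is shared.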

Differences between legacy and new SNAT algorithms

The platform improvements also involved improvements in the way the SNAT algorithm works in Azure. The comparison below sets these allocation modes and their properties side by side.

 
Legacy SNAT port allocation (Basic SKU deployments created before September 2017)

Pre-allocation: 160 ports per instance (a smaller number for tenants larger than 300 instances).
Max ports: no ceiling. A small number of additional ports is allocated dynamically, on demand, until all are exhausted. No throttling of requests.
Scale up: port re-allocation is done. Existing connections might drop on re-allocation.
Scale down: port re-allocation is done.
Drawbacks: noisy neighbors can consume all ports and starve the remaining instances and tenants, and port allocation is nearly impossible to manage without any throttling.

New SNAT port allocation (Basic SKU deployments created after September 2017, and all Standard SKU deployments)

Pre-allocation: determined by the size of the backend pool and its boundaries; for the allocation tiers, visit SNAT port pre-allocation. If outbound rules are used, the pre-allocation equals the ports defined in the outbound rules; once those ports are exhausted on an instance, no additional SNAT ports are allocated to it.
Max ports: all available SNAT ports are allocated dynamically, on demand. Some throttling of requests is applied (per instance, per second).
Scale up: static SNAT ports are always allocated to the new instance. If the backend pool crosses a size boundary, or ports are exhausted, port re-allocation is done, and existing connections might drop on re-allocation.
Scale down: if the backend pool crosses a size boundary, port re-allocation is done to allocate additional ports to all instances.
Benefits: much better customization and control over SNAT port allocation, higher pre-allocation that covers the majority of customer scenarios, and highly predictable port allocation and application behavior.

Platform Integration & impact on SNAT port allocation

We’re working on the integration of the two platforms to extend reliability and efficiency and enable capabilities like telemetry and SKU upgrade for the customers. As a result of this integration, all the users across Azure will be moved to the new SNAT port allocation algorithm. This integration exercise is in progress and expected to finish before Spring 2020.

What type of SNAT allocation do I get after platform integration?

Let’s categorize these into different scenarios:

Legacy SNAT port allocation is the older mode of SNAT port allocation and is used by deployments made before September 2017. This mode statically allocates a small number of SNAT ports (160) to instances behind a Load Balancer, and relies on dynamic on-demand allocation once those ports are exhausted.

After platform integration, these deployments will be moved to the new SNAT allocation algorithm described above. However, after migration we’ll ensure a port allocation in the new platform equal to the maximum of the static and dynamic port allocations the deployment had in the older platform.

The new SNAT port allocation mode in the older platform was introduced in early 2018. This mode is the same as the new SNAT port allocation mode described above.

After platform integration, these deployments will remain unchanged, ensuring the preservation of SNAT port allocation from the older platform.

How does it impact my services or my existing outbound flows?

In the majority of cases, where instances consume fewer than the default pre-allocated SNAT ports, there will be no impact to existing flows.
In a small number of customer deployments that use a significantly higher number of SNAT ports (received via dynamic allocation), there might be a temporary drop of a portion of flows that depend on additional dynamic port allocation. This should auto-correct within a few seconds.

What should I do right now?

Review and familiarize yourself with the scenarios and patterns described in Managing SNAT port exhaustion for guidance on how to design for reliable and scalable scenarios.

How do I ensure no disruption for upcoming critical period?

The platform integration and the resulting port allocation algorithm are an Azure platform-level change. However, we understand that you are running critical production workloads in Azure and want to ensure that changes of this kind are not implemented during critical periods, avoiding any service disruption. In such scenarios, please create a Load Balancer support case from the portal with your deployment information, and we’ll work with you to ensure no disruption to your services.
Source: Azure

Azure and VMware innovation and momentum

Since announcing Azure VMware Solutions at Dell Technologies World this spring, we’ve been energized by the positive feedback we’ve received from our partners and customers who are beginning to move their VMware workloads to Azure. One of these customers is Lucky Brand, a leading retailer that is embracing digital transformation while staying true to its rich heritage. As part of their broader strategy to leverage the innovation possible in the cloud, Lucky Brand is transitioning several VMware workloads to Azure.

“We’re seeing great initial ROI with Azure VMware Solutions. We chose Microsoft Azure as our strategic cloud platform and decided to dramatically reduce our AWS footprint and 3rd Party co-located data centers. We have a significant VMware environment footprint for many of our on-premises business applications.

The strategy has allowed us to become more data driven and allow our merchants and finance analysts the ability to uncover results quickly and rapidly with all the data in a central cloud platform providing great benefits for us in the competitive retail landscape. Utilizing Microsoft Azure and VMware we leverage a scalable cloud architecture and VMware to virtualize and manage the computing resources and applications in Azure in a dynamic business environment.

Since May, we’ve been successfully leveraging these applications on the Azure VMware Solution by CloudSimple platform. We are impressed with the performance, ease of use and the level of support we have received by Microsoft and its partners.” 

Kevin Nehring, CTO, Lucky Brand

Expanding to more regions worldwide and adding new capabilities

Based on customer demand, we are excited to announce that we will expand Azure VMware Solutions to a total of eight regions across the US, Western Europe, and Asia Pacific by end of year.

In addition to expanding to more regions, we are continuing to add new capabilities to Azure VMware Solutions and deliver seamless integration with native Azure services. One example is how we’re expanding the supported Azure VMware Solutions storage options to include Azure NetApp Files by the end of the year. This new capability will allow IT organizations to more easily run storage intensive workloads on Azure VMware Solutions. We are committed to continuously innovating and delivering capabilities based on customer feedback.

Broadening the ecosystem

It is amazing to see the market interest in Azure VMware Solutions and the partner ecosystem building tools and capabilities that support Azure VMware Solutions customer scenarios.

RiverMeadow now supports capabilities to accelerate the migration of VMware environments on Azure VMware Solutions.

“I am thrilled about our ongoing collaboration with Microsoft. Azure VMware Solutions enable enterprise customers to get the benefit of cloud while still running their infrastructure and applications in a familiar, tried and trusted VMware environment. Add in the performance and cost benefits of VMware on Azure, and you have a complete solution. I fully expect to see substantial enterprise adoption over the short term as we work with Microsoft’s customers to help them migrate even the most complex workloads to Azure.”

Jim Jordan, President and CEO, RiverMeadow

Zerto has integrated its IT Resilience Platform with Azure VMware Solutions, delivering replication and failover capabilities between Azure VMware Solution by CloudSimple, Azure and any other Hyper-V or VMware environments, keeping the same on-premises environment configurations, and reducing the impact of disasters, logical corruptions, and ransomware infections.

"Azure VMware Solution by CloudSimple brings the familiarity and simplicity of VMware into the Azure public cloud. Every customer and IT pro using VMware will be instantly productive with minimal or no Azure competency. With Zerto, VMware customers gain immediate access to simple point-and-click disaster recovery and migration capabilities between Azure VMware Solutions, the rest of Azure, and on-premises VMware private clouds. Zerto, one of Microsoft's top ISVs and an award-winning industry leader in VMware-based disaster recovery and cloud migration, delivers native support for Azure VMware Solutions."

Peter Kerr, Vice President of Global Alliances, Zerto

Veeam Backup & Replication™ software specializes in supporting VMware vSphere environments; Veeam’s solutions will help customers meet the backup demands of organizations deploying Azure VMware Solutions.

“As a leading innovator of Cloud Data Management solutions, Veeam makes it easy for our customers to protect their virtual, physical, and cloud-based workloads regardless of where those reside. Veeam’s support for Microsoft Azure VMware Solutions by CloudSimple further enhances that position by enabling interoperability and portability across multi-cloud environments. With Veeam Backup & Replication, customers can easily migrate and protect their VMware workloads in Azure as part of a cloud-first initiative, create an Azure-based DR strategy, or simply create new Azure IaaS instances – all with the same proven Veeam solutions they already use today.”  

Ken Ringdahl, Vice President of Global Alliances Architecture, Veeam Software

Join us at VMworld

If you plan to attend VMworld this week in San Francisco, stop by our booth and witness Azure VMware Solutions in action; or sit down for a few minutes and listen to one of our mini theater presentations addressing a variety of topics such as Windows Virtual Desktop, Windows Server, and SQL Server on Azure in addition to Azure VMware Solutions!

Learn more about Azure VMware Solutions.
Source: Azure

Preview of custom content in Azure Policy guest configuration

Today we are announcing a preview of a new feature of Azure Policy. The guest configuration capability, which audits settings inside Linux and Windows virtual machines (VMs), is now ready for customers to author and publish custom content.

The guest configuration platform has been generally available for built-in content provided by Microsoft. Customers are using this platform to audit common scenarios such as who has access to their servers, what applications are installed, if certificates are up to date, and whether servers can connect to network locations.

Starting today, customers can use new tooling published to the PowerShell Gallery to author, test, and publish their own content packages both from their developer workstation and from CI/CD platforms such as Azure DevOps.

For example, if you are running an application on an Azure virtual machine that was developed by your organization, you can audit the configuration of that application in Azure and be notified when one of the VMs in your fleet is not compliant.
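
Guest configuration content itself is authored as PowerShell Desired State Configuration, but the audit it performs boils down to comparing a machine’s actual state against a desired baseline. The Python sketch below illustrates that comparison with hypothetical setting names; it is not the guest configuration API.

```python
def audit_settings(desired, actual):
    """Compare a machine's actual settings against the desired baseline.

    Returns (compliant, findings), where findings lists every deviation.
    """
    findings = []
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            findings.append(f"{key}: expected {want!r}, found {have!r}")
    return (not findings, findings)

# Hypothetical baseline for an in-house application's host configuration.
baseline = {"TLSMinVersion": "1.2", "RemoteRootLogin": False}
vm_state = {"TLSMinVersion": "1.0", "RemoteRootLogin": False}

compliant, findings = audit_settings(baseline, vm_state)
print(compliant)  # False
print(findings)   # ["TLSMinVersion: expected '1.2', found '1.0'"]
```

In the real feature, a non-compliant result like this is what surfaces the VM in Azure Policy’s compliance view.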

This is also an important milestone for compliance teams who need to audit configuration baselines. There is already a built-in policy to audit Windows machines using Microsoft’s recommended security configuration baseline. Custom content expands the scenario to content from a popular source of configuration details: Group Policy. There is tooling available to convert from Group Policy format to the desired state configuration syntax used by Azure Policy guest configuration. Group Policy is a common format used by organizations that publish regulatory standards, and a popular tool for enterprise organizations to manage servers in private datacenters.

Finally, customers that are publishing custom content packages can include third party tooling. Many customers have existing tools used for performing audits of settings inside virtual machines before they are released to production. As an example, the gcInSpec module is published as an open source project with maintainers from Microsoft and Chef. Customers can include this module in their content package to audit Windows virtual machines using their existing investment in Chef InSpec.

For more information, and to get started using custom content in Azure Policy guest configuration see the documentation page ”How to create Guest Configuration policies.”
Source: Azure

Plan migration of your Hyper-V servers using Azure Migrate Server Assessment

Azure Migrate is focused on streamlining your migration journey to Azure. We recently announced the evolution of Azure Migrate, which provides a streamlined, comprehensive portfolio of Microsoft and partner tools to meet migration needs, all in one place. An important capability included in this release is upgrades to Server Assessment for at-scale assessments of VMware and Hyper-V virtual machines (VMs).

This is the first in a series of blogs about the new capabilities in Azure Migrate. In this post, I will talk about capabilities in Server Assessment that help you plan for migration of Hyper-V servers. This capability is now generally available as part of the Server Assessment feature of Azure Migrate. After assessing your servers for migration, you can migrate your servers using Microsoft’s Server Migration solution available on Azure Migrate. You can get started right away by creating an Azure Migrate project.

Server Assessment previously supported assessment of VMware VMs for migration to Azure. We’ve now included Azure suitability analysis, migration cost planning, performance-based rightsizing, and application dependency analysis for Hyper-V VMs. You can now plan at scale, assessing up to 35,000 Hyper-V servers in one Azure Migrate project. If you use VMware as well, you can discover and assess both Hyper-V and VMware servers in the same Azure Migrate project. You can create groups of servers, assess by group, and refine the groups further using application dependency information.

Azure suitability analysis

The assessment determines whether a given server can be migrated as-is to Azure. Azure support is checked for each server discovered. If it is found that a server is not ready to be migrated, remediation guidance is automatically provided. You can customize your assessment and regenerate the assessment reports. You can apply subscription offers and reserved instance pricing on the cost estimates. You can also generate a cost estimate by choosing a VM series of your choice, and specify the uptime of the workloads you will run in Azure.

Cost estimation and sizing

Assessment reports provide detailed cost estimates. You can optimize for cost using performance-based rightsizing assessments: the performance data of your on-premises servers is taken into consideration to recommend an appropriate Azure VM and disk SKU. This helps to optimize and right-size on cost as you migrate servers that might be over-provisioned in your on-premises data center.
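
As an illustration of what performance-based rightsizing does, the sketch below picks the smallest VM size whose capacity covers observed peak usage plus headroom. The size catalogue, prices, and headroom factor are hypothetical, not Azure Migrate’s actual algorithm or pricing.

```python
# Hypothetical catalogue of VM sizes: (name, vCPUs, memory GiB, $/hour),
# ordered smallest to largest.
VM_SIZES = [
    ("D2s_v3", 2, 8, 0.096),
    ("D4s_v3", 4, 16, 0.192),
    ("D8s_v3", 8, 32, 0.384),
]

def rightsize(peak_vcpus_used, peak_mem_gib, headroom=1.3):
    """Smallest size whose capacity covers observed peak usage plus headroom."""
    need_cpu = peak_vcpus_used * headroom
    need_mem = peak_mem_gib * headroom
    for name, cpu, mem, price_per_hour in VM_SIZES:
        if cpu >= need_cpu and mem >= need_mem:
            return name
    return None  # no catalogue size fits; scale out or pick a larger family

# An over-provisioned on-premises server (8 cores, 32 GiB) peaking at 2.5 cores / 10 GiB
print(rightsize(2.5, 10))  # D4s_v3
```

The point of the example: sizing from *measured* peaks rather than provisioned capacity is what lets the over-provisioned 8-core server land on a 4-vCPU SKU.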

Dependency analysis

Once you have established cost estimates and migration readiness, you can go ahead and plan your migration phases. Use the dependency analysis feature to understand the dependencies between your applications. This is helpful to understand which workloads are interdependent and need to be migrated together, ensuring you do not leave critical elements behind on-premises. You can visualize the dependencies in a map or extract the dependency data in a tabular format. You can divide your servers into groups and refine the groups for migration using this feature.
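
Conceptually, grouping interdependent servers amounts to finding connected components in the dependency graph, so that everything in a component migrates in the same phase. The sketch below (with made-up server names) shows that idea; Azure Migrate’s own grouping is driven by the dependency data it collects, not this code.

```python
from collections import defaultdict

def migration_groups(dependencies):
    """Group servers into connected components of the undirected dependency graph,
    so interdependent workloads move in the same migration phase."""
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:  # iterative depth-first search
            n = stack.pop()
            if n in component:
                continue
            component.add(n)
            stack.extend(graph[n] - component)
        seen |= component
        groups.append(sorted(component))
    return sorted(groups)

# Hypothetical dependency pairs discovered by agentless/agent-based analysis.
deps = [("web1", "sql1"), ("web2", "sql1"), ("app3", "cache3")]
print(migration_groups(deps))  # [['app3', 'cache3'], ['sql1', 'web1', 'web2']]
```

Here both web front ends share a database, so all three must move together, while the third application and its cache form an independent phase.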

Assess your Hyper-V servers in four simple steps:

Create an Azure Migrate project and add the Server Assessment solution to the project.
Set up the Azure Migrate appliance and start discovery of your Hyper-V virtual machines. To set up discovery, the Hyper-V host or cluster names are required. Each appliance supports discovery of 5,000 VMs from up to 300 Hyper-V hosts. You can set up more than one appliance if required.
Once you have successfully set up discovery, create assessments and review the assessment reports.
Use the application dependency analysis features to create and refine server groups to phase your migration.
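
Given the per-appliance limits above (5,000 VMs and 300 Hyper-V hosts per appliance), a quick way to estimate how many appliances a discovery needs:

```python
import math

def appliances_needed(total_vms, total_hosts,
                      vms_per_appliance=5000, hosts_per_appliance=300):
    """Appliances required given the per-appliance discovery limits
    (5,000 VMs and 300 Hyper-V hosts per appliance, per the post above)."""
    return max(math.ceil(total_vms / vms_per_appliance),
               math.ceil(total_hosts / hosts_per_appliance))

print(appliances_needed(12000, 450))  # 3 (the VM limit dominates)
```

Whichever limit is hit first, VMs or hosts, determines the count, hence the `max`.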

Note that the inventory metadata gathered is persisted in the geography you select while creating the project. You can select a geography of your choice. Server Assessment is available today in Asia Pacific, Australia, Azure Government, Canada, Europe, India, Japan, United Kingdom, and United States geographies.

When you are ready to migrate the servers to Azure, you can use Server Migration to carry out the migration. You will be able to automatically carry over the assessment recommendations from Server Assessment into Server Migration. You can read more in our documentation “Migrate Hyper-V VMs to Azure.”

In the coming months, we will add assessment capabilities for physical servers. You will also be able to run a quick assessment by adding inventory information using a CSV file. Stay tuned!

In the upcoming blogs, we will talk about tools for scale assessments, scale migrations, and the partner integrations available in Azure Migrate.

Resources to get started

Tutorial on how to assess Hyper-V servers using the server assessment feature of Azure Migrate.
Prerequisites for assessment of Hyper-V servers.
Guide on how to plan an assessment for a large-scale environment. Each appliance supports discovery of 5,000 VMs from up to 300 Hyper-V hosts.
Tutorial on how to migrate Hyper-V servers using the Server Migration feature of Azure Migrate.

Source: Azure

IoT Plug and Play is now available in preview

Today we are announcing that IoT Plug and Play is now available in preview! At Microsoft Build in May 2019, we announced IoT Plug and Play and described how it will work seamlessly with IoT Central. We demonstrated how IoT Plug and Play simplifies device integration by enabling solution developers to connect and interact with IoT devices using device capability models defined with the Digital Twin definition language. We also announced a set of partners who have launched devices and solutions that are IoT Plug and Play enabled. You can find their IoT Plug and Play certified devices at the Azure Certified for IoT device catalog.

With today’s announcement, solution developers can start using Azure IoT Central or Azure IoT Hub to build solutions that integrate seamlessly with IoT devices enabled with IoT Plug and Play. We have also launched a new Azure Certified for IoT portal for device partners who want to streamline the device certification submission process and get their devices into the Azure IoT device catalog quickly.

This article outlines how solution developers can use IoT Plug and Play devices in their IoT solutions, and how device partners can build and certify their products to be listed in the catalog.
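The device capability models mentioned above are JSON documents written in the Digital Twin definition language (DTDL), declaring the telemetry, properties, and commands a device exposes. As a rough illustration (the model ID, interface name, and contents below are invented, and the fragment is hand-written rather than taken from the official specification on GitHub), such a model can be sketched and inspected like this:

```python
import json

# Illustrative sketch of a DTDL-style capability model for a hypothetical
# environmental sensor. The "@id" values and contents are invented for this
# example; consult the Digital Twin Definition Language specification on
# GitHub for the authoritative schema.
capability_model = {
    "@id": "urn:contoso:example_sensor:1",  # hypothetical model identity
    "@type": "CapabilityModel",
    "implements": [
        {
            "name": "environment",
            "schema": {
                "@id": "urn:contoso:environment_interface:1",
                "@type": "Interface",
                "contents": [
                    # Telemetry the device sends continuously.
                    {"@type": "Telemetry", "name": "temperature", "schema": "double"},
                    # A writable property a solution can set remotely.
                    {"@type": "Property", "name": "alarmThreshold",
                     "schema": "double", "writable": True},
                    # A command an operator can invoke, e.g. from IoT Central.
                    {"@type": "Command", "name": "reset"},
                ],
            },
        }
    ],
    "@context": "http://azureiot.com/v1/contexts/IoTModel.json",
}

def content_names(model, content_type):
    """List the names of all contents of a given @type across interfaces."""
    names = []
    for impl in model["implements"]:
        for item in impl["schema"]["contents"]:
            if item["@type"] == content_type:
                names.append(item["name"])
    return names

print(content_names(capability_model, "Telemetry"))  # ['temperature']
print(json.dumps(capability_model)[:40])
```

A solution that knows this model can render dashboards for the telemetry, expose the writable property as a setting, and surface the command to operators, which is the integration shortcut IoT Plug and Play provides.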

Faster device integration for solution developers

Azure IoT Central is a fully managed IoT Software as a Service (SaaS) offering that makes it easy to connect, monitor, and manage your IoT devices and products. Azure IoT Central simplifies the initial setup of your IoT solution and cuts the management burden, operational costs, and overhead of a typical IoT project. Azure IoT Central’s integration with IoT Plug and Play takes this one step further by allowing solution developers to integrate devices without writing any embedded code.

IoT solution developers can choose devices from a large set of IoT Plug and Play certified devices to quickly build and customize their IoT solutions end to end. Solution developers can start with a certified device from the device catalog and customize the experience for the device, such as editing display names or units. They can also add dashboards for solution operators to visualize the data; as part of this new release, developers have a broader set of visualizations to choose from. There is also the option to auto-generate dashboards and visualizations to get up and running quickly. Once the dashboards and visualizations are created, solution developers can run simulations based on real models from the device catalog.

Developers can also integrate with the commands and properties exposed by IoT Plug and Play capability models to enable operators to effectively manage their device fleets. IoT Central will automatically load the capability model of any certified device, enabling a true Plug and Play experience!

Another option available for developers who’d like more customization is to build IoT solutions with Azure IoT Hub and IoT Plug and Play devices. With today’s release, Azure IoT Hub now supports RESTful digital twin APIs that expose the capabilities of IoT Plug and Play device capability models and interfaces. Developers can set properties to configure settings like alarm thresholds, send commands for operations such as resetting a device, route telemetry, and query which devices support a specific interface. The most convenient way is to use the Azure IoT SDK for Node.js (other languages are coming soon). And all devices enabled for IoT Plug and Play in the Azure Certified for IoT device catalog will work with IoT Hub just like they work with IoT Central.
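To make the property, command, and query operations above concrete without a live IoT Hub, here is a small in-memory stand-in. The class and method names are invented for the sketch and are not the actual SDK surface (which, at this release, is the Azure IoT SDK for Node.js against IoT Hub’s RESTful digital twin APIs):

```python
# Illustrative, in-memory stand-in for the digital twin operations described
# above: set a property, invoke a command, and query devices by interface.
class FakeDigitalTwinClient:
    def __init__(self):
        # device_id -> {"interfaces": [...], "properties": {...}}
        self.twins = {}

    def register(self, device_id, interfaces):
        self.twins[device_id] = {"interfaces": list(interfaces), "properties": {}}

    def set_property(self, device_id, name, value):
        # e.g. configure an alarm threshold on the device twin
        self.twins[device_id]["properties"][name] = value

    def invoke_command(self, device_id, command):
        # e.g. ask the device to reset; here we just echo a status payload
        return {"device": device_id, "command": command, "status": "accepted"}

    def query_by_interface(self, interface_id):
        # find all devices that implement a given interface
        return [d for d, t in self.twins.items()
                if interface_id in t["interfaces"]]

client = FakeDigitalTwinClient()
client.register("sensor-01", ["urn:contoso:environment_interface:1"])
client.set_property("sensor-01", "alarmThreshold", 30.0)

print(client.query_by_interface("urn:contoso:environment_interface:1"))  # ['sensor-01']
print(client.invoke_command("sensor-01", "reset"))
```

The real APIs add authentication, twin versioning, and routing, but the shape of the interaction is the same: the model tells the solution which properties, commands, and interfaces exist, and the client operates on them by name.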

Streamlined certification process for device partners

The Azure Certified for IoT device catalog allows customers to quickly find the right Azure IoT certified device to start building IoT solutions. To help our device partners certify their products as IoT Plug and Play compatible, we have revamped and streamlined the Azure Certified for IoT program by launching a new portal and submission process. With the Azure Certified for IoT portal, device partners can define new products to be listed in the Azure Certified for IoT device catalog and specify product details such as physical dimensions, description, and geo-availability. Device partners can manage their IoT Plug and Play models in their company model repository, which limits access to their own employees and select partners, as well as in the public model repository. The portal also allows device partners to certify their products by submitting them to an automated validation process that verifies correct implementation of the Digital Twin definition language and the required interfaces.

Device partners will also benefit from investments in developer tooling to support IoT Plug and Play. The Azure IoT Device Workbench extension for VS Code adds IntelliSense for easy authoring of IoT Plug and Play device models. It also enables code generation to create C device code that implements the IoT Plug and Play model and provides the logic to connect to IoT Central, without customers having to worry about provisioning or integration with IoT Device SDKs.

The new tooling also integrates with the model repository service for seamless publishing of device models. In addition to the Azure IoT Device Workbench, device developers can use tools like the Azure IoT explorer and the Azure IoT extension for the Azure Command-Line Interface. Device code can be developed with the Azure IoT SDK for C and for Node.js.

Connect sensors on Windows and Linux gateways to Azure

If you are using a Windows or Linux gateway device and you have sensors that are already connected to the gateway, you can make these sensors available to Azure by simply editing a JSON configuration. We call this technology the IoT Plug and Play bridge. The bridge allows sensors on Windows and Linux to just work with Azure by bridging these sensors from the IoT gateway to IoT Central or IoT Hub.

On the IoT gateway device, the sensor bridge leverages OS APIs and OS plug and play capabilities to connect to downstream sensors, and uses the IoT Plug and Play APIs to communicate with IoT Central and IoT Hub on Azure. A solution builder can easily select from sensors enumerated on the IoT device and register them in IoT Central or IoT Hub. Once available in Azure, the sensors can be remotely accessed and managed.

We have native support for Modbus and for a simple serial protocol for managing and obtaining sensor data from MCUs or embedded devices, and we are continuing to add native support for other protocols such as MQTT. On Windows, we also support cameras and general device health monitoring for any device the OS can recognize (such as USB peripherals). You can extend the bridge with your own adapters to talk to other types of devices (such as I2C/SPI), and we are working on adding support for more sensors and protocols (such as HID).
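As an illustration of the "just edit a JSON configuration" idea, a bridge configuration declaring a downstream Modbus sensor might look roughly like the following. The key names here are invented for the sketch; the real schema lives in the bridge repository on GitHub.

```python
import json

# Hypothetical sketch of the kind of JSON configuration the IoT Plug and Play
# bridge consumes: declare a downstream sensor and how to reach it, and the
# bridge exposes it to IoT Central or IoT Hub. All field names are invented
# for illustration; see the bridge's GitHub repository for the real schema.
bridge_config = {
    "devices": [
        {
            "adapter": "modbus",               # native Modbus support
            "name": "environmental-sensor",
            "connection": {"port": "/dev/ttyUSB0", "baud_rate": 9600},
            # Interface the sensor should be surfaced as in Azure.
            "interface": "urn:contoso:environment_interface:1",
        }
    ]
}

config_text = json.dumps(bridge_config, indent=2)
print(config_text)
```

The point of the design is that adding a sensor is a configuration change on the gateway, not a firmware or cloud-side code change.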

Next steps

Read IoT Central documentation to learn how to build solutions with IoT Plug and Play devices.
Read the IoT Plug and Play documentation to learn how to build solutions using the Azure IoT platform.
Learn how to build and certify IoT Plug and Play devices.
View the Digital Twin Definition Language specification on GitHub.
Tune in to the Internet of Things Show deep dive on September 11.
Browse IoT Plug and Play devices on the Azure IoT Device Catalog.
See a demo of IoT Plug and Play bridge with a MODBUS environmental sensor on the Channel 9 IoT Show.
Try IoT Plug and Play bridge on GitHub.
Learn how to implement IoT spatial analytics using Azure Maps and IoT Plug and Play location schema.

Source: Azure

IRAP protected compliance from infra to SAP application layer on Azure

Australian government organizations are looking for cloud managed services providers capable of providing deployment of a platform as a service (PaaS) environment suitable for the processing, storage, and transmission of AU-PROTECTED government data that is compliant with the objectives of the Australian Government Information Security Manual (ISM) produced by the Australian Signals Directorate (ASD).

One of Australia’s largest federal agencies that is responsible for improving and maintaining finances of the state was looking to implement the Information Security Registered Assessors Program (IRAP) which is critical to safeguard sensitive information and ensure security controls around transmission, storage, and retrieval.

The Information Security Registered Assessors Program is an Australian Signals Directorate initiative to provide high-quality information and communications technology (ICT) security assessment services to the government.

The Australian Signals Directorate endorses suitably-qualified information and communications technology professionals to provide relevant security services that aim to secure broader industry and Australian government information and associated systems.

Cloud4C took up this challenge to enable this federal client on the cloud delivery platforms. Cloud4C analyzed and assessed the stringent compliance requirements within the Information Security Registered Assessors Program guidelines.

Following internal baselining, Cloud4C divided the whole assessment into three distinct categories – physical, infrastructure, and managed services. The Information Security Registered Assessors Program has stringent security controls around these three specific areas.

Cloud4C realized that the best way to meet this challenge was to partner and share responsibilities in order to achieve a demanding but successful assessment together. In April 2018, the Australian Cyber Security Centre (ACSC) announced the certification of Azure and Office 365 at the PROTECTED classification; Microsoft became the first and only public cloud provider to achieve this level of certification. Cloud4C partnered with Microsoft to deploy the SAP applications and SAP HANA database on Azure and utilized the Information Security Registered Assessors Program compliant infrastructure to enable seamless integration of native and marketplace tools and technologies on Azure.

Cloud4C identified the right Azure datacenter regions in Australia, Australia Central and Australia Central 2, which had undergone a very stringent Information Security Registered Assessors Program assessment for physical security and information and communications equipment placement.

Azure's compliance for infrastructure and disaster recovery gave Cloud4C, as a managed service provider, a tremendous head start, allowing it to focus its energies on the majority of the remaining controls, which applied solely to the cloud service provider.

The Information Security Registered Assessors Program assessment for Cloud4C involved meeting 412 high-risk controls and 19 of the most critical security requirements, distributed across 22 major categories, after excluding the controls already addressed by Azure's infrastructure and disaster recovery compliance.

Solution overview

The scope of the engagement was to configure and manage the SAP landscape onto Azure with managed services up to the SAP basis layer while maintaining the Information Security Registered Assessors Program protected classification standards for the processing, storage, and retrieval of classified information. As the engagement model is PaaS, the responsibility matrix was up to the SAP basis layer and application managed services were outside the purview of this engagement.

Platform as a service with single service level agreement and Information Security Registered Assessors Program protected classification

The proposed solution included various SAP solutions, including SAP ERP, SAP BW, SAP CRM, SAP GRC, SAP IDM, SAP Portal, SAP Solution Manager, Web Dispatcher, and Cloud Connector, with a mix of databases including SAP HANA, SAP MaxDB, and former Sybase databases. Azure Australia Central, as the primary region, and Australia Central 2, as the secondary disaster recovery region, were identified as the physical locations for building the Information Security Registered Assessors Program protected compliant environment. The proposed architecture encompassed certified virtual machine stock keeping units (SKUs) for SAP workloads; optimized storage and disk configuration; the right network SKUs with adequate protection; mechanisms for high availability, disaster recovery, backup, and monitoring; an adequate mix of native and external security tools; and, most importantly, processes and guidelines around service delivery.

The following Azure services were considered as part of the proposed architecture:

Azure Availability Sets
Azure Active Directory
Azure Privileged Identity Management
Azure Multi-Factor Authentication
Azure ExpressRoute gateway
Azure application gateway with web application firewall
Azure Load Balancer
Azure Monitor
Azure Resource Manager
Azure Security Center
Azure storage and disk encryption
Azure DDoS Protection
Azure Virtual Machines (Certified virtual machines for SAP applications and SAP HANA database)
Azure Virtual Network
Azure Network Watcher
Network security groups

Information Security Registered Assessors Program compliance and assessment process

Cloud4C navigated the accreditation framework with the help of the Information Security Registered Assessors Program assessor, who helped Cloud4C understand and implement Australian government security requirements and establish the technical feasibility of porting SAP applications and the SAP HANA database to the Information Security Registered Assessors Program protected setup on the Azure protected cloud.

The Information Security Registered Assessors Program assessor assessed the implementation, appropriateness, and effectiveness of the system's security controls. This was achieved through two security assessment stages, as dictated in the Australian Government Information Security Manual (ISM):

Stage 1: Security assessment identifies security deficiencies that the system owner rectifies or mitigates
Stage 2: Security assessment assesses residual compliance

Cloud4C achieved a successful assessment under all applicable Information Security Manual controls, ensuring a zero-risk environment and protection of critical information systems, with support from Microsoft.

The Microsoft team provided guidance around best practices on how to leverage Azure native tools to achieve compliance. The Microsoft solution architect and engineering team participated in the design discussions and brought an existing knowledge base around Azure native security tools, integration scenarios for third party security tools, and possible optimizations in the architecture.

During the assessment, Cloud4C and the Information Security Registered Assessors Program assessor performed the following activities:

Designed the system architecture incorporating all components and stakeholders involved in the overall communication
Mapped security compliance against the Australian government security policy
Identified physical facilities, the Azure Data centers Australia Central and Australia Central 2, that are certified by the Information Security Registered Assessors Program
Implemented Information Security Manual security controls
Defined mitigation strategies for any non-compliance
Identified risks to the system and defined the mitigation strategy

Steps to ensure automation and process improvement

Quick deployment using Azure Resource Manager (ARM) templates combined with tooling. This helped deploy large landscapes comprising more than 100 virtual machines and 10 SAP solutions in less than a month.
Process automation using Robotic Process Automation (RPA) tools. This helped identify the business-as-usual stage within the SAP ecosystem deployed for the Information Security Registered Assessors Program environment, and enhanced processes to minimize disruption to actual business processes, on top of infrastructure-level automation that ensures application availability.
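As a rough illustration of the first point, an ARM template that stamps out multiple virtual machines with a copy loop can be sketched as below. The resource names, region, and VM size are placeholders; a real SAP landscape template would include many more resources (networking, disks, extensions) and SAP-certified SKUs throughout.

```python
import json

# Minimal, illustrative skeleton of an Azure Resource Manager (ARM) template
# of the kind used for repeatable large-landscape deployments. This is a
# sketch, not a deployable SAP template.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        # One parameter controls how many identical VMs are stamped out.
        "vmCount": {"type": "int", "defaultValue": 2}
    },
    "resources": [
        {
            "type": "Microsoft.Compute/virtualMachines",
            "apiVersion": "2019-07-01",
            "name": "[concat('sapvm', copyIndex())]",  # sapvm0, sapvm1, ...
            "location": "australiacentral",
            "copy": {"name": "vmLoop", "count": "[parameters('vmCount')]"},
            "properties": {
                "hardwareProfile": {"vmSize": "Standard_E16s_v3"}
            },
        }
    ],
}

print(json.dumps(template, indent=2)[:80])
```

Because the template is declarative, the same file deploys one VM or one hundred by changing a parameter, which is what compresses a 100-plus VM landscape rollout into weeks rather than months.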

Learnings and respective solutions that were implemented during the process

The Azure Australia Central and Australia Central 2 regions are connected over fibre links with sub-millisecond latency, so the SAP application and SAP HANA database could replicate in synchronous mode and a zero recovery point objective (RPO) was achieved.
Azure Active Directory Domain Services was not available in the Australia Central region, so the Australia Southeast region was leveraged to ensure seamless delivery.
Azure Site Recovery was successfully used to replicate an SAP MaxDB database.
Traffic flowing over Azure ExpressRoute is not encrypted by default, so it was encrypted using a network virtual appliance from a Microsoft security partner.
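The zero-RPO result in the first learning follows directly from synchronous replication: a write is acknowledged only after both sites have it, so a failover never loses acknowledged data. The toy model below illustrates the point; it is a conceptual sketch, not a model of SAP HANA system replication internals.

```python
# Toy illustration of why synchronous replication yields a zero recovery
# point objective (RPO).
class ReplicatedLog:
    def __init__(self):
        self.primary, self.replica = [], []

    def write_sync(self, record):
        # Synchronous mode: commit to both sites before acknowledging.
        self.primary.append(record)
        self.replica.append(record)
        return "ack"

    def rpo_on_failover(self):
        # Acknowledged records missing from the replica would be lost data;
        # in synchronous mode this difference is always zero.
        return len(self.primary) - len(self.replica)

log = ReplicatedLog()
for i in range(5):
    log.write_sync(f"txn-{i}")

print(log.rpo_on_failover())  # 0
```

The trade-off is that each commit waits on the inter-site round trip, which is why the sub-millisecond fibre link between the two regions matters: without it, synchronous mode would be too slow and an asynchronous mode with a non-zero RPO would be needed.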

Complying with the Information Security Registered Assessors Program requires Australian Signals Directorate defined qualifications to be fulfilled and to pass through assessment phases. Cloud4C offered the following benefits:

Reduced time to market – Cloud4C completed the assessment process in nine months, compared to the typical industry timeline of one to two years.
Cloud4C’s experience and knowledge of delivering multiple regions and industry specific compliances for customers on Azure helped in mapping the right controls with Azure native and external security tools.

The partnership with Microsoft helped Cloud4C reach another milestone and take advantage of the security features that Azure, as a hyperscale cloud, has to offer to meet stringent regulatory and geographic compliance requirements.

Cloud4C has matured in its use of many of the security solutions readily available natively from Azure, as well as from the Azure Marketplace, to reduce time to market. Cloud4C utilized the Azure portfolio to its fullest, both to secure the customer's infrastructure and to encourage a secure culture in supporting its clients as an Azure Expert Managed Service Provider (MSP). The Azure security portfolio has been growing, and so has Cloud4C's use of its offerings.

Cloud4C and Microsoft plan to take this partnership to even greater heights in terms of providing an unmatched cloud experience to customers in the marketplace across various geographies and industry verticals.

Learn more

Azure Security Solutions from Microsoft
Azure Native Products
Workloads Migration to Azure
Cloud4C Azure Managed Services
Cloud4C solutions for SAP on Azure

Source: Azure

Reducing SAP implementations from months to minutes with Azure Logic Apps

It's always been a tricky business to handle mission-critical processes. Much of the technical debt that companies assume comes from having to architect systems that have multiple layers of redundancy, to mitigate the chance of outages that may severely impact customers. The process of both architecting and subsequently maintaining these systems has resulted in huge losses in productivity and agility throughout many enterprises across all industries.

The solutions that cloud computing provides help enterprises shift away from this cumbersome work. Instead of spending countless weeks or even months trying to craft an effective solution to the problem of handling critical workloads, cloud providers such as Azure now provide an out-of-the-box way to run your critical processes, without fear of outages, and without incurring costs associated with managing your own infrastructure.

One of the latest innovations in this category, developed by the Azure Logic Apps team, is a new SAP connector that helps companies easily integrate with the ERP systems that are critical to the day-to-day success of a business. Often, implementing these solutions can take teams of people months to get right. However, with the SAP connector from Logic Apps, this process often only takes days, or even hours!

What are some of the benefits of creating workflows with Logic Apps and SAP?

In addition to the broad value that cloud infrastructure provides, Logic Apps can also help:

Mitigate risk and reduce time-to-success from months to days when implementing new SAP integrations.
Make your migration to the cloud smoother by moving at your own speed.
Connect best-in-class cloud services to your SAP instance, no matter where SAP is hosted.

Logic Apps helps you turn your SAP instances from worrisome assets that need to be managed into value-generation centers by opening up new possibilities and solutions.

What's an example of this?

Take the following scenario—an on-premises instance of SAP receives sales orders from an e-commerce site for software purchases. In order to complete the entirety of this transaction, there are several points of integration that must happen—between the on-premises instance of the SAP ERP software, the service that generates new software license keys for the customer, the service that generates the customer invoice, and finally a service that emails the newly generated key to the customer, along with the final invoice.
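Conceptually, the scenario above is a sequential workflow: receive the order, generate a key, generate an invoice, email the customer. In Logic Apps each step would be a connector action (the SAP connector followed by the licensing, invoicing, and email services); the sketch below stands in for those connectors with invented stub functions.

```python
# Plain-Python sketch of the order-processing flow described above. The stub
# functions are hypothetical placeholders for Logic Apps connector actions,
# not real connector APIs.
def receive_sap_order(order_id):
    # Stand-in for the SAP connector receiving a sales order on-premises.
    return {"order_id": order_id, "product": "contoso-software", "amount": 499.0}

def generate_license_key(order):
    # Stand-in for the license-key service.
    return f"LIC-{order['order_id']:06d}"

def generate_invoice(order):
    # Stand-in for the invoicing service.
    return {"order_id": order["order_id"], "total": order["amount"]}

def email_customer(order, key, invoice):
    # Stand-in for an email connector; just report what would be sent.
    return f"sent key {key} and invoice for {invoice['total']} (order {order['order_id']})"

def process_order(order_id):
    order = receive_sap_order(order_id)
    key = generate_license_key(order)
    invoice = generate_invoice(order)
    return email_customer(order, key, invoice)

print(process_order(42))
# sent key LIC-000042 and invoice for 499.0 (order 42)
```

In the Logic Apps designer, this same chain is assembled visually from connectors rather than coded, and the platform handles retries, monitoring, and the secure on-premises hop described below.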

In this scenario, it is necessary to move between on-premises and cloud environments, which can often be tricky to accomplish securely. Logic Apps solves this by connecting securely and bidirectionally via a virtual network, ensuring that data stays safe.

Leveraging both Azure and Logic Apps, this solution can be done with a team of one, in a minimal amount of time, and with diminished risk of impacting other key business activities.

If you’re interested in trying this for yourself, or learning more about how we implemented this solution, you can follow along with Microsoft Mechanics as they walk through, step-by-step, how they implemented this solution.

How do I get started?

Azure Logic Apps reduces the complexity of creating and managing critical workloads in the enterprise, freeing up your team to focus on delivering new processes that drive key business outcomes.

Get started today:

Logic Apps

Logic Apps and SAP
Quelle: Azure
Source: Azure