Simplify disaster recovery with Managed Disks for VMware and physical servers

Azure Site Recovery (ASR) now supports disaster recovery of VMware virtual machines and physical servers by replicating directly to Managed Disks. Beginning in March 2019, all new protections configured in the Azure portal have this capability. You no longer need to create storage accounts to enable replication for a machine; replication data is written directly to a Managed Disk. Choose the Managed Disk type based on the data change rate on your source disks. The available options are Standard HDD, Standard SSD, and Premium SSD.
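The disk-type decision can be sketched as a small helper. The churn thresholds below are illustrative assumptions for this sketch, not official ASR churn limits; check the documented churn limits for your disk sizes before choosing a type:

```python
# Sketch: pick a target Managed Disk type from the observed source-disk churn.
# The thresholds are illustrative assumptions, NOT official ASR limits.

def choose_managed_disk_type(churn_mbps: float) -> str:
    """Return a Managed Disk type for a given average data change rate (MB/s)."""
    if churn_mbps <= 2:        # low churn: cheapest option
        return "Standard HDD"
    if churn_mbps <= 5:        # moderate churn
        return "Standard SSD"
    return "Premium SSD"       # high churn needs premium throughput

print(choose_managed_disk_type(1.0))   # Standard HDD
print(choose_managed_disk_type(8.0))   # Premium SSD
```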

Note that this change does not affect machines that are already protected; they continue to replicate to storage accounts. However, you can still choose to use Managed Disks at the time of failover by updating the settings in the Compute and Network blade.

There are benefits in writing to Managed Disks:

Hassle-free capacity management on Microsoft Azure: You no longer need to track and manage multiple target storage accounts. ASR creates the replica disks when you enable replication, one Azure Managed Disk for every on-premises virtual machine (VM) disk, and Azure manages these disks for you.
Seamless movement between different types of Managed Disks: If the data change rate or churn pattern on your source disk changes after you enable protection, you do not need to disable and re-enable replication with Managed Disks. You can simply switch the Managed Disk type to handle the new data change rate. After changing the Managed Disk type, be sure to wait for fresh recovery points to be generated before performing a test failover or failover.

The refined replication architecture for ASR first uploads replication logs to a cache storage account in Azure. ASR processes these logs and then pushes the data into the replica Managed Disk in Azure. Snapshots are created on these Managed Disks at the frequency defined by the replication policy applied when you enable replication. You can find the names of the replica and target Managed Disks on the Disks blade of the replicated item. At the time of failover, you choose one of the recovery points on the replica Managed Disk. This recovery point is used to create the target Managed Disk in Azure, which is attached to the VM when it is brought up.
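The flow above, snapshots taken at a policy-defined frequency and one recovery point chosen at failover, can be sketched as follows. The function names and the 4-hour frequency are illustrative assumptions for this sketch, not the ASR API or a default policy:

```python
from datetime import datetime, timedelta

# Sketch of recovery-point bookkeeping on a replica Managed Disk.
# Names and values are illustrative; this is not the ASR API.

def generate_recovery_points(start: datetime, hours: int, frequency: timedelta):
    """Snapshot timestamps taken at the policy-defined frequency."""
    points, t, end = [], start, start + timedelta(hours=hours)
    while t <= end:
        points.append(t)
        t += frequency
    return points

def latest_point_before(points, failover_time: datetime) -> datetime:
    """At failover, pick the most recent recovery point available."""
    eligible = [p for p in points if p <= failover_time]
    if not eligible:
        raise ValueError("no recovery point available yet")
    return max(eligible)

points = generate_recovery_points(datetime(2019, 3, 1), hours=24,
                                  frequency=timedelta(hours=4))
print(latest_point_before(points, datetime(2019, 3, 1, 13, 30)))
# 2019-03-01 12:00:00
```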

We recommend locally redundant storage (LRS) as the replication option for the cache storage account. Because the cache account is standard storage and holds only temporary data, you do not need multiple cache storage accounts in a Recovery Services vault.

Get started with ASR today. Support for writing to Managed Disks is available in all Azure regions and will be released in national clouds soon!

Related links and additional content

Tag Managed Disks in Azure for billing
Learn more about pricing of Managed Disks
Learn more about Azure Site Recovery churn limits
Set up disaster recovery for VMware or physical machines to Azure

Source: Azure

Spinning up cloud-scale analytics is even more compelling with Talend and Microsoft

Special thanks to Lee Schlesinger and the Talend team for their contribution to this blog post. Following the significant February 2019 announcement around the continued price-performance leadership of Azure SQL Data Warehouse, Talend announced support of Stitch Data Loader for Azure SQL Data Warehouse. Stitch Data Loader is Talend’s recent addition to its offering portfolio for small and mid-market customers. With Stitch Data Loader, customers can load 5 million rows per month into Azure SQL Data Warehouse for free or scale up to an unlimited number of rows with a subscription.

All across the industry, there is a rapid shift to the cloud. Adopting a fast, flexible, and secure cloud data warehouse is an important first step in that journey. With Microsoft Azure SQL Data Warehouse and Stitch Data Loader, companies can get started faster than ever. The fact that Azure SQL Data Warehouse can be up to 14x faster and 94 percent less expensive than similar options in the marketplace should only help further accelerate the adoption of cloud-scale analytics by customers of all sizes.

Building pipelines to the cloud with Stitch Data Loader

The Stitch team built the Azure SQL Data Warehouse integration with the help of Microsoft engineers. The solution leverages Azure Blob Storage and PolyBase to get data into the Azure cloud and ultimately load it into SQL Data Warehouse. Stitch takes care of data type transformation between source and destination, schema changes, and bulk loading.

To start moving data, just specify your host address and database name and provide authentication credentials. Stitch will then start loading data from all of your sources in minutes.

Stitch Data Loader enables Azure SQL Data Warehouse users to analyze data from more than 90 data sources, including databases, SaaS tools, and ad networks. We also sponsor and integrate with the Singer open source ETL project, which makes it easy to get additional or custom data sources into Azure SQL Data Warehouse.

Stitch’s destination switching feature also makes it easy for existing Stitch users to point their existing integrations at Azure SQL Data Warehouse and start loading data right away.

Going further with Talend Cloud and Azure SQL Data Warehouse

What if you’re ready to scale out your data warehousing efforts and layer on data transformation, profiling, and quality? Talend Cloud offers many more sources, as well as more advanced data processing and data quality features that work with Azure SQL Data Warehouse and the Azure platform. With over 900 connectors available, you’ll be able to move all your data, no matter the format or source. With data preparation and additional security features built in, you can get Azure-ready in no time.

Take Uniper, for instance. Using Azure and Talend Cloud, they built a cloud-based data analytics platform to integrate over 100 data sources, including temperature and IoT sensors, from various external and internal sources. They constructed the full flow of business transactions, spanning market analytics, trading, asset management, and post-trading, while enabling data governance and self-service, reducing integration costs by 80 percent and achieving ROI in six months.

What’s next?

Start your free trial of Stitch today and load data into Azure SQL Data Warehouse in minutes.
Find out more about the Azure SQL Data Warehouse’s unmatched price-performance and related announcements from Microsoft.

Source: Azure

Azure Databricks – VNet injection, DevOps Version Control and Delta availability

Azure Databricks provides a fast, easy, and collaborative Apache® Spark™-based analytics platform to accelerate and simplify the process of building big data and AI solutions that drive the business forward, all backed by industry-leading SLAs.

With Azure Databricks, you can set up your Spark environment in minutes and autoscale quickly and easily. You can also apply your existing skills and collaborate on shared projects in an interactive workspace with support for Python, Scala, R, and SQL, as well as data science frameworks and libraries like TensorFlow and PyTorch.

We’re continuously listening to customers and answering questions as we evolve this service. This blog outlines important service announcements that we are proud to deliver for our customers.

Azure Databricks Delta available in Standard and Premium SKUs

Azure Databricks Delta brings new levels of reliability and performance for production workloads based on a number of improvements including transaction support, schema validation, indexing, and data versioning.

Since the preview of Delta was announced, we have received overwhelmingly positive feedback on how it has helped customers build complex pipelines for both batch and streaming data, and simplified ETL pipelines. We are excited to announce that Delta is now available in our Standard SKU offering in addition to the Premium SKU offering, so you can leverage its capabilities to the fullest and build pipelines more efficiently. Now everyone can get the benefits of Databricks Delta’s reliability and performance.

You can read more about Azure Databricks Delta in our guide, “Introduction to Databricks Delta,” and import our quickstart notebook.

Azure DevOps Services Version Control

Azure DevOps is a collection of services that provide an end-to-end solution for the five core practices of DevOps: planning and tracking, development, build and test, delivery, and monitoring and operations.

Initially, we started with GitHub integration for Azure Databricks notebooks. By popular demand, we have introduced the ability to set your Git provider to Azure DevOps Services.

Authentication with Azure DevOps Services is done automatically when you authenticate using Azure Active Directory (Azure AD). The Azure DevOps Services organization must be linked to the same Azure AD tenant as Databricks. You can easily set your Git provider to Azure DevOps Services as shown in the documentation, “Azure DevOps Services Version Control.”

Deploy Azure Databricks in your own Azure virtual network (VNet injection) preview

By default, we deploy and manage your clusters for you in managed VNETs, with peering enabled. We create and manage these VNETs, but they reside in your subscription. We also manage the accompanying network security group rules.

Some customers, however, require network customization. I am pleased to announce that you can now deploy Azure Databricks in your own existing virtual network (VNet injection). This lets you connect Azure Databricks to other Azure services, such as Azure Storage, in a secure manner using service endpoints, or to on-premises data sources by taking advantage of user-defined routes. You can also connect Azure Databricks to a network virtual appliance to inspect all outbound traffic and take actions according to allow and deny rules, configure Azure Databricks to use custom DNS, and configure network security group (NSG) rules to specify egress traffic restrictions.

Deploying Azure Databricks to your own virtual network also lets you take advantage of flexible CIDR ranges. See the documentation to quickly and easily configure Azure Databricks in your VNet using the Azure portal UI.

Get started today!

Try Azure Databricks and let us know your feedback!

Try Azure Databricks through a 14-day premium trial with free Databricks Units.
Sign up for the webinar on Machine Learning on Azure.
Watch the video on how to get started with Apache Spark on Azure Databricks.
Visit the repository of Azure Databricks resources to continue learning.

Source: Azure

Azure Marketplace new offers – Volume 33

We continue to expand the Azure Marketplace ecosystem. From February 1 to February 15, 2019, 50 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Virtual machines

Attunity Replicate: Attunity Replicate integrates data in real time to Azure targets, including Azure SQL Data Warehouse, Azure SQL Database, and Azure Event Hubs, and it helps load, ingest, migrate, distribute, consolidate, and synchronize data.

Cyber Security Assessment Tool (CSAT): The Cyber Security Assessment Tool (CSAT) from QS solutions provides insight into security vulnerabilities through automated scans and analyses.

Fortinet FortiSandbox Advanced Threat Protection: FortiSandbox for Azure enables organizations to defend against advanced threats natively in the cloud, alongside third-party security solutions, or as an extension to their on-premises security architectures.

InterSystems IRIS for Health Community Edition: InterSystems IRIS for Health provides the capabilities for building complex, data-intensive applications. It’s a comprehensive platform spanning data management, interoperability, transaction processing, and analytics.

KNIME Server: KNIME Server offers shared repositories, advanced access management, flexible execution, web enablement, and commercial support. Share data, nodes, metanodes, and workflows across your team and throughout your company.

ME PasswordManagerPro 20 admins,25 keys: ManageEngine Password Manager Pro is a web-based, privileged identity management solution that lets you manage privileged identity passwords, SSH keys, and SSL certificates.

MODX on Windows Server 2016: MODX is an agile and user-friendly content management system that offers unlimited scalability and high flexibility.

MODX on Windows Server 2019: MODX is an agile and user-friendly content management system that offers unlimited scalability and high flexibility.

Panzura Freedom NAS 7.1.8.0: Panzura Freedom Filer is a hybrid cloud data management solution that enables global enterprise customers to consolidate their data islands into a single source of truth in Azure.

Puppet Enterprise 2018.1.7: Puppet Enterprise lets you automate the entire lifecycle of your Azure infrastructure, simply and securely, from initial provisioning through application deployment.

Vemn Digital Folder: Digital Folder is a web application that facilitates the management of your organization’s digital documents, creating sustainable digital transformation.

WALLIX Bastion: With an unobtrusive architecture, full multitenancy, and virtual appliance packaging, WALLIX Bastion (WAB Suite) provides an effective route to security and compliance.

Web applications

Discovery Hub and Azure SQL DB: Discovery Hub supports core analytics, the modern data warehouse, IoT, and AI. Developed with a cloud-first mindset, Discovery Hub provides a cohesive data fabric across Microsoft on-premises technology and Azure Data Services.

Discovery Hub and Azure SQL DB and AAS: Discovery Hub Application Server for Azure is a high-performance data management platform that accelerates your time to data insights.

Forcepoint Email Security V8.5.3: Forcepoint Email Security is an enterprise email and data loss prevention solution offering inbound and outbound protection against malware, blended threats, and spam.

MultiChain on Azure: Save time configuring servers and installing MultiChain, a leading enterprise blockchain platform, using these templates.

NetGovern: NetGovern enables legal supervisors, attorneys, paralegals, and case administrators to perform e-discovery on file systems, email archives, SharePoint, and file-sharing solutions such as Box.com and Citrix ShareFile.

NetGovern Multitenant: Enable anyone to rapidly respond to e-discovery requests. This application will deploy a shared infrastructure layer with one tenant preloaded for cloud service providers to deliver service to their customers.

Radware Alteon VA Application Cluster: This Alteon virtual appliance on Azure provides a simple and agile way to consume and deploy all standard ADC functionality as well as advanced services like WAF, acceleration, and application performance monitoring.

S2IX – Secure Search and Information Exchange: Secure by design, S2IX provides a protected business process automation solution. This gives users the ability to collaborate and manage documents in an environment they can depend on, even in remote or challenging locations.

Starburst Presto (v 0.213-e) for Azure HDInsight: Architected for the separation of storage and compute, Presto is perfect for querying data in Azure Blob storage, Azure Data Lake storage, relational databases, Cassandra, MongoDB, Kafka, and many others.

Veritas Resiliency Platform Repository Server: Veritas Resiliency Platform (VRP) provides single-click disaster recovery and migration for any source workload into Azure. This version of Veritas Resiliency Platform Repository Server will upgrade your installation.

Container solutions

Drupal with NGINX Container Image: Drupal with NGINX enhances the popular open-source content management system with the performance and security of NGINX. Drupal's modular architecture lets you create many different types of websites and applications.

Consulting services

4-Week Azure Assessment: This cloud migration assessment by Quisitive is designed to help organizations assess cloud readiness, evaluate best paths for their application environment, and build a clear road map and ROI view for potential Azure migration.

AgileAscend-M365 Migration: 3 week Implementation: Working alongside your project manager and IT staff, Agile IT's award-winning professional team will assist in planning, preparing, and migrating on-premises infrastructure to Microsoft Azure.

AgileProtect: Azure Data Backup – 3-Week Imp.: AgileProtect Standard is designed for business-critical systems. AgileProtect Standard backs up everything on the system, enabling a complete replica to be spun up on suitable hardware at will.

AgileSecurity: Intune – 3 week implementation: With Agile IT, define a mobile device management strategy that fits the needs of your organization. Set granular app policies to containerize data access while preserving the familiar Office 365 user experience.

Azure 5-Day Proof of Concept (POC): Chorus IT's proof of concept allows you to evaluate Azure with a small-scale partial implementation or focus on a particular area you want to evaluate.

Azure Accelerate: 2-week Proof of Concept: iLink Systems Inc.'s proof of concept is designed to streamline an organization’s journey to the cloud through a combination of training, workshops, comparative analysis, and rapid prototyping.

Azure Analytics services – 2 Hour Briefing: This briefing by Incremental Group will take you through the capabilities available from Azure Analytics and discuss how these could help your organization.

Azure Cost Optimization: 3 Week Assessment: This assessment by DXC focuses on analyzing current Azure consumption to identify opportunities to right-size the environment, inclusive of storage, networking, and virtual machines.

Azure Managed CI/CD Pipeline: 8-Wk Implementation: 2nd Watch will identify your environment requirements and implement CI/CD pipeline tools to your specifications, accelerating your adoption of agile methodologies.

Azure Migration Quickstart: 4 Week POC: The Azure Migration Quickstart by DXC works to test an initial workload of O/S, application, and/or database to migrate into Azure as a proof of concept.

Azure Performance Optimization: 3 Week Assessment: DXC's Azure Performance assessment provides a data-driven review of your existing Azure environment to help identify and resolve performance challenges.

Azure Security Managed Services: 2 Wk Assessment: In this assessment, DXC consultants will implement one or more tools to assist in the security review of your Azure configuration and architecture.

Cloud Cost Optimisation – 10 day implementation: risual's implementation allows organizations to track and monitor their costs on Azure to ensure they are getting the most value possible.

Cloud Readiness Assessment: 1-Day: Incremental Group’s Cloud Readiness Assessment reviews an organization's IT infrastructure and supports the organization's future business plans while assessing the impact they could have on the IT infrastructure.

DevOps Quality Services: 8 Wk Implementation: Sogeti USA offers assistance in designing and implementing DevOps quality programs, including establishing an automation testing framework, building a quality pipeline, and providing recommendations on enterprise metrics.

Identity and Access Management – 2 Hour Briefing: One of Incremental Group’s expert consultants will talk you through the wide range of services available from Microsoft Azure to help you manage your identity and access management.

Microsoft Azure AI Chatbot: 1-Hr Assessment: In this assessment, Cynoteck Technology Solutions will discuss AI chatbot development. Learn how chatbots and Azure Bot Service can benefit your business.

Microsoft Azure Health Check: 1 Week Assessment: This health check by DXC will involve a review of cloud architecture, cost optimization, Azure security best practices, and configuration best practices.

Migrate Dynamics GP to Azure: 1 Hr Assessment: Syvantis Technologies will walk you through the process of migrating your Microsoft Dynamics GP system to Microsoft Azure.

Modernize Your Apps – 2 Week Assessment: Modernize your legacy systems and build new applications in Azure with SPR Consulting's two-week assessment to help you take advantage of the flexibility of Azure.

PCI Azure Implementation Services: 4 Week POC: DXC will build a detailed proof of concept to offload the bulk of deploying and managing PCI-compliant workloads in Microsoft Azure.

SAP to Azure Migration: 1-Day Workshop: This SAP on Azure workshop by Infopulse will cover best practices for architecting, developing, and managing SAP services and apps in Azure. Customers should have good architectural knowledge in SAP Basis and Azure services.

SAP to Azure Migration: 1-Hour Briefing: Infopulse's briefing will help you understand the key benefits and challenges of a SAP-to-Azure migration and choose the best cloud solution for your business.

SAP to Azure Migration: 2-Week PoC: Infopulse will help you develop a proof of concept to validate the feasibility of your ideas and identify the benefits of migrating your on-premises SAP solution to Microsoft Azure.

SAP to Azure Migration Readiness: 1-Day Assessment: Infopulse will help you identify business drivers and the potential challenges of a SAP migration to Azure, then gather all requirements and create a suitable migration strategy.

Small Systems Mainframe Migration: 3-Wk Assessment: This assessment by Asysco Inc. will investigate smaller mainframe systems in order to develop a plan to migrate to Azure.

TCO & Cloud Readiness Assessment – 6 Wk Assessment: Ensono's assessment will involve installing a console server (built by the customer), gathering data, creating an HCP tenant, ingesting the initial server list, and conducting analysis.

Source: Azure

Create a transit VNet using VNet peering

Azure Virtual Network (VNet) is the fundamental building block for any customer network. A VNet lets you create your own private space in Azure, or, as I call it, your own network bubble. VNets are crucial to your cloud network, as they offer isolation, segmentation, and other key benefits. Read more about VNets’ key benefits in our documentation, “What is Azure Virtual Network?”

With VNets, you can connect your network in multiple ways. You can connect to on-premises networks using Point-to-Site (P2S), Site-to-Site (S2S), or ExpressRoute gateways. You can also connect to other VNets directly using VNet peering.

A customer network can be expanded by peering virtual networks to one another. Traffic sent over VNet peering is completely private and stays on the Microsoft backbone, with no extra hops or public Internet involved. Customers typically leverage VNet peering in a hub-and-spoke topology, where the hub contains shared services and gateways, and the spokes contain business units or applications.

Gateway transit

Today I’d like to do a refresh of a unique and powerful functionality we’ve supported from day one with VNet peering. Gateway transit enables you to use a peered VNet’s gateway for connecting to on-premises instead of creating a new gateway for connectivity. As you increase your workloads in Azure, you need to scale your networks across regions and VNets to keep up with the growth. Gateway transit allows you to share an ExpressRoute or VPN gateway with all peered VNets and lets you manage the connectivity in one place. Sharing enables cost-savings and reduction in management overhead.

With Gateway transit enabled on VNet peering, you can create a transit VNet that contains your VPN gateway, Network Virtual Appliance, and other shared services. As your organization grows with new applications or business units and as you spin up new VNets, you can connect to your transit VNet with VNet peering. This prevents adding complexity to your network and reduces management overhead of managing multiple gateways and other appliances.

VNet peering works across regions, across subscriptions, across deployment models (classic to ARM), and across subscriptions belonging to different Azure Active Directory tenants.

You can create a transit VNet like the one shown below.

Easy to set up – just check a box!

To use this powerful capability, simply check a box.

Create or update the virtual network peering from Hub-RM to Spoke-RM inside the Azure portal. Navigate to the Hub-RM VNet or the VNet with the gateway you’d like to use for gateway transit, and select Peerings, then Add:

Set the Allow gateway transit option
Select OK

Create or update the virtual network peering from Spoke-RM to Hub-RM from the Azure portal. Navigate to the Spoke-RM VNet, select Peerings, then Add:

Select the Hub-RM virtual network in the corresponding subscription
Set the Use remote gateways option
Select OK
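The two-sided configuration above can be sketched as a simple validation check. The function and flag names are illustrative, not the Azure SDK; the rule that a spoke using a remote gateway cannot deploy a gateway of its own reflects the peering constraint:

```python
# Sketch: validate a hub-and-spoke gateway-transit configuration.
# Field names are illustrative; this is not the Azure SDK.

def validate_transit(hub_allows_gateway_transit: bool,
                     spoke_uses_remote_gateways: bool,
                     spoke_has_own_gateway: bool) -> list:
    """Return a list of configuration problems (an empty list means OK)."""
    problems = []
    if not hub_allows_gateway_transit:
        problems.append("Hub peering must set 'Allow gateway transit'.")
    if not spoke_uses_remote_gateways:
        problems.append("Spoke peering must set 'Use remote gateways'.")
    if spoke_uses_remote_gateways and spoke_has_own_gateway:
        problems.append("A spoke using a remote gateway cannot have its own gateway.")
    return problems

print(validate_transit(True, True, False))   # [] -> configuration is valid
```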

You can learn more in this detailed step-by-step guide on how to configure VPN gateway transit for virtual network peering.

Security

You can use Network Security Groups and security rules to control traffic between on-premises networks and your Azure VNets. Security policies can be centralized in the hub or transit VNet, where a network virtual appliance (NVA) can inspect all traffic going to on-premises as well as into Azure. Since the policy is set in a central VNet, you only need to set it up once.

Routing

We plumb the routes, so you don’t have to. Every Azure Virtual Machine (VM) you deploy benefits from routes being plumbed automatically. To confirm a virtual network peering, you can check the effective routes for a network interface in any subnet of the virtual network. If a virtual network peering exists, all subnets within the virtual network have routes with the next hop type VNet peering for each address space in each peered virtual network.
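The route plumbing described above can be illustrated with a simplified model: for each address space of each peered virtual network, a route with next hop type VNet peering is added. This sketch ignores default system routes and user-defined routes, and the names are illustrative:

```python
# Sketch: the effective routes a peering adds, one per remote address space.
# Simplified model; real effective routes also include defaults and UDRs.

def peering_routes(peered_vnets: dict) -> list:
    """peered_vnets maps VNet name -> list of address prefixes."""
    routes = []
    for vnet, prefixes in peered_vnets.items():
        for prefix in prefixes:
            routes.append({"prefix": prefix,
                           "next_hop_type": "VNetPeering",
                           "peer": vnet})
    return routes

routes = peering_routes({"Hub-RM": ["10.0.0.0/16"],
                         "Spoke2": ["10.2.0.0/16", "10.3.0.0/24"]})
print(len(routes))  # 3
```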

Monitoring

You can check the health status of your VNet peering connection in the Azure portal. Connected means the peering is established and you are good to go. Initiated means the second link still needs to be created. Disconnected means the peering has been deleted from one side.

You can also troubleshoot connectivity to a virtual machine in a peered virtual network using Network Watcher's connectivity check. Connectivity check lets you see how traffic is routed from a source virtual machine's network interface to a destination virtual machine's network interface as seen below.

Limits

You can peer a VNet with up to 100 other VNets. We’ve increased this limit fourfold, and as our customers scale in Azure we will continue to raise it. Stay updated on limits by visiting our documentation, “Azure subscription and service limits, quotas, and constraints.”

Pricing

You pay only for traffic that goes through the gateway; there is no double charge. Traffic passing through a remote gateway in this scenario is subject to VPN gateway charges and does not incur VNet peering charges. For example, if VNetA has a VPN gateway for on-premises connectivity and VNetB is peered to VNetA with the appropriate properties configured, traffic from VNetB to on-premises is charged only for egress per VPN gateway pricing. VNet peering charges do not apply.
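As a worked example of this rule, with a hypothetical per-GB rate (actual VPN gateway egress prices vary by region and gateway SKU):

```python
# Worked example of the charging rule above, using a hypothetical rate.
# Real VPN gateway egress prices vary by region and gateway SKU.

VPN_EGRESS_PER_GB = 0.035  # hypothetical $/GB egress through the VPN gateway

def cost_via_remote_gateway(gb: float) -> float:
    """Spoke -> hub gateway -> on-premises traffic: VPN gateway egress only.
    No VNet peering charge is added for this path."""
    return gb * VPN_EGRESS_PER_GB

print(round(cost_via_remote_gateway(100), 2))  # 3.5 at the assumed rate
```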

Availability

VNet peering with gateway transit works across the classic Azure Service Management (ASM) and Azure Resource Manager (ARM) deployment models, provided the gateway is in the ARM VNet. It also works across subscriptions, including subscriptions belonging to different Azure Active Directory tenants.

Gateway transit has been available since September 2016 for VNet peering in all regions and will be available for global VNet peering shortly.

Gateway transit with VNet peering enables you to create a transit VNet that keeps your shared services in a central location. To keep up with your growing scale, you can scale out your VNets and use your existing VPN gateway, saving management overhead, cost, and time. We developed a template you can use to get started. Try it out today!
Source: Azure

Stay informed about service issues with Azure Service Health

When your Azure resources go down, one of your first questions is probably, “Is it me or is it Azure?” Azure Service Health helps you stay informed and take action when Azure service issues like incidents and planned maintenance affect you by providing a personalized health dashboard, customizable alerts, and expert guidance.

In this blog, we’ll cover how you can use Azure Service Health’s personalized dashboard to stay informed about issues that could affect you now or in the future.

Monitor Azure service issues and take action to mitigate downtime

You may already be familiar with the Azure status page, a global view of the health of all Azure services across all Azure regions. It’s a good reference for major incidents with widespread impact, but we recommend using Azure Service Health to stay informed about Azure incidents and maintenance. Azure Service Health only shows issues that affect you, provides information about all incidents and maintenance, and has richer capabilities like alerting, shareable updates and RCAs, and other guidance and support.

Azure Service Health tracks three types of health events that may impact you:

Service issues: Problems in Azure services that affect you right now.
Planned maintenance: Upcoming maintenance that can affect the availability of your services in the future. Typically communicated at least seven days prior to the event.
Health advisories: Health-related issues that may require you to act to avoid service disruption. Examples include service retirements, misconfigurations, exceeding a usage quota, and more. Usually communicated at least 90 days prior, with notable exceptions including service retirements, which are announced at least 12 months in advance, and misconfigurations, which are immediately surfaced.

Learn more about your personalized health dashboard.

Get started with Azure Service Health

Azure Service Health’s dashboard provides a large amount of information about incidents, planned maintenance, and other health advisories that could affect you. While you can always visit the dashboard in the portal, the best way to stay informed and take action is to set up Azure Service Health alerts. With alerts, as soon as we publish any health-related information, you’ll get notified on whichever channels you prefer, including email, SMS, push notification, webhook into ServiceNow, and more.

Below are a few resources to help you get started:

Review your Azure Service Health dashboard and set up alerts.
If you need help getting started, check our Azure Service Health documentation.
We always welcome feedback. Submit your ideas or email us with any questions or comments at servicehealth@microsoft.com.

Source: Azure

Economist study: OEMs create new revenue streams with next-gen supply chains

Original equipment manufacturers (OEMs) make the wheels go round for the business world. But demand for faster, cheaper, and smarter products and components puts major downward pressure on profit margins. Successful OEMs are always on the lookout for opportunities to drive down costs and differentiate their brands, and the rise of the Internet of Things (IoT) offers a golden opportunity to do so by embracing fundamental supply chain transformation.

To get a better understanding of the benefits, best practices, and current state of play in supply chain transformation, we enlisted The Economist Intelligence Unit to survey 250 senior executives at OEMs in North America, Europe, and Asia-Pacific. Those conversations formed the basis of the new study, Putting customers at the center of the supply chain. Here are some of the intriguing highlights.

Creating the intelligent supply chain

According to the study, 99 percent of OEMs believe supply chain transformation is important to meet their organizations’ strategic objectives. The vast majority, 97 percent, consider cloud technology to be an essential component of that transformation, which makes sense given that cloud offers the unprecedented ability to collect and analyze data at scale. To date, just 61 percent have embraced cloud across their organization—meaning that for many, cloud remains an obvious and notable opportunity.

Beyond cloud, IoT presents a significant opportunity for OEMs. IoT is the fundamental technology underpinning smart products and components, like embedded sensors that monitor performance, or telemetry systems on connected vehicles.

IoT-enabled products and components can effectively extend the supply chain to include the customer, enabling the delivery of software updates directly, while providing ongoing access to data about how offerings are being used. This adds supply-chain complexity but also delivers significant new business opportunities.

This extension of the supply chain gives OEMs the ability to get a far deeper understanding of customer behaviors and needs and to better serve customers via add-on services based on that deeper understanding. To optimize the value of the customer data they collect, some are even embracing entirely new business models.

Armed with real, data-based insights into exactly how and when their products are being used, OEMs can become service providers, and shift from selling products to customers to charging them subscription or per-use fees. Rolls-Royce, for example, charges a monthly fee for customers of its jet engines that is based on flying hours. Industrial machinery makers like Sandvik Coromant are also now charging customers based on use.

Other emerging technologies that OEMs are turning to for help transforming supply chains include:

Robotics, which generates valuable data while performing tasks like product assembly and order picking faster and more accurately than humans.
Artificial intelligence (AI), used in smart products for capabilities like predictive maintenance.
Blockchain, which enables supply-chain stakeholders to share an immutably accurate record of deliveries.

These technologies can supercharge the collection, management, analysis, and security of supply-chain data. And like IoT, they can drive the creation of brand-new ways of doing business.

Best practices in supply-chain transformation

In a world where a growing number of things around us collect data about us, forward-thinking OEMs are increasingly embracing fundamental changes in their supply chains. With the goal of achieving operational excellence informed by a closed feedback loop with the customer, OEMs can deliver better service and products by better understanding and anticipating exactly what customers want and need.

To achieve this vision, they’re turning to technologies like cloud, IoT, AI, robotics, and blockchain. Learn more about the specific steps and approaches being taken in the full Economist report.
Source: Azure

AzCopy support in Azure Storage Explorer now available in public preview

We are excited to share the public preview of AzCopy in Azure Storage Explorer. AzCopy is a popular command-line utility that provides performant data transfer into and out of a storage account. The new version of AzCopy (v10) further enhances performance and reliability through a scalable design in which concurrency is scaled up according to the number of the machine’s logical cores. The tool’s resiliency is also improved through automatic retries of failed transfers.
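The design idea behind this, concurrency proportional to the machine’s logical cores plus per-transfer retries, can be sketched in a few lines of Python. This is an illustration of the pattern, not AzCopy’s actual implementation; `upload_block` and the retry count are hypothetical stand-ins.

```python
import os
from concurrent.futures import ThreadPoolExecutor

MAX_RETRIES = 3  # illustrative value; AzCopy's real retry policy differs


def upload_block(block_id):
    """Stand-in for a single block transfer; a real one may raise on transient failure."""
    return f"uploaded {block_id}"


def transfer_with_retry(block_id):
    # Retry each block a few times before giving up, for resiliency.
    for attempt in range(MAX_RETRIES):
        try:
            return upload_block(block_id)
        except OSError:
            if attempt == MAX_RETRIES - 1:
                raise


# Scale the worker pool with the number of logical cores, as AzCopy v10 does.
workers = os.cpu_count() or 1
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(transfer_with_retry, range(8)))
```

The same structure applies whether the unit of work is a block, a blob, or a file: a bounded worker pool sized to the hardware, with retry logic wrapped around each unit.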

Azure Storage Explorer provides a graphical interface for common storage tasks, and it now supports using AzCopy as a transfer engine to provide the highest throughput for transferring files to and from Azure Storage. This capability is available today as a preview in Azure Storage Explorer.

Enable AzCopy for blob upload and download

We have heard from many of you that the performance of your data transfer matters. Let’s be honest, we all have better things to do than wait around for files to be transferred to Azure. Now with AzCopy in Azure Storage Explorer, we give you all that time back!

With the AzCopy preview enabled, blob operations are faster than before. To enable the option, go to the Preview menu and select Use AzCopy for improved blob Upload and Download.

We are working on support for Azure Files and batch blob deletes. Feel free to let us know what you would like to see supported through our GitHub repository.

Figure 1: Enable AzCopy in Azure Storage Explorer

How fast is it?

In a quick test in our environment, we saw substantial improvements when uploading files with AzCopy in Azure Storage Explorer. Note that times may vary from machine to machine.

 
| Scenario | Storage Explorer | Storage Explorer with AzCopy v10 | Improvement |
| --- | --- | --- | --- |
| 10,000 x 100 KB files | 1 hour 36 minutes | 59 seconds | 98.9 percent |
| 100 x 100 MB files | 5 minutes 12 seconds | 1 minute 35 seconds | 69.5 percent |
| 1 x 10 GB file | 3 minutes 41 seconds | 1 minute 40 seconds | 54.7 percent |

Figure 2: Performance improvement from using AzCopy as the transfer engine for blob upload and download

Figure 3: AzCopy uploads/downloads blobs efficiently (1 x 10GB file)

Figure 4: AzCopy uploads/downloads blobs efficiently (10,000 x 10KB files)
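The improvement column in the table is simply the relative reduction in elapsed time, (old − new) / old. The reported figures appear to be truncated (not rounded) to one decimal place; under that assumption, a quick check reproduces them:

```python
import math


def improvement(old_seconds, new_seconds):
    """Relative reduction in transfer time, in percent, truncated to one decimal."""
    pct = (old_seconds - new_seconds) / old_seconds * 100
    return math.floor(pct * 10) / 10


# Durations from the table, converted to seconds.
small_files = improvement(1 * 3600 + 36 * 60, 59)     # 10K small files -> 98.9
medium_files = improvement(5 * 60 + 12, 1 * 60 + 35)  # 100 x 100 MB    -> 69.5
large_file = improvement(3 * 60 + 41, 1 * 60 + 40)    # 1 x 10 GB       -> 54.7
```

Note that the gain shrinks as file size grows: for a single large file, both engines are mostly bound by bandwidth, while for thousands of small files AzCopy’s parallelism dominates.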

Next steps

We invite you to try out the AzCopy preview feature in Azure Storage Explorer today, and we look forward to hearing your feedback. If you identify any problems or want to make a feature suggestion, please make sure to report your issue on our GitHub repository.
Source: Azure

Azure Stack IaaS – part four

Protect your stuff

In this post, we’ll cover the concepts and best practices to protect your IaaS virtual machines (VMs) on Azure Stack. This post is part of the Azure Stack Considerations for Business Continuity and Disaster Recovery white paper.

Protecting your IaaS virtual machine based applications

Azure Stack is an extension of Azure that lets you deliver IaaS Azure services from your organization’s datacenter. Consuming IaaS services from Azure Stack requires a modern approach to business continuity and disaster recovery (BC/DR). If you’re just starting your journey with Azure and Azure Stack, make sure to work through a comprehensive BC/DR strategy so your organization understands the immediate and long-term impact of modernizing applications in the context of cloud. If you already have Azure Stack, keep in mind that each application must have a well-articulated BC/DR plan calling out the resiliency, reliability, and availability requirements that meet the business needs of your organization.

What Azure Stack is and what it isn’t

Since launching Azure Stack at Ignite 2017, we’ve received feedback from many customers on the challenges they face evangelizing Azure Stack to their end customers within their organizations. The main concerns stem from the stark differences from traditional virtualization. In the context of modernizing BC/DR practices, three misconceptions stand out:

Azure Stack is just another virtualization platform

Azure Stack is delivered as an appliance on prescriptive hardware co-engineered with our integrated system partners. Your focus must be on the services delivered by Azure Stack and the applications your customers will deploy on the system. You are responsible for working with your application teams to define how they will achieve high availability, backup and recovery, disaster recovery, and monitoring in the context of modern IaaS, separate from the infrastructure running the services.

I should be able to use the same virtualization protection schemes with Azure Stack

Azure Stack is delivered as a sealed system with multiple layers of security to protect the infrastructure. Constraints include:

Azure Stack operators only have constrained administrative access to the system. Elevated access to the system is only possible through Microsoft support.
Scale unit nodes and infrastructure services have code integrity enabled.
At the networking layer, the traffic flow defined in the switches is locked down at deployment time using access control lists.

Given these constraints, there is no opportunity to install backup/replication agents on the scale-unit nodes, grant access to the nodes from an external device for replication and snapshotting, or physically attach external storage devices for storage level replication to another site.

Another ask from customers is the possibility of deploying one Azure Stack scale-unit across multiple datacenters or sites. Azure Stack doesn’t support a stretched or multi-site topology for scale-units. In a stretched deployment, the expectation is that nodes in one site can go offline with the remaining nodes in the secondary site available to continue running applications. From an availability perspective, Azure Stack only supports N-1 fault tolerance, so losing half of the node count will take the system offline. In addition, based on how scale-units are configured, Azure Stack only supports fault domains at a node level. There is no concept of a site within the scale-unit.

I am not deploying modern applications in Azure, none of this applies to me

Azure Stack is designed to offer cloud services in your datacenter. There is a clear separation between the operation of the infrastructure and how IaaS VM-based applications are delivered. Even if you’re not planning to deploy any applications to Azure, deploying to Azure Stack is not “business as usual” and will require thinking through the BC/DR implications throughout the entire lifecycle of your application.

Define your level of risk tolerance

With the understanding that Azure Stack requires a different approach to BC/DR for your IaaS VM-based applications, let’s look at the implications of having one or more Azure Stack systems, the physical and logical constructs in Azure Stack, and the recovery objectives you and your application owners need to focus on.

How far apart will you deploy Azure Stack systems

Let’s start by defining the impact radius you want to protect against in the event of a disaster. This can be as small as a rack in a co-location facility or an entire region of a country or continent. Within the impact radius, you can choose to deploy one or more Azure Stack systems. If the region is large enough, you may even have multiple datacenters close together, each with Azure Stack systems. The key takeaway is that if the site goes offline due to a disaster or catastrophic event, no amount of redundancy will keep the Azure Stack systems in that site online. If your intent is to survive the loss of an entire site, as the diagram below shows, then you must consider deploying Azure Stack systems into multiple geographic locations separated by enough distance that a disaster in one location does not impact the other locations.

Help your application owners understand the physical and logical layers of Azure Stack

Next it’s important to understand the physical and logical layers that come together in an Azure Stack environment. The Azure Stack system running all the foundational services and your applications physically reside within a rack in a datacenter. Each deployment of Azure Stack is a separate instance or cloud with its own portal. The diagram below shows the physical and logical layering that’s common for all Azure Stack systems deployed today and for the foreseeable future.

 

Define the recovery time objectives for each application with your application owners

Now that you have a clear understanding of your risk tolerance if a system goes offline, you need to decide on the protection schemes for your applications. You need to make sure you can quickly recover applications and data on a healthy system. This means making sure your applications are designed to be highly available within a scale-unit, using availability sets to protect against hardware failures. In addition, you should consider the possibility of an application going offline due to corruption or accidental deletion. Recovery can be as simple as scaling out an application or restoring from a backup.

To survive an outage of the entire system, you’ll need to identify the availability requirements of each application, where the application can run in the event of an outage, and what tools you need to introduce to enable recovery. If your application can run temporarily in Azure, you can use services like Azure Site Recovery and Azure Backup to protect it. Another option is to have additional Azure Stack systems fully deployed, operational, and ready to run applications. The time required to get the application running on a secondary system is the recovery time objective (RTO). This objective is established between you and the application owners. Some application owners will tolerate only minimal downtime, while others are OK with multiple days of downtime if the data is protected in a separate location. Achieving this RTO will differ from one application to another. The diagram below summarizes the common protection schemes used at the VM or application level.

 

In the event of a disaster, there will be no time to request an on-demand deployment of Azure Stack to a secondary location. If you don’t have a deployed system in a secondary location, you will need to order one from your hardware partner. The time required to deliver, install, and deploy the system is measured in weeks.

Establish the offerings for application and data protection

Now that you know what you need to protect on Azure Stack and your risk tolerance for each application, let’s review some specific patterns used with IaaS VMs.

Data protection

Applications deployed into IaaS VMs can be protected at the guest OS level using backup agents. Data can be restored to the same IaaS VM, to a new VM on the same system, or a different system in the event of a disaster. Backup agents support multiple data sources in an IaaS VM such as:

Disk: This requires block-level backup of one, some, or all disks exposed to the guest OS. It protects the entire disk and captures any changes at the block level.
File or folder: This requires file system-level backup of specific files and folders on one, some, or all volumes attached to the guest OS.
OS state: This requires backup targeted at the OS state.
Application: This requires a backup coordinated with the application installed in the guest OS. Application-aware backups typically include quiescing input and output in the guest for application consistency (for example, Volume Shadow Copy Service (VSS) in the Windows OS).

Application data replication

Another option is to use replication at the guest OS level or at the application level to make data available in a different system. The replication isn’t offloaded to the underlying infrastructure; it’s handled at the guest OS level or above. For example, applications like SQL Server support asynchronous replication in a distributed availability group.

High availability

For high availability, you need to start by understanding the data persistence model of your applications:

Stateful workloads write data to one or more repositories. It’s necessary to understand which parts of the architecture need point-in-time data protection and high availability to recover from a catastrophic event.
Stateless workloads on the other hand don’t contain data that needs to be protected. These workloads typically support on-demand scale-up and scale-down and can be deployed in multiple locations in a scale-out topology behind a load balancer.

To support application level high availability within an Azure Stack system, multiple virtual machines are grouped into an availability set. Applications deployed in an availability set sit behind a load balancer that distributes incoming traffic randomly among multiple virtual machines.
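As a rough provisioning sketch, the availability set pattern described above might be set up with the Azure CLI pointed at an Azure Stack endpoint. This is illustrative only: all resource names are placeholders, the location and domain counts depend on your system, and the commands assume an authenticated CLI session.

```shell
# Sketch only: assumes the Azure CLI is registered against your Azure Stack
# environment; rg1, avset1, lb1, vm1/vm2, and "local" are placeholders.

az group create --name rg1 --location local

# Availability set: spreads VMs across fault and update domains in the scale-unit.
az vm availability-set create \
  --resource-group rg1 --name avset1 \
  --platform-fault-domain-count 2 --platform-update-domain-count 5

# Load balancer that distributes incoming traffic across the VMs.
az network lb create --resource-group rg1 --name lb1 --public-ip-address lb1-ip

# Two VMs placed in the availability set, to be pooled behind the load balancer.
for vm in vm1 vm2; do
  az vm create --resource-group rg1 --name "$vm" \
    --image UbuntuLTS --availability-set avset1 --no-wait
done
```

Because the commands provision real infrastructure, they only run against a live subscription; treat them as a starting point rather than a complete deployment script.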

Across Azure Stack systems, a similar approach is possible with the following differences: the load balancer must be external to both systems or in Azure (for example, Traffic Manager), and availability sets do not span independent Azure Stack systems.

Conclusion

Deploying your IaaS VM-based applications to Azure and Azure Stack requires a comprehensive evaluation of your BC/DR strategy. “Business as usual” is not enough in the context of cloud. For Azure Stack, you need to evaluate the resiliency, availability, and recoverability requirements of the applications separate from the protection schemes for the underlying infrastructure.

You must also reset end-user expectations, starting with the agreed-upon SLAs. Customers onboarding their VMs to Azure Stack will need to accept the SLAs that are possible on Azure Stack. For example, Azure Stack will not meet the stringent zero-data-loss requirements of some mission-critical applications that rely on storage-level synchronous replication between sites. Take the time to identify these requirements early on and build a successful track record of onboarding new applications to Azure Stack with the appropriate level of protection and disaster recovery.

Learn more

Azure Stack Considerations for Business Continuity and Disaster Recovery white paper
Backup and data recovery for Azure Stack with the Infrastructure Backup Service
List of all the BC/DR partners with validated offers for Azure Stack
Azure Backup support for Azure Stack
Azure Site Recovery support for Azure Stack
Understanding architectural patterns and practices for business continuity and disaster recovery on Microsoft Azure Stack
Configure multiple virtual machines in an availability set for redundancy
Availability Sets in Azure Stack
Tutorial: Create and deploy highly available virtual machines

In this blog series

We hope you come back to read future posts in this series. Here are some of our planned upcoming topics:

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform
Start with what you already have
Foundation of Azure Stack IaaS
Do it yourself
Pay for what you use
It takes a team
If you do it often, automate it
Build on the success of others
Journey to PaaS

Source: Azure

Cloud Commercial Communities webinar and podcast newsletter–March 2019

Welcome to the Cloud Commercial Communities monthly webinar and podcast update. Each month the team focuses on core programs, updates, trends, and technologies that Microsoft partners and customers need to know about to increase success using Azure and Dynamics. Make sure you catch a live webinar and participate in the live Q&A. If you miss a session, you can review it on demand. Also consider subscribing to the industry podcasts to keep up to date with industry news.

Upcoming in March 2019

Webinars

Getting Started with High Performance Computing (HPC)
Tuesday, March 19, 2019 10AM Pacific
Commandments of Outstanding Presentation Slides
Thursday, March 21, 2019 9AM Pacific
Launching the New H Series of High-Performance Computing (HPC) Clusters
Tuesday, March 26, 2019 10AM Pacific
Securely Migrating to Azure with F5
Wednesday, March 27, 2019 10AM Pacific

Podcasts

Changing Everything for Retailers
Thursday, March 7, 2019
Growing a culture of innovation at FIS with InnovateIN48
Thursday, March 21, 2019

Recap for February 2019

Webinars

Optimize Your Marketplace Listing with Featured Apps and Services
Tuesday, February 5, 2019 11:00 AM PST
Do you have an application or service listed on Azure Marketplace or AppSource? Looking to optimize your listing to be more discoverable by customers? Discoverability in Azure Marketplace and AppSource can be optimized in a variety of ways. Join this session to learn about how you can gain more visibility for your listings by optimizing content, using keywords, adding trials, and about what matters to Microsoft for Featured Apps and Featured Services on Azure Marketplace and AppSource.
Leveraging Free Azure Sponsorship to Grow Your Business on Azure
Tuesday, February 12, 2019 10:00 AM PST
Microsoft has made significant investments in our partners and customers to help them meet today’s complex business challenges and drive business growth. Through Microsoft Azure Sponsorship, partners and customers can get access to free Azure based on their deployment and technical needs. Azure Sponsorship is available to new and existing Azure customers looking to try new partner solutions, and to partners working to build their solutions on Azure.
Get the Most Out of Azure with Azure Advisor
Tuesday, February 19, 2019 10:00 AM PST
Azure Advisor is a free Azure service that analyzes your configurations and usage and provides personalized recommendations to help you optimize your resources for high availability, security, performance, and cost. In this demo-heavy webinar, you’ll learn how to review and remediate Azure Advisor recommendations so you can stay on top of Azure best practices and get the most out of your Azure investment both for your own organization and your customers.
Incidents, Maintenance, and Health Advisories: Stay Informed with Azure Service Health
Tuesday, February 26, 2019 10:00 AM PST
Azure Service Health is a free Azure service that provides personalized alerts and guidance when Azure service issues affect you. It notifies you, helps you understand the impact to your resources, and keeps you updated as the issue is resolved. It can also help you prepare for planned maintenance and changes that could affect the availability of your resources. In this demo-heavy webinar, you’ll learn how to use Azure Service Health to keep both your organization and your customers informed about Azure service incidents.
Introducing a New Approach to Learning: Microsoft Learn
Wednesday, February 27, 2019 11:00 AM PST
At Microsoft Ignite 2018, Microsoft launched an exciting new learning platform called Microsoft Learn. During this session, we will provide a demo and overview of the platform, the inspiration and vision behind its design, and how we have adapted training to modern learning styles.

Podcasts

The full lifecycle of implementing IoT with PTC
Thursday, Feb 7, 2019
Running an eCommerce system in Azure (and more)
Friday, Feb 22, 2019

Check out recent podcast episodes at the Microsoft industry experiences team podcast page.
Source: Azure