New Deploy to Azure extension for Visual Studio Code

Organizations and teams that adopt DevOps methodologies consistently see improvements in their ability to deliver high-quality code, faster release cycles, and, ultimately, higher levels of satisfaction for their customers, whether internal or external. Continuous Integration and Continuous Delivery (CI/CD) is one of the pillars of DevOps: automatically building, testing, and deploying applications. Setting up a full CI/CD pipeline, however, can be a complex task.

Today, we’re sharing the launch of the Deploy to Azure extension for Visual Studio Code. This new extension allows developers working in Visual Studio Code to seamlessly create, build, and deploy their apps in a continuous manner to the cloud, without leaving the editor.

Deploy to Azure extension

The Deploy to Azure extension works with both GitHub Actions and Azure Pipelines. It helps developers by auto-generating a CI/CD pipeline definition that takes care of building your app and deploying it to Azure. You can use the Deploy to Azure extension to deploy application code from your local system, Azure Repos, or GitHub. We plan to expand the scope to other Git repositories in the future.

You can use this extension to set up a CI/CD pipeline that runs on every code push. It gives you an auto-generated, fully customizable CI/CD pipeline, defined in a YAML file formatted for either GitHub Actions or Azure Pipelines. The YAML file is pre-populated with build and release tasks, which developers can edit as needed.
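
To give a concrete sense of what such a file looks like, here is a hedged sketch of a GitHub Actions workflow in the style the extension produces for a Node.js app targeting Azure App Service. The app name, secret name, branch, and versions are placeholder assumptions; the file the extension actually generates for your project will differ:

```yaml
# Hypothetical GitHub Actions workflow, similar in shape to what the
# extension generates. App name, secret name, and versions are placeholders.
name: Build and deploy Node.js app to Azure Web App

on:
  push:
    branches: [ master ]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1

      - name: Set up Node.js
        uses: actions/setup-node@v1
        with:
          node-version: '12.x'

      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
          npm test --if-present

      - name: Deploy to Azure Web App
        uses: azure/webapps-deploy@v1
        with:
          app-name: my-node-app            # placeholder app name
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          package: .
```

The Azure Pipelines variant uses the same build steps expressed as pipeline tasks, with a task such as AzureWebApp handling the deployment.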

As part of the workflow, the extension also sets up Azure Pipelines or GitHub with the relevant Azure-related and repository-related configurations, without you needing to worry about the plumbing between the systems.

Installation and usage

The Deploy to Azure extension can be downloaded for free from the Visual Studio Code Marketplace. After installing it, you can invoke it from the Command Palette (Ctrl + Shift + P or Cmd + Shift + P) > Deploy to Azure: Configure Pipeline.

Once you run the pipeline creation workflow, the extension will inspect your application’s code and generate a pipeline optimized for your project.

In this first release, the Deploy to Azure extension in Visual Studio Code supports generating pipelines to deploy Node.js-based apps to Azure App Service or Azure Functions App, as well as any containerized application (with a Dockerfile) to Azure Kubernetes Service.

We’re working on adding support for creating workflows for other languages, starting with Python, and for other Azure resources. We will also roll out support for more Git repository providers: in addition to GitHub and Azure Repos, which are available today, we’re working on supporting source code on Bitbucket and other locations.

Get started

You can get started today by installing the extension. Then, start adding CI/CD pipelines to your apps and have them deployed to the cloud continuously.

Please let us know your thoughts on this extension and how it helps your workflows, and anything we can do to improve your experience. You can connect with us on the extension’s project page on GitHub.
Source: Azure

Unified network monitoring with Connection Monitor now in preview

Azure Network Watcher’s new and improved Connection Monitor now provides unified end-to-end connection monitoring capabilities for hybrid and Azure deployments. Users can now use the same solution to monitor connectivity for on-premises, Azure, and multi-cloud setups. In this preview phase, the solution brings together the best of two key capabilities—Network Watcher's Connection Monitor and Network Performance Monitor's (NPM) Service Connectivity Monitor. Check out the documentation and start using Connection Monitor to check connectivity in your network.

The monitoring question

Customers have long stressed the need for unified connection monitoring in hybrid deployments, where complex applications transact across Azure, on-premises environments, and other public services to deliver business-critical functionality. These challenges escalate in multi-cloud environments. Monitoring teams wrestle with basic questions, including:

Which monitoring solution should I use in these complex setups?
Do I need different monitoring solutions for on-premises and Azure or any other clouds?
Where does my data go and how do I correlate data from multiple sources?
How do I get the fastest alerts when things go wrong in my network?

Connection Monitor in preview

With the new Connection Monitor, you can now configure both Azure and non-Azure virtual machines and hosts for monitoring connectivity to global endpoints from a single console. You can set up Connection Monitor and create multiple test groups for various use cases including connectivity between Azure regions, connectivity to Office 365, and connectivity between app and database tiers. With the ability to add multiple sources and destinations in one test group, configuring monitoring gets much easier. You also benefit from an aggregated view of your network parameters, with the ability to drill down to individual links at the time of troubleshooting.

You can monitor loss and latency of network connections both within Azure and between Azure and external destinations, and view the topology to localize issues. The solution identifies the top five tests in your Connection Monitor, test groups, sources, and destinations, then highlights potential problem tests. For Azure resources, issues with your hops are shown in the topology.

Alerts and data storage

Monitoring data is stored in both Azure Monitor as metrics and in Log Analytics workspaces. You can now set up fast, metrics-based alerts to react to issues expeditiously. To build additional correlations on your historical data, use Log Analytics queries.

Other benefits

Single console for configuring and monitoring connectivity and network quality from Azure and on-premises virtual machines and hosts.
Monitor multiple endpoints within and across Azure regions, on-premises sites, and global service locations.
Higher and configurable probing frequencies.
More protocols supported to give better visibility into network performance.
Cross-region, cross-workspace monitoring.
Access to historical monitoring data retained in Log Analytics.
Rich user experience.
Automation through PowerShell and CLI.

Start monitoring today

The new Connection Monitor feature is available at no charge during the preview. General availability pricing for Connection Monitor will be published soon on the pricing page. For more details, please visit the Connection Monitor (Preview) documentation.

We're here for you

We would love to hear from you. Send us your suggestions via the User Voice page.
Source: Azure

New features for Form Recognizer now available

Extracting text and structure information from documents is a core enabling technology for robotic process automation and workflow automation. Since its preview release in May 2019, Azure Form Recognizer has attracted thousands of customers to extract text, key and value pairs, and tables from documents to accelerate their business processes.

Today, we're sharing the new Form Recognizer features that are now available.

Updates for Azure Form Recognizer

The Form Recognizer March release is a major update that includes many new features our customers have asked for:

Customization: The service now supports training with and without labels, which makes it easier for customers to reliably extract valuable information from their forms. The APIs have also been redesigned as long-running operations to improve support for larger customer data sets. Automatic detection of key-value pairs and table extraction have been enhanced and improved. A new sample labeling tool, available as a container with a web-based UX, will help customers label data more efficiently and extract the values of interest.
  

Form Recognizer Custom: Train with Labels, Form Recognizer Sample Labeling Tool.

In addition, Form Recognizer Sample Labeling Tool is now available as an open source project located here. You can integrate it within your solutions and make customer-specific changes to meet your needs.

Layout: We released a new Layout API that extracts text and tables from documents, with high-accuracy optical character recognition (OCR) results even on small text. It also extracts tables from arbitrary documents, enabling a very popular application scenario for document extraction.
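
The Layout API returns its results as JSON, with each extracted table described cell by cell. As a rough illustration of working with that output, here is a sketch that flattens a table object into row-major Python lists; the field names ("rows", "columns", "cells", "rowIndex", and so on) are assumptions in the style of the v2.0 preview schema, so verify them against the REST reference before relying on them:

```python
# Sketch: flatten a Layout-API-style table object into a list of rows.
# Field names are assumptions modeled on the preview schema, not gospel.

def table_to_rows(table):
    """Convert a cell-indexed table object into row-major lists of text."""
    rows = [[""] * table["columns"] for _ in range(table["rows"])]
    for cell in table["cells"]:
        rows[cell["rowIndex"]][cell["columnIndex"]] = cell["text"]
    return rows

# Minimal example payload in the assumed shape:
sample_table = {
    "rows": 2,
    "columns": 2,
    "cells": [
        {"rowIndex": 0, "columnIndex": 0, "text": "Item"},
        {"rowIndex": 0, "columnIndex": 1, "text": "Price"},
        {"rowIndex": 1, "columnIndex": 0, "text": "Coffee"},
        {"rowIndex": 1, "columnIndex": 1, "text": "2.50"},
    ],
}

print(table_to_rows(sample_table))  # → [['Item', 'Price'], ['Coffee', '2.50']]
```

Once flattened this way, the rows drop straight into a CSV writer or a data frame for downstream processing.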

Layout text and table extraction: Table extracted with 5 columns and 30 rows.

Pre-Built Receipt: The new version features major accuracy improvements. Error rates for certain fields like merchant name, phone number, transaction time, and subtotal have been reduced by more than 30 percent. We also added support for recognizing tips, receipt type, and line items, as well as providing confidence values.
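
Because the receipt results now carry confidence values, a common pattern is to accept high-confidence fields automatically and route the rest to human review. A small sketch of that filter follows; the field names and the `{"text": ..., "confidence": ...}` shape are illustrative assumptions, not the exact API schema:

```python
# Sketch: keep only receipt fields above a confidence threshold.
# Field names and value shape are illustrative assumptions.

def confident_fields(fields, threshold=0.8):
    """Return {field name: text} for fields at or above the threshold."""
    return {name: f["text"] for name, f in fields.items()
            if f["confidence"] >= threshold}

sample = {
    "MerchantName": {"text": "Contoso Coffee", "confidence": 0.97},
    "TransactionTime": {"text": "13:45", "confidence": 0.52},
    "Total": {"text": "12.40", "confidence": 0.99},
}

print(confident_fields(sample))
# → {'MerchantName': 'Contoso Coffee', 'Total': '12.40'}
```

Fields that fall below the threshold (here, the low-confidence transaction time) can then be queued for manual verification instead of flowing into the automated process.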
   

Pre-built Receipt: Key fields extracted from itemized sales receipt.

Learn more about what’s new in Form Recognizer here.

Our customers

Acumatica and Zelros are customers using Azure Form Recognizer and have shared their experiences with Microsoft.

“By automating expense reporting with Form Recognizer, we can eliminate almost all human errors—which really helps accounting teams streamline approvals and reimbursement.” Ajoy Krishnamoorthy, Vice President of Platform and Technology, Acumatica.

Learn more in our case study with Acumatica here.

“Zelros Documents2Insights leverages Form Recognizer to speed up the insurers' and bancassurers’ underwriting process. Identity card, proof of residence, vehicle registration document, driving license, and more. Speeding up and simplifying this business process is key to improve the experience of policyholders. Zelros Documents2Insights automates the underwriting processes, based on the Cognitive Services Computer Vision API and built on top of the Form Recognizer feature, the solution automatically reads and analyzes documents. It also cross-references information in order to correct and lower the error rate, while complying with regulatory requirements. With this, we are able to process documents and subscriptions faster.” Fabien Vauchelles, CTO of Zelros

Getting started

To get started, please log in to the Azure portal to create a Form Recognizer resource. Once your resource is created, you can extract data from your forms by following one of our quickstarts:

Custom: Train a custom model for your forms to extract text, key value pairs, and tables.

Train without labels:

Quickstart: Train a Form Recognizer model and extract form data by using the REST API with cURL.
Quickstart: Train a Form Recognizer model and extract form data by using the REST API with Python.

Train with labels:

Quickstart: Train a Form Recognizer model with labels using the sample labeling tool.
Quickstart: Train a Form Recognizer model with labels using REST API and Python.

Prebuilt receipts: Extract data from USA sales receipts.

Quickstart: Extract receipt data using the REST API with cURL.
Quickstart: Extract receipt data using the REST API with Python.

Layout: Extract text and table structure (row and column numbers) from your documents.

Quickstart: Extract layout data using the REST API with Python.

Source: Azure

Learn new strategies and technologies to optimize your hybrid cloud

IT environments are becoming more complex as organizations combine on-premises, cloud, and edge infrastructures. A flexible, hybrid IT environment offers major benefits, such as the ability to create new business value while also meeting local and industry compliance requirements, but the headaches of managing and securing these environments are hard to ignore. With a solid strategy and the right tools, however, there’s enormous potential for innovation and growth in a hybrid environment.

This is why we're sharing the upcoming one-hour Azure Hybrid Virtual Event on Tuesday, March 31, 2020 starting at 8:00 AM Pacific Time. At this free online event, you’ll get to watch demos, learn hybrid best practices, and find out which strategies work—and which don’t—from two real Azure hybrid customers: online retailer ASOS and professional services company KPMG. Julia White, Corporate Vice President of Microsoft Azure Marketing, will kick off the event with a keynote on current and future hybrid cloud trends, followed by some great sessions:

Insights from Bain & Company—building a successful hybrid cloud: Hear from Bill Radzevych, a partner at Bain & Company, about market trends and customer insights with digital transformation and cloud adoption. 
Seamlessly manage and govern resources: Learn how to seamlessly manage, govern, and secure resources across on-premises, multicloud, and the edge from a single control plane.
Bring cloud services to any infrastructure: Learn how to bring cloud services to your existing infrastructure to take advantage of cloud innovation everywhere and discuss real-world examples from companies like KPMG.
Modernize your datacenter: Learn how to modernize virtualized apps or bring cloud to your datacenter while meeting regulatory and data sovereignty requirements.
Bring AI to the edge: Learn about different ways to take advantage of edge computing to create new business opportunities.
Secure your organization: Hear from George Mudie, Chief Information Security Officer from ASOS, on how Azure Sentinel empowers their SecOps to improve organizational security and efficiency.

 

See you there

Azure Hybrid Virtual Event: Tuesday, March 31, 2020 from 8:00 AM to 9:00 AM Pacific Time.

Delivered in partnership with Intel.

Source: Azure

Announcing the general availability of Azure Monitor for virtual machines

Today we're announcing the general availability of Azure Monitor for virtual machines (VMs), which provides an in-depth view of VM performance trends and dependencies. You can access Azure Monitor for VMs from the Azure VM resource blade to view details about a single VM, from the Azure Virtual Machine Scale Sets (VMSS) resource blade to view details about a single VM scale set, and from Azure Monitor to understand compute issues at scale.

Azure Monitor for VMs brings together key monitoring data about your Windows and Linux VMs, allowing you to:

Troubleshoot guest-level performance issues and understand trends in VM resource utilization.
Determine whether back-end VM dependencies are connected properly and which clients of a VM may be affected by any issues the VM is having.
Discover VM hotspots at scale based on resource utilization, connection metrics, performance trends, and alerts.

Performance

Performance views are powered by Log Analytics, and offer powerful aggregation and filtering capabilities including “Top N” VM sorting and searching across subscriptions and regions, aggregation of VM metrics (such as average memory) across all VMs in a resource group across regions, percentiles of performance values over time, and breakdown and selection of VM Scale Set instances.

It can be challenging to monitor thousands of VMs. Our performance views were created to address this problem. You can use them to figure out which VMs are resource constrained, which ones are having logical disk or memory consumption issues, or to get performance diagnostics.

Maps

Azure Monitor for VMs includes dependency maps powered by the Service Map dependency agent extension. Maps deliver an Azure-centric user experience, with VM resource blade integration, Azure metadata, and dependency maps for Resource Groups and Subscriptions. Maps show how VMs and processes are interacting and can identify dependencies on third party services. Azure Monitor for VMs also monitors connection failures, live connection counts, network bytes sent and received by process, and service-level latency.

In addition to the visual experience and group-level mapping in the user experience, you can query the data sets in Log Analytics to alert on spikes in network traffic from selected workloads, query at scale for failed dependencies, and plan Azure migrations from on-premises VMs by analyzing connections over weeks or months. To assist in this analysis we offer several workbooks that provide tabular views into this rich network data set.

 

Getting started

To get started with an Azure resource, go to the resource blade for your VM or VM scale set and click on Insights in the Monitoring section. When you click Enable, you’ll be prompted to pick an existing Log Analytics workspace or create one.

Once you’re comfortable with the capabilities on a few VMs, you can view VMs at scale in Azure Monitor under Virtual Machines, and onboard entire resource groups and subscriptions using our Get Started page, Azure Policy, or PowerShell.

Check out our full documentation to get more details. Pricing is based on data ingestion and retention to your Log Analytics workspace. We’d love to hear what you like and don’t like about Azure Monitor for VMs, and where you’d like us to take it. Please click Provide Feedback in the user experience to share your thoughts.
Source: Azure

Azure Container Registry: Preview of customer-managed keys

The Azure Container Registry team is sharing the preview of customer-managed keys for data encryption at rest. Azure Container Registry already encrypts data at rest using service-managed keys. With the introduction of customer-managed keys you can supplement default encryption with an additional encryption layer using keys that you create and manage in Azure Key Vault. This additional encryption should help you meet your company’s regulatory or compliance needs.

Azure Container Registry encryption is supported through integration with Azure Key Vault. You can create your own encryption keys and store them in a key vault, or you can use the Azure Key Vault API to generate encryption keys. With Azure Key Vault, you can also audit key usage.

During preview, customer-managed keys can only be enabled while creating a new registry in the Premium SKU. Enabling and disabling the feature on an existing registry will be available in an upcoming release.

With this release, you can try out the following scenarios on a registry with customer-managed keys enabled:

Rotate the encryption keys using the Azure portal or the Azure command-line interface (CLI).
Use geo-replicated registries and virtual network integration, both of which are supported.
Enforce encryption for your registries through the built-in Azure Policy.

You can try out this feature using the Azure portal or the Azure CLI. For details, please see the documentation.

Availability and feedback

The Azure portal and CLI experiences for customer-managed keys in Azure Container Registry are now in preview. As always, we’d love to hear your feedback on existing features as well as ideas for our product roadmap.

Roadmap: For visibility into our planned work.

UserVoice: To vote for existing requests or create a new request.

Issues: To view existing bugs and issues or log new ones.

ACR documents: For Azure Container Registry tutorials and documentation.
Source: Azure

Power your Azure GPU workstations with flexible GPU partitioning

Today we're sharing the general availability of NVv4 virtual machines in South Central US, East US, and West Europe regions, with additional regions planned in the coming months. With NVv4, Azure is the first public cloud to offer GPU partitioning built on industry-standard SR-IOV technology.

NVv4 VMs feature AMD’s Radeon Instinct MI25 GPU, up to 32 AMD EPYC™ 7002-series vCPUs with clock frequencies up to 3.3 GHz, 112 GB of RAM, 480 MB of L3 cache, and simultaneous multithreading (SMT).

Pay-As-You-Go pricing for Windows deployments is available now. One- and three-year Reserved Instance and Spot pricing for NVv4 VMs will be available on April 1. Support for Linux will be available soon.

Affordable, modern GPU powered virtual desktops in the cloud

As enterprises look to the cloud to provide virtual desktops and workstations securely to a highly mobile workforce, they face the significant challenge of managing cost and performance while meeting user experience expectations. Traditionally, public clouds offered virtual machines with one or more GPUs, which are best suited to the most GPU-intensive workloads that require the full power and resources of a GPU. For the regular knowledge-worker profile, though, a full GPU can be overkill. For some of these customers, multi-session virtual desktops like those offered by Windows Virtual Desktop fit the bill by letting concurrent sessions share the GPU dynamically. However, some VDI customers need a dedicated virtual machine (VM) per user, either for performance or isolation reasons. For these kinds of workloads, customers are looking for a scale-down option: the ability to choose the right GPU size to meet their requirements.

Our customers need cost-effective VM options, sized appropriately, with dedicated GPU resources for each user, ranging from office workers running productivity apps to engineering workstations running GPU-powered workloads such as CAD, gaming, and simulation.

“With the new AMD-powered Workspot cloud desktops on Azure, we now have several perfectly sized cloud workstations for our different workloads. We’ve found the new entry level cloud workstation, using a fraction of the AMD GPU, is just right for our users running Microsoft Office 365 productivity tools and Adobe design tools (Photoshop, Illustrator and InDesign). This fills in an additional much-needed point on the price/performance curve, which allows us to move even more users to the AMD-powered Workspot cloud desktops on Azure.” Andy Knauf, CIO, Mead & Hunt

Pick the right GPU virtual machine size for the VDI user profile

The NVv4 virtual machine series is designed specifically for the cloud virtual desktop infrastructure (VDI) and the desktop-as-a-service (DaaS) markets. We wanted to bring GPU processing power to the masses by putting a slice of the GPU in every desktop in the cloud. NVv4 enables enterprises to provide modern desktops in the cloud, with the ideal balance of price and performance for their workloads.

The following diagram shows how the different VM sizes align with the different VDI user profiles and requirements.

“Based on the application requirements of each engineer, we can dedicate all or a fraction of the AMD GPU to their Workspot workstation on Azure. This finer resolution of control gives us the financial edge we need to move more people to Workspot cloud desktops on Azure and increase our overall productivity.”  Eric Quinn, CTO, C&S Companies.

Predictable performance and security with hardware partitioning of the GPU

In Azure, the security of the customer's workload is always a top priority. SR-IOV based GPU partitioning provides a strong, hardware-backed security boundary with predictable performance for each virtual machine. We partition a single AMD Radeon Instinct MI25 GPU and allocate it to up to eight virtual machines. Each virtual machine can access only the GPU resources dedicated to it, and the secure hardware partitioning prevents unauthorized access by other VMs.

“The Azure NVv4 VM series offers ArcGIS Pro users an exceptional graphical user experience. The four NVv4 sizes provide flexibility to accommodate workloads ranging from light GIS editing to 3D manipulation.” Ryan Danzey, Sr. Product Engineer – Performance, ESRI ArcGIS

Designed to work with Windows Virtual Desktop and VDI partners you use today

Customers in the VDI segment have many choices for remote protocol and infrastructure management. We worked closely with the key partners to ensure support for NVv4 virtual machines.

Windows Virtual Desktop supports the new NVv4 virtual machines with native WVD deployments that use RDP as well as solutions delivered by Citrix and VMware, our approved providers.

NVv4 virtual machines support Microsoft Remote Desktop Protocol (RDP), Teradici PCoIP, and HDX 3D Pro. The graphics API support covers DirectX 9 through 12, OpenGL 4.6, and Vulkan 1.1.

Windows Virtual Desktop, Citrix, Teradici, Workspot, and Nutanix Frame are some of the Azure VDI partners who have extensively validated the new NVv4 virtual machines and are ready to offer it to their customers.

"This is exciting news for our Citrix customers who are delivering Citrix Workspaces from the cloud. As we see more customers migrate to the cloud, the release of the NVv4 instance ensures that customers have more options to deliver graphically accelerated Citrix workloads on Azure while optimizing costs." – Carisa Stringer, Sr Director Workspace Services Product Marketing

"The new Azure NVv4 series will give our Xi Frame customers a wider range of GPU options for their virtual desktop and application streaming needs. By enabling virtualized GPUs in the cloud, Azure now delivers a whole new level of value that unlocks a much broader set of use cases." Carsten Puls, Sr. Director of Xi Frame at Nutanix.

“The flexibility that Azure NVv4 provides to share and access GPU resources as needed is a valuable feature that we see will benefit many Teradici customers. We are excited to be working with Microsoft and AMD to enable more flexible, cost-effective GPU options for virtual desktop and virtual workstation use cases such as AEC.”  Ziad Lammam, Vice President of Product Management at Teradici

“With the new AMD-powered Workspot cloud workstations and the use of industry leading cloud offerings in Azure, ASTI and Workspot are positioned to address the needs of the SMB market for Virtual Desktop Infrastructure in the AEC industry. These new AMD-powered systems will provide the computing power and graphics power of enterprise class systems, that allow an organization to spend less time managing their resources and more time completing projects.  They provide a balance of computing power and graphics performance without costly over provisioning.” Doug Dahlberg, Director of IT Operations, Applied Software (ASTI) – Workspot and Microsoft Partner

Next steps

For more information on topics covered here, see the following documentation:

NVv4 virtual machine documentation.
Virtual machine pricing.
AMD EPYC™ 7002-series.

Source: Azure

Microsoft named a leader in The Forrester New Wave: Functions-as-a-Service Platforms

We’re excited to share that Forrester has named Microsoft a leader in the inaugural report, The Forrester New Wave™: Functions-As-A-Service Platforms, Q1 2020, based on its evaluation of Azure Functions and the integrated development tooling. We believe Forrester’s findings reflect the strong momentum of event-driven applications in Azure and our vision, crediting Azure Functions with a “robust programming model and integration capabilities.” They also confirm Microsoft’s commitment to being the best technology partner for you, as customers called out the responsiveness of Microsoft Azure's "engineering and support teams as key to their success.”

Best-in-class development experience

Azure Functions is an event-driven serverless compute platform with a programming model based on triggers and bindings for accelerated and simplified applications development. Fully integrated with other Azure services and development tools, its end-to-end development experience allows you to build and debug your functions locally on any major platform (Windows, macOS, and Linux), as well as deploy and monitor them in the cloud. You can even deploy the exact same functions code to other environments, such as your own infrastructure or your Kubernetes cluster, enabling seamless hybrid deployments.
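
As an illustration of the triggers-and-bindings model, here is a hedged sketch of a function.json that wires a queue trigger to a blob output binding; the queue name, blob path, and connection setting name are placeholder assumptions for this example:

```json
{
  "bindings": [
    {
      "name": "msg",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "orders",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputBlob",
      "type": "blob",
      "direction": "out",
      "path": "processed/{rand-guid}.json",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

With a configuration like this, the function body deals only with the `msg` input and `outputBlob` output; the runtime handles polling the queue and writing the blob.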

In their report, Forrester noted that the Azure Functions programming model “supports a multitude of programming languages with extensive integration options, … and bindings for Azure Event Hub, and Azure Event Grid helps developers build event-driven microservices.”

Enterprise-grade FaaS platform

Enterprise customers like Chipotle love the velocity and productivity that event-driven architectures bring to developing applications. We are committed to building great experiences that enable the modernization of those enterprise workloads, and the Forrester report states that “strategic adopters of Azure will find that Azure Functions helps integrate Microsoft’s fast-expanding array of cloud services”, making that transformation journey easier. Some of our latest innovations are focused on the needs of enterprise customers, such as the Premium plan to host functions without cold-start for low latency workloads or PowerShell support enabling serverless automation scenarios for cloud and hybrid deployments.

In their report, Forrester also recognized Azure Functions as “a good fit for companies that need stateful functions” thanks to Durable Functions, an extension to the Azure Functions runtime that brings stateful and orchestration capabilities to serverless functions. Durable Functions stands alone in the serverless space, providing stateful functions and a way to define serverless workflows programmatically. Forrester mentioned specifically in the report that “clients modernizing enterprise apps will find that Durable Functions offers an alternative to refactoring existing business logic into bite-size stateless chunks."
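
The technique behind Durable Functions is event-sourced replay: the orchestrator function is deterministic, and completed activity results are recorded in a history so that re-running the orchestrator reuses recorded results instead of re-executing work. This is a conceptual, from-scratch sketch of that idea in plain Python, not the actual azure-functions-durable API; the runner, activity names, and history format are all invented for illustration:

```python
# Conceptual sketch of the replay technique behind Durable Functions.
# Not the real azure-functions-durable API: the orchestrator is a plain
# generator, and "activities" are ordinary functions whose results are
# recorded in a history list so replays are deterministic and cheap.

def run_orchestration(orchestrator, activities, history):
    """Drive the orchestrator generator, replaying recorded results first."""
    gen = orchestrator()
    step = 0
    try:
        call = next(gen)                        # (activity_name, arg)
        while True:
            if step < len(history):
                result = history[step]          # replay: reuse recorded result
            else:
                name, arg = call
                result = activities[name](arg)  # execute activity and record
                history.append(result)
            step += 1
            call = gen.send(result)
    except StopIteration as done:
        return done.value

# An orchestrator chains activities; each yield is a durable checkpoint.
def orchestrator():
    greeting = yield ("say_hello", "Tokyo")
    shout = yield ("shout", greeting)
    return shout

activities = {
    "say_hello": lambda city: f"Hello {city}",
    "shout": lambda s: s.upper(),
}

history = []
print(run_orchestration(orchestrator, activities, history))  # → HELLO TOKYO
# Re-running with the same history replays to the same result without
# re-executing any activity.
print(run_orchestration(orchestrator, activities, history))  # → HELLO TOKYO
```

The real extension layers scheduling, persistence, and fan-out/fan-in patterns on top of this replay idea, which is why orchestrator code must stay deterministic.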

Read the full Forrester report and learn more about Azure Functions today.

If you have any feedback or questions, please reach us on Twitter, GitHub, StackOverflow or UserVoice.
Source: Azure

Plan migration of physical servers using Azure Migrate

At Microsoft Ignite, we announced new Microsoft Azure Migrate assessment capabilities that further simplify migration planning. In this post, I will talk about how you can plan the migration of physical servers. Using this feature, you can also plan the migration of virtual machines running on any hypervisor or cloud. You can get started right away with these features by creating an Azure Migrate project or using an existing project.

Previously, Azure Migrate: Server Assessment only supported VMware and Hyper-V virtual machine assessments for migration to Azure. At Ignite 2019, we added physical server support for assessment features like Azure suitability analysis, migration cost planning, performance-based rightsizing, and application dependency analysis. You can now plan at scale, assessing up to 35,000 physical servers in one Azure Migrate project. If you use VMware or Hyper-V as well, you can discover and assess both physical and virtual servers in the same project. You can create groups of servers, assess by group, and refine the groups further using application dependency information.

While this feature is in preview, it is covered by customer support and can be used for production workloads. Let us look at how the assessment helps you plan migration.

Azure suitability analysis

The assessment checks Azure support for each server discovered and determines whether the server can be migrated as-is to Azure. If incompatibilities are found, remediation guidance is automatically provided. You can customize your assessment by changing its properties and recomputing the assessment. Among other customizations, you can choose a specific virtual machine series and specify the uptime of the workloads you will run in Azure.

Cost estimation and sizing

Assessment also provides detailed cost estimates. Performance-based rightsizing assessments can be used to optimize cost: the performance data of your on-premises server is used to recommend a suitable Azure virtual machine and disk SKU. This helps you optimize cost and right-size as you migrate servers that might be over-provisioned in your on-premises data center. You can apply subscription offers and Reserved Instance pricing to the cost estimates.
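
Azure Migrate's actual sizing logic isn't spelled out in this post, but the core idea of performance-based rightsizing can be sketched: take a high percentile of observed utilization, add headroom, and pick the smallest size that fits. The size catalog and the comfort factor below are made-up illustrative assumptions, not Azure Migrate's real tables or algorithm:

```python
# Illustrative sketch of percentile-based rightsizing; not Azure Migrate's
# actual algorithm. The size catalog and comfort factor are assumptions.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ordered = sorted(samples)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

# Hypothetical catalog: (name, vCPUs, memory GiB), smallest first.
SIZES = [("Small", 2, 8), ("Medium", 4, 16), ("Large", 8, 32)]

def rightsize(cpu_core_samples, mem_gib_samples, comfort=1.3):
    """Pick the smallest size covering the 95th percentile plus headroom."""
    need_cpu = percentile(cpu_core_samples, 95) * comfort
    need_mem = percentile(mem_gib_samples, 95) * comfort
    for name, vcpus, mem in SIZES:
        if vcpus >= need_cpu and mem >= need_mem:
            return name
    return None  # nothing fits; flag the server for manual review

# A server that peaks around 2.5 cores and 10 GiB lands on "Medium".
print(rightsize([1.0, 1.8, 2.5, 2.2], [6, 8, 10, 9]))  # → Medium
```

Using a high percentile rather than the peak ignores one-off spikes, while the comfort factor leaves headroom for growth; that is the intuition behind sizing from performance history instead of on-premises allocation.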

Dependency analysis

Once you have established cost estimates and migration readiness, you can plan your migration phases. Using the dependency analysis feature, you can understand which workloads are interdependent and need to be migrated together. This also helps ensure you do not leave critical elements behind on-premises. You can visualize the dependencies in a map or extract the dependency data in a tabular format. You can divide your servers into groups and refine the groups for migration by reviewing the dependencies.
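The "migrate together" idea amounts to clustering servers by their observed connections. A minimal sketch, assuming the dependency data has been exported as a list of (source, target) pairs: servers in the same connected component form one candidate migration group.

```python
# Sketch: derive candidate migration groups from dependency data by
# finding connected components - servers that talk to each other
# are grouped to move together.
from collections import defaultdict

def migration_groups(servers, dependencies):
    """dependencies: iterable of (source, target) connection pairs."""
    adj = defaultdict(set)
    for a, b in dependencies:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for s in servers:
        if s in seen:
            continue
        stack, group = [s], []
        seen.add(s)
        while stack:  # depth-first traversal of one component
            cur = stack.pop()
            group.append(cur)
            for nxt in adj[cur]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        groups.append(sorted(group))
    return groups
```

A standalone batch server with no dependencies ends up in its own group and can be migrated in any phase, while a web/app/database chain surfaces as a single unit.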

Assess your physical servers in four simple steps

Create an Azure Migrate project and add the Server Assessment solution to the project.
Set up the Azure Migrate appliance and start discovery of your servers. To set up discovery, you need the server names or IP addresses. Each appliance supports discovery of up to 250 servers; you can set up more than one appliance if required.
Once you have successfully set up discovery, create assessments and review the assessment reports.
Use the application dependency analysis features to create and refine server groups to phase your migration.
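Because each appliance discovers at most 250 servers, a large inventory has to be split across several appliances before step two. A trivial sketch of that partitioning (the capacity constant comes from the limit stated above; everything else is illustrative):

```python
# Sketch: split a server inventory into per-appliance discovery batches.
# Each Azure Migrate appliance supports discovery of up to 250 servers.

APPLIANCE_CAPACITY = 250

def appliance_batches(server_addresses, capacity=APPLIANCE_CAPACITY):
    """Split a list of server names/IPs into per-appliance batches."""
    return [server_addresses[i:i + capacity]
            for i in range(0, len(server_addresses), capacity)]
```

For a 600-server estate this yields three batches (250, 250, and 100 servers), i.e. three appliances.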

When you are ready to migrate the servers to Azure, you can use Server Migration to carry out the migration. You can read more about migrating physical servers here. In the coming months, we will add support for application discovery and agentless dependency analysis on physical servers as well.

Note that the inventory metadata gathered is persisted in the geography you select while creating the project. You can select a geography of your choice. Server Assessment is available today in Asia Pacific, Australia, Brazil, Canada, Europe, France, India, Japan, Korea, United Kingdom, and United States geographies.

Get started right away by creating an Azure Migrate project. In the upcoming blogs, we will talk about import-based assessments, application discovery, and agentless dependency analysis.

Resources to get started

Tutorial on how to assess physical servers using Azure Migrate: Server Assessment.
Prerequisites for assessment of physical servers
Guide on how to plan an assessment for a large-scale environment. Each appliance supports discovery of 250 servers; you can discover more servers by adding more appliances.
Tutorial on how to migrate physical servers using Azure Migrate: Server Migration.

Source: Azure

IoT Signals energy report: Embracing transparent, affordable, and sustainable energy

The increased use of renewables, resiliency challenges, and sustainability concerns are all disrupting the energy industry today. New technologies are accelerating the way we source, store, and distribute energy. With IoT, we can gain new insights about the physical world that enable us to create more efficient processes, reduce energy waste, and track specific consumption. This is a great opportunity for IoT to support power and utilities (P&U) companies across grid assets, electric vehicles, energy optimization, load balancing, and emissions monitoring.

We've recently published a new IoT Signals report focused on the P&U industry. The report provides an industry pulse on the state of IoT adoption to help inform us how to better serve our partners and customers, as well as help energy companies develop their own IoT strategies. We surveyed global decision-makers in P&U organizations to deliver an industry-level view of the IoT ecosystem, including adoption rates, related technology trends, challenges, and benefits of IoT.

The study found that while IoT is almost universally adopted in P&U, it comes with complexity. Companies are commonly deploying IoT to improve the efficiency of operations and employee productivity, but can be challenged by skills and knowledge shortages, privacy and security concerns, and timing and deployment issues. To summarize the findings:

Top priorities and use cases for IoT in power and utilities

Optimizing processes through automation is critical for P&U IoT use. Top IoT use cases in P&U include automation-heavy processes such as smart grid automation, energy optimization and load balancing, smart metering, and predictive load forecasting. In support of this, artificial intelligence (AI) is often a component of energy IoT solutions, and the two are often budgeted together. Almost all adopters have either already integrated AI into an IoT solution or are considering integration.
Using IoT to improve both data security and employee safety is a top priority. Almost half of decision-makers we talked to use IoT to make their IT practices more secure. Another third are implementing IoT to make their workplaces safer, as well as improve the safety of their employees.
P&U companies also leverage IoT to secure their physical assets. Many P&U companies use IoT to secure various aspects of their operations through equipment management and infrastructure maintenance.
The future is bright: IoT adoption will continue to focus on automation, with growth in use cases related to optimizing energy and creating more efficient maintenance systems.

Today, customers around the world are telling us they are heavily investing in four common use cases for IoT in the energy sector:

Grid asset maintenance

Visualize your grid’s topology, gather data from grid assets, and define rules to trigger alerts. Use these insights to predict maintenance and provide more safety oversight. Prevent failures and avoid critical downtime by monitoring the performance and condition of your equipment.
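The "define rules to trigger alerts" part of grid asset maintenance can be sketched as simple threshold rules evaluated against incoming telemetry. The rule structure, metric names, and thresholds below are hypothetical, not tied to any specific Azure IoT service.

```python
# Sketch: evaluate simple threshold rules against telemetry readings
# from grid assets. Metric names and thresholds are hypothetical.

ALERT_RULES = [
    {"metric": "transformer_temp_c", "op": "gt", "threshold": 90,
     "message": "Transformer overheating"},
    {"metric": "line_voltage_v", "op": "lt", "threshold": 210,
     "message": "Undervoltage on feeder"},
]

def evaluate(reading, rules=ALERT_RULES):
    """Return alert messages triggered by one telemetry reading (a dict)."""
    alerts = []
    for rule in rules:
        value = reading.get(rule["metric"])
        if value is None:
            continue  # this reading does not carry the metric
        if rule["op"] == "gt" and value > rule["threshold"]:
            alerts.append(rule["message"])
        elif rule["op"] == "lt" and value < rule["threshold"]:
            alerts.append(rule["message"])
    return alerts
```

In a real deployment, rules like these would typically run in a stream-processing or rules-engine service so that alerts feed maintenance workflows rather than a local list.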

Energy optimization and load balancing

Balance energy supply and demand to alleviate pressure on the grid and prevent serious power outages. Avoid costly infrastructure upgrades and gain flexibility by using distributed energy resources to drive energy optimization.

Emissions monitoring and reduction

Monitor emissions in near real-time and make your emissions data more readily available. Work towards sustainability targets and clean energy adoption by enabling greenhouse gas and carbon accounting and reporting.

E-mobility

Remotely maintain and service electric vehicle (EV) charging points that support various charging speeds and vehicle types. Make it easier to own and operate electric vehicles by incentivizing ownership and creating new visibility into energy usage.

Learn more about IoT for energy

Read about the real-world customers doing incredible things with IoT for energy, where you can learn how market leaders like Schneider Electric make remote asset management easier using predictive analytics.

"Traditionally, machine learning is something that has only run in the cloud … Now, we have the flexibility to run it in the cloud or at the edge—wherever we need it to be." Matt Boujonnier, Analytics Application Architect, Schneider Electric.

Read the blog where we announced Microsoft will be carbon negative by 2030 and discussed our partner Vattenfall delivering a new, highly transparent 24/7 energy matching solution; a first-of-its-kind approach that gives customers the ability to choose the green energy they want and ensure their consumption matches that goal using Azure IoT.

We are committed to helping P&U customers bring their vision to life with IoT, and this starts with simplifying and securing IoT. Our customers are embracing IoT as a core strategy to drive better outcomes for energy providers, energy users, and the planet. We are heavily investing in this space, committing $5 billion in IoT and intelligent edge innovation by 2022, and growing our IoT and intelligent edge partner ecosystem.
 
When IoT is foundational to a transformation strategy, it can have a significantly positive impact on the bottom line, customer experiences, and products. We are invested in helping our partners, customers, and the broader industry to take the necessary steps to address barriers to success. Read the full IoT Signals energy report and learn how we're helping power and utilities companies embrace the future and unlock new opportunities with IoT.
Source: Azure