The year in review: Hybrid applications for developers

As 2018 comes to an end, I look at the technology landscape and the kinds of hybrid scenarios our customers are developing. For example, we see Airbus transforming aerospace with Microsoft Azure Stack, and I realize that this year has been amazing for developers who design, develop, and maintain cloud-based apps. Azure Stack has improved support for DevOps practices: you can use Kubernetes containers, use API Profiles with Azure Resource Manager and the code of your choice, and review walkthroughs and tutorials on getting up and running with a continuous integration pipeline. With Azure Stack, your apps can be developed in the cloud; you can code once and deploy to environments in Azure or in your local data center.

We are now seeing some of your favorite Azure services arrive on Azure Stack. The Azure Stack team is also excited to come together with the other members of the Azure Edge family, which includes Data Box Edge, IoT Edge, and Azure Sphere. If you didn’t get a chance to attend Ignite 2018’s session on the intelligent edge, check out the “Delivering Intelligent Edge with Microsoft Azure Stack and Data Box” session. The edge closes the gap between on-premises solutions and the cloud: you can write applications based on a consistent Azure model and deploy different parts of your apps to the locations that make the most sense for each solution.

Over the course of this year we demonstrated that all of this is indeed a reality and not a distant vision; it is available today. Azure Stack is available in many regions throughout the world. Data Box Edge is also available, together with other members of the Data Box family. You can also use exciting new services like IoT Edge and Azure Sphere on Azure Stack and Azure for a comprehensive hybrid platform.

Over the course of this year, both my team and our partners delivered capabilities that make it easier for you to use our Edge offerings in your apps. Early this year, Pivotal announced the general availability of Pivotal Cloud Foundry (PCF) on Azure Stack. PCF streamlines the way you push your .NET and Java code, so you do not need to focus on where or how your code runs. The combination of PCF with Azure and Azure Stack opens opportunities for your hybrid app development.

As developers, we love when we can focus more on building our apps and worry less about infrastructure. That’s where infrastructure as code comes into play. The Azure Resource Manager (ARM) is the perfect answer for this need. The same ARM that is available in Azure is also available in Azure Stack. Your scripts and templates are consistent no matter where you point your deployment target. My team has worked on many improvements for ARM to make your hybrid app development experience easier. First among those improvements are the ARM API Profiles. API Profiles expose a set of resource types and API versions that are consistent across the different Azure clouds.
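As a sketch of what an API profile looks like in practice, the template below pins an entire deployment to the hybrid profile instead of per-resource API versions. This is an illustrative example written in Python for readability; the schema URL, profile name, and resource shape follow the documented pattern, but you should verify them against your target Azure Stack build before relying on them.

```python
import json

# A minimal ARM deployment template pinned to an API profile.
# With "apiProfile" set at the template level, individual resources can
# omit "apiVersion", and the profile guarantees that the versions used
# exist on both Azure and Azure Stack. "2018-03-01-hybrid" is the
# hybrid profile name used here as an example.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2018-05-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "apiProfile": "2018-03-01-hybrid",
    "resources": [
        {
            # No per-resource apiVersion: it is resolved from the profile.
            "type": "Microsoft.Storage/storageAccounts",
            "name": "mystorageaccount",
            "location": "[resourceGroup().location]",
            "sku": {"name": "Standard_LRS"},
            "kind": "Storage",
        }
    ],
}

print(json.dumps(template, indent=2))
```

The same template file can then be deployed unchanged to Azure or to an Azure Stack environment, which is the consistency point the profiles are designed to deliver.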

Back in June 2018, HashiCorp also released the Terraform provider for Azure Stack, allowing you to provision and manage infrastructure the same way you do in Azure.

During the second half of the year, we kept busy continuing to deliver services that our developers use. In partnership with the Azure SDK team, we delivered API profile support for both .NET and Java, alongside existing support for Python, Ruby, and Go.

During Ignite 2018 we had a variety of hybrid development-oriented talks. If you missed them, watch them! A few good ones include:

Kirtana Venkatraman and Michela Sainato provided a great walkthrough of how to get started as a developer.
Walter Oliver gave a thorough overview of all things open source.
Anjay Ajodha and I walked through the hybrid application patterns that are available today.
The joint session between Shriram Natarajan and Siddique Juman was our top-rated session and third overall for Azure infrastructure. It focused on DevOps and what it means in the hybrid world of Azure Stack and Azure.

Finally, we provided new capabilities to make Azure Stack the best place for you to host your Kubernetes containers on-premises. You can use Kubernetes containers in Azure Stack through the Kubernetes Marketplace item and through Red Hat OpenShift support. If you are a developer creating containerized applications, you have options: a choice of where to host your containers, a choice of Azure services to integrate with your containers, and a choice to host your containers on-premises, in the public cloud, or in sovereign Azure clouds. On top of that, you can leverage the Open Service Broker to make it easy to consume Azure services, and Cloud Native Application Bundles to streamline your package dependencies.

This has been a great year as we worked with our customers to deliver innovation that will make your job easier. We have a pipeline full of exciting new capabilities coming in 2019, and as usual I will continue to post about them twice a month!

Now it’s time to unwind and enjoy the holidays! To learn more about hybrid application development read the previous blog post in this series, “A hybrid approach to Kubernetes.”
Source: Azure

The biggest IoT stories of 2018

This blog post was authored by Peter Cooper, Senior Product Manager, Microsoft IoT.

Back in April, we announced our intention to invest $5 billion in the Internet of Things (IoT) over the next five years. The importance of this commitment has become even clearer since, as technology has already evolved, customers have innovated, and possibilities have grown. As 2018 draws to a close, here’s a look back at the topics that drove the most interest and excitement here on our blog—and a window into what’s coming for this technology in the near future.

Smart spaces

The spaces around us are coming alive with the power of data. In our post, “Smart buildings, built on Azure IoT,” we talked about how IoT and AI are helping those who own, manage, and use buildings increase efficiency to reduce cost and improve productivity. With announcements of products such as Azure Sphere and Azure Digital Twins, we empowered our partners and customers to explore new possibilities for managing and improving the built environment responsively, in real time.

Over the past year, we’ve also seen customers expand their vision of what smart spaces can do. Traditionally, these projects were heavily focused on operational aspects of building management such as infrastructure maintenance and water and power usage. This is still the foundational use case and justification for IoT-enabled buildings, but people are increasingly excited about the transformative capabilities of smart spaces. Customers are exploring how they can use analytics to understand and optimize how people use the spaces they inhabit. Furthermore, they’re designing smart building solutions with the potential to dramatically influence day-to-day productivity and increase positive interactions.

For example, Steelcase showed how they’re creating smart and connected workplaces. As Scott Sadler, Steelcase Smart + Connected manager said, “By embedding technology into the work environment, we are enabling people to tell organizations what spaces are successful and why. We can measure and identify patterns in how and where people are working.” The ease of obtaining these insights will only increase in the coming year, and we’re thrilled to see where this field is headed. As smart space initiatives expand beyond the workplace to encompass stadiums, schools, hospitals, banks, and more—and as edge and cloud technologies connect them to the larger built environment—truly transformative possibilities are bound to emerge. 

The intelligent edge

As with smart buildings, we’ve been inspired by the visionary scope our partners and customers have for edge computing and look forward to big things in 2019. IT departments are using edge computing to solve infrastructure and security challenges to make IoT a reality. Hardware vendors are expanding the intelligence of their devices to take advantage of new functionalities. A diverse and vibrant ecosystem is arising that will push what’s possible at the edge.

We highlighted five ways edge will transform business, including reduced IoT solution costs, improved security, lower latency, greater reliability, and interoperability with legacy devices. Enabling this goodness requires a strong technology foundation, which is why the Azure IoT Edge platform garnered so much attention from the industry. Since then, the solution has moved into general availability, enabling any business to deliver cloud intelligence locally on cross-platform IoT devices.

Edge computing fuels growth in both infrastructure and IoT, enabling data processing, analytics, and advanced functionality on devices whether or not they’re connected to the cloud.

These innovations are many and varied. With a consistent deployment model, companies can code and test edge capabilities on any platform and launch them seamlessly. For example, some are training data models using cloud-scale machine learning engines, and then deploying those models as-is to edge devices. Others are using edge as a way to aggregate and preprocess information so that only relevant data is delivered to the cloud. Edge computing also makes it possible to build IoT solutions that are offline for extended periods of time yet deliver powerful predictive capabilities based on local data. It all adds up to more efficient, effective use of data to improve everyday lives around the world. 

Open standards and interoperability

Interoperability is a hot topic, especially in the manufacturing space, where businesses are looking for simple, comprehensive solutions that allow them to enable the connected factory with a mix of IoT-ready and legacy equipment. Our April post on OPC Unified Architecture (OPC UA) highlighted how manufacturers are using the standard to enable openness and interoperability while maintaining high standards of security.

In fact, this past year could be considered “the year of OPC UA,” with ABB, Rockwell, and Schneider Electric joining the OPC Foundation board, alongside SIEMENS, SAP, Yokogawa, Iconics, Ascolab, and, of course, Microsoft.

National industry initiatives have also continued deepening their commitment to interoperability. Germany’s Industrie 4.0 has released new testbeds and specifications based on the standard, and the China 2025 initiative has made a similar all-in commitment to OPC UA. We’ve made our own contributions to the world of OPC UA with new and updated products. Discrete manufacturing is also getting in on the interoperability act, with the German machine tool association VDW announcing the open universal machine tool interface (umati) initiative, which incorporates OPC UA into its architecture.

Looking ahead

The big lesson from all this energetic activity? IoT is a catalyst for digital transformation across traditional boundaries. We’re seeing new ecosystems and solutions emerge that unify data and insights from multiple places to enable new possibilities. As smart cities, vehicles, buildings, spaces, energy, and more converge, the opportunities grow—and so do needs for end-to-end manageability and security. We are committed to solving these challenges with built-in connectivity, real-time performance, and security innovation at the intelligent edge. Learn more about how Microsoft is helping build the connected future.

Conversational AI updates – December 2018

This blog post was co-authored by Vishwac Sena Kannan, Principal Program Manager, FUSE Labs.

We are thrilled to present the release of Bot Framework SDK version 4.2, and we want to use this opportunity to provide additional updates on Conversational AI releases from Microsoft.

In the SDK 4.2 release, the team focused on enhancing the monitoring, telemetry, and analytics capabilities of the SDK by improving the integration with Azure Application Insights. As with any release, we fixed a number of bugs, continued to improve Language Understanding (LUIS) and QnA integration, and enhanced our engineering practices. There were additional updates across other areas like language, prompts and dialogs, and connectors and adapters. You can review all the changes that went into 4.2 in the detailed changelog. For more information, view the list of all closed issues.

Telemetry updates for SDK 4.2

With the SDK 4.2 release, we started improving the built-in monitoring, telemetry, and analytics capabilities provided by the SDK. Our goal is to give developers the ability to understand overall bot health, detailed reports about the bot’s conversation quality, and tools to understand where conversations fall short. To do that, we decided to further enhance the built-in integration with Microsoft Azure Application Insights. To that end, we have streamlined the integration and the default telemetry emitted by the SDK. This includes waterfall dialog instrumentation, docs, examples for querying data, and a Power BI dashboard.

Bot Framework can use the Application Insights telemetry to provide information about how your bot is performing and to track key metrics. Once you enable Application Insights for your bot, the SDK automatically traces important information for each activity that gets sent to your bot. For each activity (for example, a user typing an utterance to your bot), the SDK emits traces for the different stages of activity processing. These traces can then be placed on a timeline showing each component’s latency and performance, as you can see in the following image.

This can help identify slow responses and further optimize your bot performance.
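The stage-by-stage tracing described above can be modeled in a few lines of code. The sketch below is not the Bot Framework SDK’s actual telemetry client; it is a simplified, hypothetical model showing how per-stage latency for one activity might be recorded and turned into a timeline-style event.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ActivityTrace:
    """Records how long each processing stage of one bot activity took."""
    activity_id: str
    stages: list = field(default_factory=list)  # (stage_name, seconds)

    def record(self, stage_name, fn, *args, **kwargs):
        # Time one stage of activity processing and keep the result.
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        self.stages.append((stage_name, time.perf_counter() - start))
        return result

    def timeline(self):
        # Total latency plus each component's share, analogous to the
        # App Insights timeline view described above.
        total = sum(seconds for _, seconds in self.stages)
        return {"activity_id": self.activity_id,
                "total_seconds": total,
                "stages": self.stages}

# Hypothetical processing stages for one user utterance.
trace = ActivityTrace("activity-001")
trace.record("recognize_intent", lambda: time.sleep(0.01))
trace.record("run_dialog", lambda: time.sleep(0.02))
trace.record("send_reply", lambda: time.sleep(0.005))
print(trace.timeline())
```

In the real SDK the equivalent events are emitted to Application Insights automatically once telemetry is enabled; the point of the sketch is only to show what "per-stage latency on a timeline" means.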

Beyond basic bot performance analysis, we have instrumented the SDK to emit traces for the dialog stack, primarily the waterfall dialog. The following image is a visualization showing the behavior of a waterfall dialog. Specifically, it shows three events before and after someone completes a dialog across all sessions. The center “Initial Event” is the starting point, fanning left and right to show before and after, respectively. This is great for showing the drop-off rate (in red) and where most conversations flow, indicated by the thickness of the lines. This view is a default Application Insights report; all we had to do was connect the wires between the SDK, dialogs, and Application Insights.

The SDK and integration with App Insights provide a lot more capabilities, for example:

Complete activity tracing including all dependencies.
LUIS telemetry, including non-functional metrics such as latency and error rate, and functional metrics such as intent distribution, intent sentiment, and more.
QnA telemetry, including non-functional metrics such as latency and error rate, and functional metrics such as QnA score and relevance.
Word clouds of common utterances, showing the most-used words and phrases – these can help you spot missed intents or QnA pairs.
Conversation length expressed in terms of time and step-count.
Custom reports using your own queries.
Custom logging to your bot.
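The conversation-length metric listed above boils down to a duration and a user-turn count over the activity log. Here is a minimal sketch with made-up activity records; this is not the SDK’s actual event schema, just an illustration of the computation.

```python
from datetime import datetime

# Hypothetical activity log for one conversation: (timestamp, from_user)
activities = [
    (datetime(2018, 12, 20, 10, 0, 0), True),    # user message
    (datetime(2018, 12, 20, 10, 0, 8), False),   # bot reply
    (datetime(2018, 12, 20, 10, 0, 30), True),   # user message
    (datetime(2018, 12, 20, 10, 0, 37), False),  # bot reply
]

def conversation_length(activities):
    """Express conversation length in time and step-count, where one
    'step' is a user turn (a message sent by the user)."""
    duration = activities[-1][0] - activities[0][0]
    steps = sum(1 for _, from_user in activities if from_user)
    return duration.total_seconds(), steps

seconds, steps = conversation_length(activities)
print(f"{seconds:.0f}s over {steps} user turns")  # 37s over 2 user turns
```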

Solutions

The creation of a high-quality conversational experience requires a foundational set of capabilities. To help customers and partners succeed with building great conversational experiences, we released the enterprise bot template at Microsoft Ignite 2018. This template brings together all the best practices and supporting components we've identified through the building of conversational experiences.

Synchronized with the SDK 4.2 release, we have delivered updates to the enterprise template which provides additional localization for LUIS models and responses including multi-language dispatcher support for customers that wish to support multiple native languages in one bot deployment. We’ve also replaced custom telemetry work with the new native SDK support for dialog telemetry and a new Conversational Analytics Power BI dashboard providing deep analytics into usage, dialog quality, and more.

The enterprise template is now joined by a template focused on retail customer support, which provides further LUIS models for this scenario and example dialogs for order management, stock availability, and store location.

The virtual assistant solution accelerator, which enables customers and partners to build their own virtual assistants tailored to their brand and scenarios, has continued to evolve. Ignite was our first crucial milestone for virtual assistant and skills. Work has continued with regular updates to all elements of the overall solution.

We now have full support for six languages, including Chinese, for the virtual assistant and skills. The productivity skills (i.e., calendar, email, and tasks) have updated conversation flows, entity handling, new pre-built domain language models, and work with Microsoft Graph. This release also includes the first automotive capabilities enabling control of car features, along with updates to skills enabling proactive experiences, Speech SDK integration, and experimental skills (restaurant booking and news).

Language Understanding December update

December was a very exciting month for Language Understanding at Microsoft. On December 4, 2018, we announced Docker container support for LUIS in public preview. Hosting the LUIS runtime in containers provides a great set of benefits, including:

Control over data: Allow customers to use the service with complete control over their data. This is essential for customers that cannot send data to the cloud but need access to the technology. Support consistency in hybrid environments – across data, management, identity, and security.

Control over model updates: Provide customers flexibility in versioning and updating of models deployed in their solutions.

Portable architecture: Enable the creation of a portable application architecture that can be deployed in the cloud, on-premises, and the edge.

High throughput/low latency: Provide customers the ability to scale for high-throughput, low-latency requirements by enabling Cognitive Services to run in Azure Kubernetes Service physically close to their application logic and data.

We recently posted a technical reference blog, “Getting started with Cognitive Services Language Understanding container.” We also posted a demo video, “Language Understanding – Container Support,” which shows how to run containers.

LUIS has expanded its service to seven new regions, completing worldwide availability in all major Azure regions, including the UK, India, Canada, and Japan.

Other notable updates include an enhanced training experience, with improvements in the time required to train an application. The team also released new pre-built entity extractors for people’s names and geographical locations in English and Chinese, and expanded the phone number, URL, and email entities across all languages.

QnA Maker updates

In December, the QnA Maker service released an improvement to its intelligent extraction capabilities. Along with accuracy improvements for existing supported sources, QnA Maker can now extract information from simple “Support” URLs. Read more about extraction and supported data sources in the documentation, “Data sources for QnA Maker content.” QnA Maker also rolled out an improved ranking and scoring algorithm for all English KBs; details on the confidence score can be found in the documentation.

The team also released SDKs for the service in .NET, Node.js, Go, and Ruby.

Web chat speech update

We now support the new Cognitive Services Speech to Text and Text to Speech services directly in Web Chat 4.2. This sample is a great place to learn about the new feature and start migrating your bot from Bing Speech to the new Speech Services.

We also added a few samples, including backchannel injection and minimize mode. The backchannel injection sample demonstrates how to add sideband data to outgoing activities; you can leverage this technique to send browser language and time zone information alongside messages sent by the user. The minimize mode sample shows how to load Web Chat on demand and overlay it on top of your existing web page.

You can read more about Web Chat 4.2 in our changelog.

Get started

As we continue to improve our conversational AI tools and framework, we look forward to seeing what conversational experiences you will build for your customers. Get started today!

IoT in Action: New insights for retail

The pace of development for retail Internet of Things (IoT) solutions continues to build. From enhanced customer insights to better staff utilization and increased supply chain efficiency, sophisticated IoT solutions are helping retailers improve, and even reimagine, the retail experience.

For in-depth insights around the latest developments in IoT for retail, including how customer expectations are changing and how IoT investments can impact store profitability, register for our live IoT in Action event in New York (co-located with NRF 2019) on January 14, 2019 or sign up for our industry-specific retail webinar on January 8, 2019.

Focusing on store performance

In-store retail continues to account for approximately 90 percent of retail sales, but the retail landscape is changing. According to IHL, nearly 10,000 stores closed in the United States in 2017 – but another 14,000 opened. In the new retail environment, successful stores are focused on improving the customer experience and in-store operations with the goal of offering truly frictionless shopping. IoT technologies are helping to transform both efforts, allowing rapid testing and deployment in a common platform that spans both digital and physical environments.

Four ways IoT can help increase conversions

It's one thing to get a customer into your store, but another to create a successful shopping experience where they make the purchase they intended to make. IoT can increase sales conversions by reducing wait times, making items easy to find, and removing friction from the experience. Here are a few methods for doing so:

1. Better store navigation

A customer can only purchase something if they can find it. By analyzing traffic patterns in the store from cameras and traffic sensors, then layering this data on top of inventory and purchase data, stores can optimize their physical layout so shoppers can quickly find items and be inspired by complementary items. Where online retailers try to get products in front of customers in the fewest number of clicks, IoT technologies can help brick-and-mortar retailers minimize the number of steps a customer has to take.

2. Helping customers when they need it most

One of the reasons customers shop in stores is to get in-person help from sales associates. But if the sales associates don't interact with the customer at the right time, sales and upsells can be lost. That’s why companies are creating IoT solutions that use existing video infrastructure to offer intelligent retail insights and recommendations that help stores convert shoppers to customers.

Also working to improve the retail customer experience is Genetec™, a Microsoft partner whose IoT solutions leverage existing store security cameras to provide retailers with customer and operational insights that improve business outcomes. For example, their solution can detect increased congestion at checkout terminals to reduce abandonment due to checkout delays, and it can provide customer traffic and flow information for making good merchandising decisions.

3. The right item for the right person

Even if the right item is in the store, if an associate can't find it and hand it to a customer, it won't get bought. IoT solutions can help make sure every item is on the right shelf. They can also keep track of items that are moving within the store, for example to the fitting rooms and the sometimes-cluttered racks outside them.

4. A more personalized experience

IoT solutions are also enabling retailers to glean far deeper insights into customer needs, preferences, and buying habits. Microsoft IoT solutions can assess how customers interact with your brand and the products on your shelves. They can help gauge customer sentiment and track search and buying habits through Dynamics CRM. These insights can enable retailers to truly personalize customer experiences and promotions to increase loyalty and market share.

Operational improvements through retail IoT

Secure IoT solutions on the intelligent edge and intelligent cloud can increase operational efficiencies and reduce costs. Add in the Azure Sphere solution, which ensures end-to-end device security for MCU-powered devices, and retailers can focus their efforts on reimagining everything from business models to product experiences.

For instance, many time-consuming activities like reordering, inventory tracking, and setting price points can be automated. Analysis of traffic patterns and demographics between stores can help select the right mix of products for each store. Other ways operations can be improved include:

Intelligent supply chain and micro-warehousing: Keeping items stocked at the right levels and anticipating demand surges is critical to a retailer’s success. Microsoft inventory management solutions streamline and accelerate supply chain and inventory management processes to improve efficiency, agility, and cost management. IoT sensors can track inventory levels in real time and send alerts when levels dip.
Workforce empowerment and efficiency: Many repetitive tasks can be shifted from associates to IoT-enabled systems. Shelf compliance monitoring is one task that can be automated to free up staff time. Companies are even using robots to scan entire stores and help employees take immediate action based on their findings. This enables associates to be brand ambassadors rather than shelf checkers.
Increased security: Some customer-focused improvements, like Mobile POS within the store, bring with them increased risk of shrinkage, both intentional and unintentional. Video surveillance of mobile checkout transactions is challenging, but partners like Genetec have created solutions so that all checkout activity can be located and recorded, which is especially important as more transactions occur away from fixed terminals.
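To illustrate the inventory-alerting pattern mentioned above, here is a hedged sketch of a reorder check. The SKUs and thresholds are hypothetical, and a production solution would feed readings from IoT shelf sensors rather than a hard-coded dict.

```python
# Hypothetical reorder points per SKU; in practice these would come
# from an inventory management system.
REORDER_POINT = {"SKU-1001": 20, "SKU-1002": 50}

def check_inventory(readings):
    """Given the latest shelf-sensor readings ({sku: units on hand}),
    return alerts for SKUs that have dipped below their reorder point."""
    return [
        f"Reorder {sku}: {units} units left (threshold {REORDER_POINT[sku]})"
        for sku, units in readings.items()
        if sku in REORDER_POINT and units < REORDER_POINT[sku]
    ]

alerts = check_inventory({"SKU-1001": 12, "SKU-1002": 80})
print(alerts)  # only SKU-1001 is below its threshold
```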

Register for the IoT in Action Webinar

To explore IoT for retail in more detail, be sure to register for our retail-focused IoT in Action webinar on January 8, 2019. You will get insights into how IoT can help you delight customers, improve the effectiveness of your associates, and increase the efficiency of your operations.

You can also learn how intelligent edge and intelligent cloud IoT solutions can transform your retail business by signing up for Microsoft’s in-person IoT in Action event in New York City on January 14, 2019.

Finally, you can take a deep dive into building retail IoT solutions at our upcoming 2-day Virtual Bootcamp in late January and early February.

Azure Marketplace new offers – Volume 28

We continue to expand the Azure Marketplace ecosystem. From November 17 to November 30, 2018, 80 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Virtual machines

CloudflareAzure: Cloudflare speeds up and protects millions of websites, APIs, Software-as-a-Service solutions, and more. Our offering will allow you to leverage the benefits provided by Cloudflare services without the need to reconfigure components of your Azure setup.

CrystalBall: Machinesense's industrial IoT app for data monitoring and analytics offers onboarding and fleet management of sensors and machines, along with machine health data visualization, energy management, and a maintenance, operations, and repairs log.

HPE StoreOnce VSA 3.18.7: The HPE StoreOnce virtual appliance enables you to reduce backup data storage costs with high-performance data deduplication.

InterSystems IRIS Data Platform: IRIS by InterSystems is a complete data platform that gives developers the freedom to choose the language and data model best suited for rapidly developing their applications.

JAMS V7 (BYOL) – Server 2016: JAMS is an enterprise batch-scheduling and workload automation solution. Automate jobs on Windows, Linux, UNIX, z/OS, System i, and OpenVMS, with support for jobs running on databases, ERP or CRM solutions, BI tools, and more. Define, manage, and monitor jobs through a graphical user interface, a REST or .NET API, or PowerShell cmdlets.

Neoway Ubuntu Image: This Ubuntu image for a Neoway environment ensures security compliance provided by CIS controls, scripted processes, and/or custom apps.

NFS 2016 Windows Storage File Server: Network File System (NFS) 2016 Server provides a file-sharing solution for enterprises that have heterogeneous environments that include Windows and non-Windows computers.

Postgres Pro Standard Database 11: Postgres Pro is a modern, PostgreSQL-based database with SQL and NoSQL features.

ProcessGold: ProcessGold is a process analytics platform for large organizations. ProcessGold transforms traditional audits by creating end-to-end process maps across applications and teams to highlight commercial and operational risk.

QoreStor™ 5.0.1: Break free from backup appliances, accelerate backup performance, and reduce storage requirements and costs with Quest's QoreStor, a software-defined secondary storage platform for backup.

RADIUS 2016 Server – Wireless Authentication NPS: This RADIUS server uses NPS to perform centralized authentication, authorization, and accounting for wireless, authenticating switches, remote access dial-up, or virtual private network (VPN) connections.

S2IX – Virtual Machine: Get it now in the Azure Marketplace.

Tiger Bridge: Tiger Bridge makes it easy to align data value with storage costs by seamlessly extending your NTFS or Tiger Store file system and performing transparent data migration between one storage tier and another.

VIP: Automate manual processes, letting bots perform repetitive tasks for you. VIP offers high-performance robotic process automation and end-to-end test automation.

Web applications

Bitbucket Data Center: Bitbucket Data Center by Atlassian is a self-managed solution for modern source code collaboration. Cluster multiple active servers to ensure users have uninterrupted access to Bitbucket Data Center in the event of unexpected node failure.

Citrix SD-WAN Center 10.2: Citrix SD-WAN Center is a centralized management solution for Citrix SD-WAN appliances. It provides visibility into application performance by exposing a rich set of reports and statistics across the Citrix SD-WAN network.

Corda Single Node: Create and deploy a Corda node that can join a network. You'll be able to supply your own CorDapps that you wish to be included in the node.

IPFS (beta): This offering enables the creation of permissioned networks of IPFS nodes to form a decentralized storage network. Users can select the size of network they would like to provision and share with others.

Microsoft Healthcare Bot (Preview): The Microsoft Healthcare Bot service empowers healthcare organizations to build and deploy AI-powered virtual assistants and chatbots to enhance their processes, self-service offerings, and cost-reduction efforts.

Provance ITSM Azure Connector: The Provance ITSM Azure Connector bridges the gap between your Azure infrastructure and Provance ITSM, letting you manage Azure resources like any other infrastructure within Provance ITSM.

SAP BusinessObjects to Alation: Information Asset has developed a solution to ingest reports from SAP BusinessObjects Web Intelligence and SAP Crystal Reports into Alation Data Catalog. This solution requires Java 8 and is packaged as an executable JAR file.

TIBCO Spotfire to Alation: This Information Asset solution imports dashboards from TIBCO Spotfire into BI Server in Alation. The library structure of TIBCO Spotfire is imported into Alation as well as the names and thumbnails of dashboards within the library.

Container solutions

Apache 2.4 Secured Alpine Container with Antivirus: Deploy an enterprise-ready container for Apache 2.4 on Alpine.

Kubectl Container Image: Kubectl is the Kubernetes command line interface. It allows you to manage a Kubernetes cluster by providing a wide set of commands to communicate with the Kubernetes API.

Consulting services

Azure Architecture: 1-Hr Briefing: At the end of this briefing by igroup, you’ll have a high-level overview of what Azure can do for your business, as well as an understanding of the timeline and costs.

Azure Cloud Assessment: 3-Day: In this assessment, igroup will provide a step-by-step guide to help you successfully achieve your cloud deployment, clearly listing the costs involved going forward and giving you a good understanding of the resources required.

Azure Cloud VDI 6-Day Proof of Concept: As part of this proof of concept, Meritum Cloud will build an Azure tenant and a basic VDI image for the customer, and configure XenDesktop Essentials according to best practices.

Azure Data Center Migration: 2-Week POC: Communication Square will devise a plan for your migration, help you set up your environment on Azure, lift and shift your workload to the cloud, and then test, analyze, and optimize the migration.

Azure Data Center Migrations: 10 Weeks Workshop: BUI's offer includes a cloud readiness assessment, a road map outlining a cloud implementation strategy, a hybrid architecture implementation, and full-time support for your IT team during the migration process.

Azure Health Assessment: 3-Day: In this engagement, igroup will spend one day on-site assessing your current infrastructure. The igroup team will then analyze the findings and create a remediation process with actions and recommendations.

Azure Hybrid Cloud 5-Day Proof of Concept: Meritum Cloud will brief the customer on Azure hybrid cloud services, carry out a discovery workshop, and build an Azure tenant, configuring the environment with security best practices.

Azure IaaS Proof of Concept: 5-Day: This proof of concept by igroup will allow you to assess if you are overcomplicating your environment or not using Azure services to their full potential.

Azure Marketplace On-boarding for ISVs: In this consulting offer, Spektra Systems will help you assess your application/solution for Azure Marketplace onboarding and define the next steps to take.

Azure Migration Assessment: 2-Day Assessment: CDW will work with you to deploy an assessment tool in your environment, ensure the tool is configured properly, run the tool, and then review and help you interpret the results.

Azure Migration Planning: 5 Day Implementation: Atech Support can provide certified Microsoft Azure solutions architects and certified Microsoft Azure engineers to design, plan, and migrate your on-premises infrastructure to Azure.

BearingPoint – Digital Process Twin: 4 Weeks PoC: The Digital Process Twin connects business processes, devices, and systems through real-time horizontal data integration to improve reliability for logistics processes. This proof of concept will realize an IoT use case.

Cloud Architecture Tech Consulting: 4-Wk Briefing: Azure MVPs will consult with you about your challenges, addressing architecture design and technical issues.

Data Center Migration Assessment Service: 4 Weeks: PC Solutions will provide an Azure knowledge session, a business assessment of your datacenter, a technical assessment of your datacenter, an assessment with tools, a recommendations report, and a presentation.

HIPAA Compliant Cloud Solutions: 1-Hour Briefing: For any healthcare organization, achieving HIPAA compliance is very important. In this briefing by Communication Square, you will learn about HIPAA-compliant Microsoft cloud solutions.

HIPAA Compliant Cloud Solutions: 4-Week Imp.: Trying to get your IT solutions to match HIPAA requirements from scratch can be a losing battle. In this engagement, Communication Square will implement HIPAA-compliant Microsoft cloud solutions.

How to move to Azure: 1-Hr Briefing: At the end of this briefing by igroup, you’ll have a high-level overview of how to move to Azure, as well as an understanding of the timeline and costs. Our specialist will answer any questions and explain the best route for your business.

Implementing and Managing Azure Infrastructure: 4 Days: In this workshop, Emm&mmE Informatica will introduce you to Azure services, including virtual machines, storage, and containers.

Infrastructure Managed Service: 4-Week Implementation: In this offer from PC Solutions, get end-to-end managed services for up to 10 virtual machines.

Migrate MySQL to Azure 2-weeks Implementation: Bring scalability, flexibility, security, and performance to your open-source database workloads while reducing the cost of infrastructure, maintenance, and support when you deploy on Azure.

Migrate PostgreSQL to Azure 2-weeks Implementation: Ascent Technology will lift and shift your PostgreSQL Database to Azure, providing availability, security, and performance at a fraction of on-premises costs.

Move Your Current ERP into Azure: 1-Day Assessment: This assessment by Altron Karabina involves understanding your ERP infrastructure, defining troubles with your existing environment, identifying migration steps, discussing your ERP road map, and more.

POPIA Compliance & Readiness: 10 Week Assessment: POPIACheck allows organizations to rapidly perform POPIA assessments and generate corrective actions to maintain compliance with the Protection of Personal Information Act of South Africa.

Power BI Adoption Tracker – 1 Day Assessment: Altron Karabina allows you to gain insight into your Microsoft Power BI adoption. The adoption tracker will provide users with the tools they need to make more informed business decisions.

Project Execution Sprint: 2-Wk Implementation: Clientek's implementation will focus on developing, testing, and delivering predetermined features and project objectives.

RESCAN AD Audit Reporting: 4 Week Assessment: BUI's RESCAN service will provide easy-to-read audit reporting on sources like Active Directory by using Microsoft Azure.

SQL Consulting Services: 1 Week Implementation: This offer is for customers in Belgium. A Denny Cherry & Associates expert will work with your IT team to install, configure, and tune your Azure virtual machines running SQL Server for up to one week.

SQL Consulting Services: 1 Week Implementation: This offer is for customers in Germany. A Denny Cherry & Associates expert will work with your IT team to install, configure, and tune your Azure virtual machines running SQL Server for up to one week.

SQL Consulting Services: 1 Week Implementation: This offer is for customers in France. A Denny Cherry & Associates expert will work with your IT team to install, configure, and tune your Azure virtual machines running SQL Server for up to one week.

SQL Consulting Services: 1 Week Implementation: This offer is for customers in Europe. A Denny Cherry & Associates expert will work with your IT team to install, configure, and tune your Azure virtual machines running SQL Server for up to one week.

SQL Consulting Services: 1 Week Implementation: This offer is for U.S. customers. A Denny Cherry & Associates expert will work with your IT team to install, configure, and tune your Azure virtual machines running SQL Server for up to one week.

SQL Consulting Services: 1 Week Implementation: This offer is for customers in Canada. A Denny Cherry & Associates expert will work with your IT team to install, configure, and tune your Azure virtual machines running SQL Server for up to one week.

SQL Consulting Services: 1 Week Implementation: This offer is for U.K. customers. A Denny Cherry & Associates expert will work with your IT team to install, configure, and tune your Azure virtual machines running SQL Server for up to one week.

SQL Consulting Services: 1 Week Implementation: This offer is for customers in Australia. A Denny Cherry & Associates expert will work with your IT team to install, configure, and tune your Azure virtual machines running SQL Server for up to one week.

SQL Server Always On: 5 Day Implementation: This offer is for customers in Australia. In this engagement, Denny Cherry & Associates Consulting will build a new SQL Server Always On availability groups solution within your Azure environment.

SQL Server Always On: 5 Day Implementation: This offer is for customers in Canada. In this engagement, Denny Cherry & Associates Consulting will build a new SQL Server Always On availability groups solution within your Azure environment.

SQL Server Always On: 5 Day Implementation: This offer is for U.K. customers. In this engagement, Denny Cherry & Associates Consulting will build a new SQL Server Always On availability groups solution within your Azure environment.

SQL Server Always On: 5 Day Implementation: This offer is for U.S. customers. In this engagement, Denny Cherry & Associates Consulting will build a new SQL Server Always On availability groups solution within your Azure environment.

SQL Server Health Check: 3-Day Assessment: This offer is for customers in Australia. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for customers in Belgium. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for customers in New Zealand. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for customers in Canada. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for customers in Europe. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for customers in the Netherlands. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for customers in France. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for U.K. customers. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for customers in Denmark. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for customers in Italy. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for customers in Germany. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for customers in Japan. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

SQL Server Health Check: 3-Day Assessment: This offer is for U.S. customers. Denny Cherry & Associates Consulting will review your SQL Server environment and provide a list of changes, which, if made, will improve the performance of your SQL Server.

Two Node SQL Server Cluster: 2 Day Implementation: This offer is for U.S. customers. During this two-day process, the expert team at Denny Cherry & Associates Consulting will create and build a new SQL Server failover cluster within your Microsoft Azure environment.

Two Node SQL Server Cluster: 2 Day Implementation: This offer is for customers in Australia. During this two-day process, the expert team at Denny Cherry & Associates Consulting will create and build a new SQL Server failover cluster within your Microsoft Azure environment.

Two Node SQL Server Cluster: 2 Day Implementation: This offer is for U.K. customers. During this two-day process, the expert team at Denny Cherry & Associates Consulting will create and build a new SQL Server failover cluster within your Microsoft Azure environment.

Two Node SQL Server Cluster: 2 Day Implementation: This offer is for customers in Canada. During this two-day process, the expert team at Denny Cherry & Associates Consulting will create and build a new SQL Server failover cluster within your Microsoft Azure environment.

XS VM Lift & Shift: 1-Day Implementation: With this offer, you choose the server to migrate and Beacon42 experts move it to Azure.

Source: Azure

Anatomy of a secured MCU

Secure silicon

Azure Sphere is an end-to-end solution containing three complementary components that provide a secured IoT platform: an Azure Sphere microcontroller unit (MCU), an operating system optimized for IoT scenarios and managed by Microsoft, and a suite of secured, scalable online services. Microsoft provides over a decade of support for the operating system, as well as use of the security service, for a single per-device fee to simplify business planning.

Microsoft built its name in software, but our expertise in silicon runs deep. Over the last 15 years, Microsoft has deeply invested in hardware-based security by designing custom silicon for various Microsoft products. Azure Sphere’s silicon architecture is a culmination of all those years of experience, and our Pluton Security Subsystem is the heart of our security story. In this blog post, I’ll drill down a layer to discuss what puts the “secured” in a secured Azure Sphere MCU. Specifically, I’ll dive into Pluton’s design details, as well as some other general silicon security improvements.

Broadly, any MCU-based device belongs to one of two categories: devices that may connect to the Internet, and devices designed to never connect to the Internet. Until recently, virtually all MCU-based devices were disconnected, which led to a security model that considered the value of the device, the physical threat model (Is the device locked in a cage? Does the general public interact with it?), and the risk of the device being attacked (for example, an automatic paper towel dispenser versus an MRI machine). Security therefore followed a simple model: pay more, get more.

Connecting an MCU-based device to the Internet is a watershed moment because any MCU can become a potential general-purpose digital weapon in the hands of an attacker. The Mirai botnet was composed of only 100,000 connected cameras. If you’ve never thought about a distributed denial of service (DDoS) attack launched by 100 million paper towel dispensers, consider this blog post your moment of clarity.

Internet connectivity means that the table stakes for security must change. We authored the white paper on the seven properties of highly secured connected devices to reset the conversation around security. However, Azure Sphere-certified MCUs go beyond the typical hardware root of trust used in an MCU.

Pluton key management

Pluton generates its own key-pairs in silicon during the manufacturing process. Other secured chips often depend on a hardware security module (HSM) on the factory floor to generate keys. Pluton goes further by generating its keys privately in silicon, and then persistently storing those keys into e-fuses. The private keys are never visible to software. Even the most trusted firmware on the device does not have access to the private keys. Pluton generates two different public/private elliptic curve cryptography (ECC) key-pairs. One is used exclusively for remote-attestation (more on that later) and the other is available for general purpose cryptography. The chips’ public keys, but not the private keys, are sent to Microsoft from the silicon manufacturer, which means Microsoft knows about and establishes a trust relationship with every Azure Sphere chip from the point the chip is manufactured.

Pluton’s random number generator

Pluton implements a true random number generator. Pluton’s random number generator collects entropy (i.e., randomness) from the environment to generate random numbers. This random number generator is critical to each MCU generating its own keys during the manufacturing process and is therefore a critical attack surface that must be defended. Pluton’s true random number generator measures the entropy it collects, and if the entropy does not meet a certain standard defined as part of its design, the random number generator refuses to deliver random numbers.
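The "refuse to deliver when entropy is low" behavior can be illustrated with a small sketch. This is not Pluton's actual implementation; it models the idea in software using a health test loosely inspired by the repetition-count test from NIST SP 800-90B, with an assumed cutoff value and Python's `secrets` module standing in for the hardware noise source.

```python
import secrets

# Illustrative health test in the spirit of a hardware TRNG's self-check
# (loosely modeled on the repetition-count test from NIST SP 800-90B).
# The cutoff and the sample source are assumptions for this sketch;
# real silicon measures entropy at the analog noise source.

def repetition_count_ok(samples: bytes, cutoff: int = 8) -> bool:
    """Reject the sample stream if any value repeats too many times in a row."""
    run = 1
    for prev, cur in zip(samples, samples[1:]):
        run = run + 1 if cur == prev else 1
        if run >= cutoff:
            return False
    return True

def get_random_bytes(n: int) -> bytes:
    """Deliver random bytes only if the health test passes, mirroring how
    Pluton's generator refuses output when measured entropy is too low."""
    samples = secrets.token_bytes(n)
    if not repetition_count_ok(samples):
        raise RuntimeError("entropy below threshold; refusing to emit")
    return samples
```

The key design point is that the generator fails closed: a degraded entropy source produces an error, never weak keys.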

Pluton’s cryptographic helpers

Pluton accelerates common cryptographic tasks. Pluton accelerates cryptographic tasks such as hashing (via SHA2), ECC, and advanced encryption standard (AES) cryptographic operations.

The benefits of secure boot

Pluton minimizes supply chain risk. ECDSA is an algorithm for creating and verifying digital signatures with ECC key-pairs. Every piece of software on an Azure Sphere device must be signed by Microsoft. Microsoft uses its own proven signing infrastructure, the same infrastructure that protects the private keys of some of Microsoft’s most valuable products, which ensures that private keys are kept in secure HSMs and that every use of these keys follows a strict and documented process.

Leveraging remote attestation

Pluton implements support for measured boot and remote attestation in silicon. When an Azure Sphere device connects to the Azure Sphere Security Service (AS3), it completes server authentication by using a locally-stored certificate. However, AS3 must also authenticate the device itself. It does that via a protocol called remote attestation:

1. AS3 sends the device a nonce, which is combined with the measured boot value that consists of the cryptographic hashes of the software components that have been booted.

2. The device signs these values with Pluton’s private ECC attestation key and sends them back to AS3.

3. AS3 already has the device’s public ECC attestation key and can therefore determine whether the device is authentic, was booted with genuine software, and if the genuine software is trusted.

It is critical that this measured-boot value cannot be forged or changed; therefore, it is kept in an accumulation register that can only be reset if the entire chip is reset. Since signing happens in silicon, even a device with fully compromised software cannot forge an attestation value.

4. If a device is up to date, it is given a client certificate that can be presented to any online service and an AS3 certificate that is used only with AS3 services. If the certificate chain is valid, it represents AS3 vouching for the health of the chip. The certificate is valid for roughly a day, which means the device is forced to attest to its health on a regular basis if it wants to maintain a connection to the Internet.

If the device is not healthy, the client certificate is not issued, effectively allowing the device to connect only to AS3 and perform a software update.
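The attestation flow above can be sketched in a few lines. This is a simplified model, not the Azure Sphere protocol: HMAC-SHA256 stands in for Pluton's ECC attestation signature, and the `Device` and `AttestationService` names are illustrative inventions for this sketch.

```python
import hashlib
import hmac
import secrets

class Device:
    """Models measured boot plus nonce signing on the device side."""
    def __init__(self, key: bytes):
        self._key = key                   # attestation key; never leaves silicon
        self.measurement = b"\x00" * 32   # accumulation register, reset-only

    def extend(self, component: bytes):
        # Measured boot: fold each booted component's hash into the register.
        self.measurement = hashlib.sha256(
            self.measurement + hashlib.sha256(component).digest()
        ).digest()

    def attest(self, nonce: bytes) -> bytes:
        # Sign nonce + measurement so the value can't be replayed or forged.
        return hmac.new(self._key, nonce + self.measurement, hashlib.sha256).digest()

class AttestationService:
    """Models the AS3 side: knows the expected measurement for genuine software."""
    def __init__(self, key: bytes, expected_measurement: bytes):
        self._key = key
        self._expected = expected_measurement

    def verify(self, device: Device) -> bool:
        nonce = secrets.token_bytes(16)
        signature = device.attest(nonce)
        good = hmac.new(self._key, nonce + self._expected, hashlib.sha256).digest()
        return hmac.compare_digest(signature, good)
```

Because the nonce is fresh for each check, a recorded attestation cannot be replayed, and because the accumulation register only ever extends, compromised software cannot roll its measurement back to a trusted value.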

Silicon-based attestation with continual renewal prevents device impersonation and forgeries. A future blog post will provide a more thorough deep dive on this topic, but it all starts with Pluton.

Silicon security beyond Pluton

Security doesn’t stop at the silicon fabric that makes up Pluton. In fact, Azure Sphere MCUs go beyond Pluton by implementing additional security features that improve the platform’s defense-in-depth strategy. Let’s use the MT3620, a chip composed of five cores, as an example. One core is dedicated to the runtime that invokes operations within Pluton. Next is a core dedicated to Wi-Fi that interacts with the Wi-Fi RF components of the chip. The A7 core is dedicated to running the Azure Sphere operating system, and two M4 cores are available for real-time processing. The A7 core leverages Arm’s TrustZone technology: the Azure Sphere security monitor runs in Secure World, while the Linux kernel and other OS components run in Normal World.

Between all the cores and peripherals are what we call “firewalls.” These are not firewalls in the network sense, but instead a set of mappings of resources to cores. In fact, internally we call this the core mapping feature. All resources on the device are mapped this way and by default all resources (A7 SRAM, peripherals, and flash) are mapped only to Secure World, denying access to software executing in Normal World. Secure World can selectively grant access to peripherals, and those selections are “sticky.” Sticky is not a term you often see when discussing silicon. In this case we mean that a core mapping is “locked” once set, which means that even if an attacker compromises the code that programs the firewalls, the attacker cannot get access to resources that were not originally assigned to the core in which it’s executing.

Limited peripheral access reduces the surface area that could potentially be subject to attack. Sticky selection further reduces the surface area. After the device is deployed, an attacker cannot exploit the code to communicate with a rogue web service or to control another part of the device. The result is greater security for individual devices as well as through the entire supply chain.
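The "sticky" property can be captured in a toy model. The class below is an illustration of the one-way lock described above, not the actual silicon interface: before the lock, mappings can be granted or revoked; after the lock, access can only ever be narrowed, never widened, even by fully compromised code.

```python
class CoreFirewall:
    """Toy model of a sticky core-to-resource mapping (illustrative names)."""
    def __init__(self, granted):
        self._granted = set(granted)
        self._locked = False

    def revoke(self, resource: str):
        self._granted.discard(resource)   # narrowing is always allowed

    def grant(self, resource: str):
        if self._locked:
            raise PermissionError("mapping is sticky; cannot widen after lock")
        self._granted.add(resource)

    def lock(self):
        self._locked = True               # one-way transition, like silicon

    def can_access(self, resource: str) -> bool:
        return resource in self._granted
```

Once `lock()` has been called, an attacker who takes over the core's code can at most give up access it already had; it can never reach resources outside its original mapping.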

These are just a few of the silicon features that make an Azure Sphere MCU unique, and that make it more difficult for an attacker to take control of an Azure Sphere device. Each additional feature provides one more complication that an attacker must overcome to compromise a device’s functionality, and Pluton provides a rich set of silicon security features that are not often present in a hardware root of trust. Microsoft believes in the benefits of Pluton so strongly that it is licensing Pluton, royalty free, to any silicon manufacturer that wants to make an Azure Sphere chip. Better security is not foolproof, but raising the bar in IoT makes it that much harder to compromise a device.
Source: Azure

Connect Azure Data Explorer to Power BI for visual depiction of data

Do you want to analyze vast amounts of data, create Power BI dashboards and reports to help you visualize your data, and share insights across your organization? Azure Data Explorer (ADX), a lightning-fast indexing and querying service, helps you build near real-time and complex analytics solutions for vast amounts of data. ADX can connect to Power BI, a business analytics solution that lets you visualize your data and share the results across your organization. The various methods of connection to Power BI allow for interactive analysis of organizational data, such as tracking and presenting trends.

Simple and intuitive native connector

The native connector to Power BI unlocks the power of Azure Data Explorer in only a minute. In a very intuitive process, add your cluster name and let the connector take care of the rest. Provide the database and table name to focus your analysis on specific data. You can use import mode for snappy interaction with the data, or direct query mode for filtering large datasets and near real-time updates. To use the native connector method, read our documentation, “Quickstart: Visualize data using the Azure Data Explorer connector for Power BI.”

Imported query

A specific Azure Data Explorer query can also be used to import data to Power BI. When copying a query from Kusto Explorer or the Kusto Web UI, paste it into the blank query connector window and load the query’s data to Power BI. To use the imported query method, read our documentation, “Quickstart: Visualize data using a query imported into Power BI.”

General purpose SQL (MS-TDS) connector

If you prefer running SQL queries to analyze your data, use the Azure SQL Database connector to connect to Azure Data Explorer. You can use import mode or direct query mode to bring in just the data needed for analysis. To use the SQL connector, read our documentation, “Quickstart: Visualize data using the Azure Data Explorer connector for Power BI.”

Example

The following example depicts how the native Power BI connector is used to query GitHub public data that was pulled into the Demo11 ADX cluster and stored in the GitHub database.
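As a concrete sketch of the kind of query the connector can run (the table and column names below are assumptions about the demo GitHub dataset, not a guaranteed schema), a Kusto query such as the following could be used in import or direct query mode to rank repositories by star events:

GithubEvent

| where Type == 'WatchEvent'

| summarize Stars = count() by Repo = tostring(Repo['name'])

| top 10 by Stars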

Once loaded to Power BI, you can build any report or dashboard to analyze and visually represent the GitHub event data.

Next steps

In this blog, we depicted the various ways to bring data from Azure Data Explorer into Power BI. Additional connectors and plugins for analytics tools and services will be added in the weeks to come. Stay tuned for more updates.

To find out more about Azure Data Explorer you can:

Try Azure Data Explorer in preview now.
Find pricing information for Azure Data Explorer.
Access documentation for Azure Data Explorer.

Source: Azure

Transforming your data in Azure SQL Database to columnstore format

We are excited to reveal a public preview of a new feature in Azure SQL Database, available both in logical servers and Managed Instance: the CLUSTERED COLUMNSTORE ONLINE INDEX build. This operation enables you to migrate data stored in row-store format to columnstore format, and to maintain your columnstore data structures, with minimal downtime for your workload.

Why columnstore format?

Azure SQL Database enables you to fine-tune and optimize data structures and indexes in your database to get the best performance of your queries depending on your workload and size of data. Relational data in Azure SQL Database can be organized in two formats:

Row-store format, which is an ideal option for OLTP workloads where the queries are accessing individual rows or set of rows in the table. This is the general-purpose table format used for most of the data in relational databases.
Columnstore format, which is optimized for analytical queries and high compression of data (up to 100x). This format is perfect for the large data sets that can be efficiently compressed using this format and the analytical queries with complex calculations that use subset of the table columns.

In some cases, you might notice that an existing table organized in row-store format is not well suited to the queries executed on it. In other cases, you might want to apply high compression to minimize the size of your table. Either way, you would need to transform your data into columnstore format to compress the data and boost the performance of your analytical queries.

Transforming data to columnstore format

You can transform existing row-store tables into columnstore format by creating a CLUSTERED COLUMNSTORE INDEX on a table. The clustered columnstore index takes the original dataset from the table, organizes it by columns, and applies efficient high-compression algorithms to minimize the size of your data.

A Transact-SQL statement that creates CLUSTERED COLUMNSTORE INDEX on a table and transforms the data to columnstore format is shown in the following example:

CREATE CLUSTERED COLUMNSTORE INDEX cci ON Sales.Orders

A clustered columnstore index created on a table reorganizes the data in the table and converts rows into columns with high compression.

The limitation of this operation is that all incoming transactions trying to update rows in the table being transformed from row-store to columnstore format must be blocked until the transformation finishes. This is known as an offline index build, shown in the following picture:

All incoming transactions are blocked while the table is transformed from row-store to columnstore format. This process might cause some downtime in your workload, so you would need to choose an appropriate time to transform the data to columnstore format.

Online transition to columnstore format

The latest release of Azure SQL Database enables you to transform row-store tables to columnstore format without blocking incoming transactions, using the online version of the columnstore index build (currently in public preview).

You can use the following T-SQL syntax to transform your row-store table into the columnstore format:

CREATE CLUSTERED COLUMNSTORE INDEX cci ON Sales.Orders WITH ( ONLINE = ON )

The create clustered columnstore index operation with the WITH ( ONLINE = ON ) option will take all incoming transactions and continuously include the data changes in the target columnstore data structure while the original data is transformed:

Online clustered columnstore index build enables you to optimize and compress your data with minimal downtime without major blocking operations on the queries that are executing while you are transforming the data.

Besides online transformation from row-store to columnstore format, the following online features in clustered columnstore indexes are available in Azure SQL Database:

Existing clustered columnstore indexes can be rebuilt in online mode, meaning that workloads using the table don’t need to be blocked while you perform this index maintenance operation.
Non-clustered indexes on the tables organized in columnstore format can be rebuilt using the online option, without blocking the workload.
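Both maintenance operations use standard ALTER INDEX syntax with the online option. For example, the clustered columnstore index created earlier can be rebuilt without blocking the workload:

ALTER INDEX cci ON Sales.Orders REBUILD WITH ( ONLINE = ON )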

The ONLINE CLUSTERED COLUMNSTORE index build operation helps you perform data transformation and maintenance operations on clustered columnstore indexes with minimal downtime for the incoming workload. This feature is currently in preview in all flavors of Azure SQL Database, including logical servers, elastic pools, and Managed Instances.

For more information, please see the columnstore indexes documentation page.
Source: Azure

Best practices for queries used in log alerts rules

Queries can start with a table name or with operators such as “search” or “union *”. These operators are useful during data exploration and for searching terms across the entire data model. However, they are not efficient for productization in alerts. Log alert rule queries in Log Analytics and Application Insights should always start with a specific table (or tables) to define a clear scope for query execution, which improves both query performance and the relevance of the results. You can learn more by visiting our documentation, “Query best practices.”

Note that using cross-resource queries in log alert rules is not considered inefficient even though the “union” operator is used. The “union” in cross-resource queries is scoped to specific resources and tables, as shown in the following example, while the scope of “union *” is the entire data model.

union
app('Contoso-app1').requests,
app('Contoso-app2').requests,
workspace('Contoso-workspace1').Perf

After data exploration and query authoring, you may want to create a log alert using that query. These examples show how you can modify such queries to avoid the “search” and “union *” operators.

Example 1

You want to create a log alert based on the following query.

search ObjectName == 'Memory' and (CounterName == '% Committed Bytes In Use' or CounterName == '% Used Memory') and TimeGenerated > ago(5m)
| summarize Avg_Memory_Usage = avg(CounterValue) by Computer
| where Avg_Memory_Usage between(90 .. 95)
| count

To author a valid alert query without the "search" operator, follow these steps:

1. Identify the table that the properties are hosted in.

search ObjectName == 'Memory' and (CounterName == '% Committed Bytes In Use' or CounterName == '% Used Memory')
| summarize by $table

The result indicates that these properties belong to the Perf table.

2. Since the properties used in the query come from the Perf table, the query should start with Perf and scope execution to that table.

Perf
| where ObjectName == 'Memory' and (CounterName == '% Committed Bytes In Use' or CounterName == '% Used Memory') and TimeGenerated > ago(5m)
| summarize Avg_Memory_Usage = avg(CounterValue) by Computer
| where Avg_Memory_Usage between(90 .. 95)
| count

Example 2

You want to create a log alert based on the following query.

search (ObjectName == 'Processor' and CounterName == '% Idle Time' and InstanceName == '_Total')
| where Computer !in ((union * | where CounterName == '% Processor Utility' | summarize by Computer))
| summarize Avg_Idle_Time = avg(CounterValue) by Computer, CounterPath
| where Avg_Idle_Time < 5
| count

To modify the query, follow these steps:

1. Since the query makes use of both the "search" and "union *" operators, you need to identify the tables hosting the properties in two stages.

search (ObjectName == 'Processor' and CounterName == '% Idle Time' and InstanceName == '_Total')
| summarize by $table

The properties in the first part of the query belong to the Perf table.

Note that the "withsource = table" option adds a column designating the table that hosts each result.

union withsource = table * | where CounterName == '% Processor Utility'
| summarize by table

The properties in the second part of the query also belong to the Perf table.

2. Since the properties used in the query come from the Perf table, both the outer and inner queries should start with Perf and scope execution to that table.

Perf
| where ObjectName == 'Processor' and CounterName == '% Idle Time' and InstanceName == '_Total'
| where Computer !in ((Perf | where CounterName == '% Processor Utility' | summarize by Computer))
| summarize Avg_Idle_Time = avg(CounterValue) by Computer, CounterPath
| where Avg_Idle_Time < 5
| count
Source: Azure

How to migrate from AzureRM to Az in Azure PowerShell

On December 18, 2018, the Azure PowerShell team released the first stable version of “Az,” a new cross-platform PowerShell module that will replace AzureRM. You can install this module by running “Install-Module Az” in an elevated PowerShell prompt.

Since January 2018, PowerShell has been a cross-platform product with the introduction of PowerShell Core. Therefore, it has also become a priority for Azure PowerShell to have cross-platform support. Because of the changes required to support running Azure PowerShell cross-platform, we decided to create a new module rather than make modifications to the existing AzureRM module. Moving forward, all new functionality will be added to the Az module, while AzureRM will only be updated with bug fixes.

Configure Az in your environment

Because Az and AzureRM use the same dependencies with different versions, it is not possible to run Az and AzureRM side by side in the same PowerShell session, so Az and AzureRM cmdlets cannot be used together in scripts or in interactive sessions. To ensure that no script tries to import both Az and AzureRM modules in the same session, if you do not have many existing scripts that use AzureRM, we recommend that you remove all AzureRM modules from your machine after installing Az. For your convenience, we have created the "Uninstall-AzureRm" cmdlet, located in the new Az module. To use this cmdlet, ensure that all PowerShell sessions in which AzureRM modules are imported have been closed, then run "Uninstall-AzureRm" in an elevated PowerShell session.
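The installation and cleanup steps described above can be sketched as follows (run in an elevated PowerShell prompt, after closing any session that has AzureRM loaded):

```powershell
# Install the new cross-platform Az module from the PowerShell Gallery.
Install-Module -Name Az -Repository PSGallery -Force

# Remove all installed AzureRM modules; this cmdlet ships with the Az module.
Uninstall-AzureRm
```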

What about your existing Azure PowerShell scripts?

If you would like to continue using AzureRM for your existing scripts while also writing new scripts with Az, you have two options.

Option 1 – Install PowerShell Core 6

One option is to install Az on PowerShell Core 6 while continuing to use AzureRM on Windows PowerShell 5.1. This will allow you to run your existing AzureRM scripts on Windows PowerShell 5.1 without the possibility of running into issues where AzureRM and Az are imported in the same session.

Option 2 – Explicit module loading

Alternatively, if you cannot install PowerShell Core 6 on your machine, ensure that each script explicitly imports either the Az or the AzureRM modules it intends to use at the beginning, and never modules from both. To turn off warnings about the side-by-side installation of AzureRM and Az, add "$env:SkipAzInstallationChecks=true" to your PowerShell profile.
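As a sketch of option 2, a new script can pin itself to Az at the top; the specific module names below are examples, not a required set:

```powershell
# Explicitly load only Az modules in this script -- never mix in AzureRM.
Import-Module Az.Accounts
Import-Module Az.Compute

# Suppress the side-by-side installation warning (or set this once in your profile).
$env:SkipAzInstallationChecks = 'true'
```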

Using the AzureRM aliases with Az

To simplify and normalize our cmdlet names, we have changed the prefix from AzureRM and Azure to Az for every cmdlet in the new Az modules. To enable preexisting scripts that were written for AzureRM to successfully execute in Az, we have written a cmdlet, “Enable-AzureRmAlias,” to create aliases to the old cmdlet names. This cmdlet includes a Scope parameter, which allows you to select whether the aliases should be created for only the current session or for all future sessions. Additionally, you can import aliases for specific modules using the “Module” parameter. Once your scripts have been converted to use “Az” prefixes, aliases can be turned off using the “Disable-AzureRmAlias” cmdlet. All new cmdlets added to the Az modules will not have these AzureRM aliases, so all new scripts should use the “Az” prefix syntax.
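For example, the aliases can be enabled and later disabled like this; the Scope value and module name shown are illustrative choices, not the only valid ones:

```powershell
# Create AzureRM -> Az aliases for the current user's future sessions as well.
Enable-AzureRmAlias -Scope CurrentUser

# Or create aliases only for a specific module.
Enable-AzureRmAlias -Module Az.Compute

# After migrating your scripts to the "Az" prefix, remove the aliases.
Disable-AzureRmAlias
```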

Az on CloudShell

If you are looking for a quick way to test out the new Az modules interactively, Cloud Shell now ships with them. Cloud Shell is a great option because it runs everywhere and doesn't require an install, so you can keep your environment untouched while trying out all the new Az features. Simply navigate to the PowerShell tab within a Cloud Shell session, and all Az modules will be automatically available in your session.

Try it out

We want the new Az module to enable you to be more productive and efficient in managing Azure from any platform or operating system. Therefore, we would like to invite you to try out the new cross-platform module and we look forward to getting your feedback, suggestions or issues via the built-in “Send-Feedback” cmdlet, which is available in both AzureRM and the new Az module. Alternatively, you can always open an issue in our GitHub repository.
Source: Azure