Native Python support on Azure App Service on Linux: new public preview!

We’re excited to officially announce the public preview of built-in Python images for Azure App Service on Linux, a much-requested feature from our customers. Developers can get started today deploying Python web apps to the cloud, in a fully managed environment running on top of the Linux operating system.

This new preview runtime adds to the growing list of stacks supported by Azure App Service on Linux, which also includes Node.js, .NET Core, PHP, Java SE, Tomcat, and Ruby. With the choice of Python 3.7, 3.6, and (soon) 2.7, developers can get started quickly and deploy Python applications to the cloud, including Django and Flask apps, and leverage the full suite of features of Azure App Service on Linux. This includes support for deployments via “git push” and the ability to deploy and debug live applications using Visual Studio Code (our free and open source editor for macOS, Linux, and Windows).

When you use the official images for Python on App Service on Linux, the platform automatically installs the dependencies specified in the requirements.txt file. Additionally, it detects common Flask and Django application structures, hosts them using gunicorn, and includes the necessary modules for connecting to Azure Database for PostgreSQL.
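As a minimal sketch, a Flask app's requirements.txt might contain nothing more than the framework itself (the version pin below is illustrative, not prescriptive); the platform installs it at deployment time and serves the detected application with gunicorn:

Flask==1.0.2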

While the underlying infrastructure of Azure App Service on Linux has been generally available (GA) for over a year, we’re releasing the Python runtime in public preview for now, with GA expected in a few months. In addition to using the built-in images, Python developers can deploy their applications using a custom Docker container on Web Apps for Containers.

Learn more about Python on Azure and Visual Studio Code

Carlton Gibson, Django Software Foundation fellow and core maintainer of the Django project, recently joined our developer advocate Nina Zakharenko for a video series on using Python/Django on Visual Studio Code, Azure, and Azure DevOps.

The full walkthrough is available on the Microsoft + Open Source blog.

Next steps

Try out Python on App Service on Linux using the Azure CLI (a sample command sequence is sketched below).
Follow the get-started experience using Visual Studio Code.
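As a rough sketch of the CLI path (resource names are placeholders, and the exact runtime string can vary by CLI version and region):

az group create --name myResourceGroup --location westus
az appservice plan create --name myPlan --resource-group myResourceGroup --sku B1 --is-linux
az webapp create --name my-python-app --resource-group myResourceGroup --plan myPlan --runtime "PYTHON|3.7"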

Let us know your feedback!

Azure Monitor for containers now generally available

We are happy to announce that Azure Monitor for containers is now generally available. Azure Monitor for containers monitors the health and performance of Kubernetes clusters hosted on Azure Kubernetes Service (AKS). Since the launch of the public preview at Build in May 2018, we have seen a lot of excitement from customers. Customers love that they can enable monitoring as soon as they create an AKS cluster and get all the monitoring telemetry in a centralized location in Azure, without having to log in to containers or rely on other tools. Since the public preview, we have been adding capabilities and refining the experience based on your feedback. Let’s look at some of the recent changes.

Multi-cluster view – You often have multiple AKS clusters to manage. Wouldn’t it be great to view and manage all your clusters together? The multi-cluster view discovers all AKS clusters across subscriptions, resource groups, and workspaces, and provides a health roll-up view. You can even discover clusters that aren’t being monitored and, with just a few clicks, start monitoring them.

Drill down further into an AKS cluster with the performance grid view – To investigate further, you can drill down to the performance grid view, which shows the health and performance of your nodes, controllers, and containers. From the node view tab, you can easily spot a noisy-neighbor issue on a pod and drill further to see the controller it is part of. You can then inspect the controller’s limits, requests settings, and actual usage to determine whether you have configured the controller correctly, and continue investigating by looking at the Kubernetes event logs associated with that controller.

Live debugging – We all know the importance of verifying that your application is working as expected, especially after you deploy an update. With live logs you get a real-time stream of your container logs directly in the Azure portal. You can pause the live stream and search within the log file for errors or issues. Unlike Azure Monitor logs, the live stream data is ephemeral and is meant for real-time troubleshooting.

Onboarding – In addition to the Azure portal, we have added more ways for you to automate onboarding to Azure Monitor for containers.

Azure CLI and ARM template – With the add-on option you can onboard Azure Monitor for containers with a single command. The command will automatically create the default Log Analytics workspace and deploy the agent for you.

For new AKS clusters:

az aks create --resource-group myAKSCluster --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys

For existing AKS clusters:

az aks enable-addons -a monitoring -n MyExistingAKSCluster -g MyExistingAKSClusterRG 
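If you want the add-on to use an existing Log Analytics workspace instead of the automatically created default, the same command accepts a workspace resource ID. A hedged sketch (the resource ID below is a placeholder):

az aks enable-addons -a monitoring -n MyExistingAKSCluster -g MyExistingAKSClusterRG --workspace-resource-id "/subscriptions/<subscription-id>/resourceGroups/<workspace-rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"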

You can also enable monitoring for your containers by using an Azure Resource Manager (ARM) template. To learn more, please review the detailed instructions for onboarding using the Azure CLI and ARM templates.
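As a rough sketch of what an ARM-template onboarding deployment might look like from the CLI (the template and parameter file names below are placeholders for files you author per the documentation):

az group deployment create --resource-group MyExistingAKSClusterRG --template-file existingClusterOnboarding.json --parameters @existingClusterParam.json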

Terraform – Similar to the ARM template, if you are using Terraform to deploy AKS clusters, you can enable monitoring right from the template. To learn more, read the Terraform documentation on setting up an AKS cluster, the Log Analytics solution, and the workspace.

We would like to conclude with some inspiring words from one of our customers, Hafslund, a Nordic power company, with whom we recently published a case study:

“We found it easy to get Azure Monitor up and running for containers. The metrics and charts right out of the Monitor box are perfect to help us quickly tune our clusters and services and resolve technical issues.”

– Ståle Heitmann, CTO, Hafslund Nett AS

To learn more about Azure Monitor for containers, read our documentation, “Azure Monitor for containers overview.” Thank you for your feedback during the public preview and we look forward to your continued support as we add more exciting features and capabilities to Azure Monitor for containers.

KubeCon North America 2018: Serverless Kubernetes and community led innovation!

Welcome to KubeCon North America 2018, and welcome to Seattle. It’s amazing to get the chance to welcome you to my hometown, the site of Kubernetes’ birth. It was barely five years ago that Joe, Craig, and I had the first small ideas and demos that eventually turned into this amazing project and community. I’m honored that all of you have chosen, over the years, to invest your time, energy, and enthusiasm in Kubernetes. Whether this is your first KubeCon or you’ve been here since the first one in San Francisco four years ago, welcome!

For the Azure Kubernetes team, KubeCon is especially exciting. It’s been a busy and fulfilling year: Azure Kubernetes Service (AKS) has been the fastest-growing service in the history of Azure Compute, and that’s been quite a ride! With KubeCon here, it’s a great chance to meet up with our customers and community collaborators to celebrate all of the incredible things that have happened.

For the Azure Kubernetes Service, we started with the journey of “how to make Kubernetes easier for our customers.” For example, by letting Azure take care of the deployment, operations, and management of the Kubernetes APIs and leveraging integrated tools, Maersk was able to free its engineers to focus on the things that make the most business impact. Furthermore, by taking advantage of the fully managed runtime environment provided by AKS, Siemens Healthineers realized shorter release cycles and achieved its desired continuous delivery approach in a highly regulated environment.

We're seeing more and more Java customers port their existing Java application stacks to AKS with little or no change. Xerox, for example, was able to run its Java apps in containers with no code modifications and leveraged Helm charts to automate customer onboarding. As a result, it reduced the provisioning time for its DocuShare Flex Content Management platform from 24 hours to less than 10 minutes, accelerating sales and customer onboarding.

While we’re discussing the Azure Kubernetes Service, it’s great to see more and more Azure services bring their strengths to Kubernetes. Here at KubeCon, we’re announcing the general availability (GA) of Azure Monitor for containers. Azure Cognitive Services has also announced containerization of its cognitive APIs, allowing users to take advantage of core cognitive technology on-premises, at the edge, or wherever their data lives. For the Azure Kubernetes team, it’s been an exceptionally busy month, starting with the announcement, at KubeCon Shanghai, of AKS in Azure’s China region. Just last week in Las Vegas, we announced the public preview of AKS virtual nodes, which, together with Azure Container Instances (ACI), helps customers realize and take advantage of a serverless container infrastructure.

But honestly, the service that we build is only one (albeit very important) piece of what we work on as a team. Of equal importance is the work we do in the open source community, collaborating with others to develop novel solutions to our customers’ problems. With help from the community, like the great folks on the Open Policy Agent project, we launched an open source policy controller for Kubernetes. This policy agent installs on Kubernetes clusters anywhere and can provide enterprises with assurances that developers will successfully build reliable and compliant systems. We are also announcing the Osiris open source project, which enables efficient “scale-to-zero” for Kubernetes containers. This technology can power Functions as a Service, or any programming paradigm where you need rapid scale-up in response to customer traffic.

With Docker, Bitnami, HashiCorp, and others, we’ve announced the Cloud Native Application Bundle (CNAB) specification. CNAB is a new distributed-application package format that combines Helm or other configuration tools with Docker images to provide complete, self-installing cloud applications. To see what CNAB can do for you, imagine being able to hand out a USB key to KubeCon attendees that could install your complete application. Finally, we’re celebrating the adoption of the Virtual Kubelet project into the CNCF sandbox, as we continue to work with VMware, AWS, hyper.sh, and others in the community to make nodeless Kubernetes a reality.

At KubeCon Shanghai, I talked about my thoughts on serverless Kubernetes and the evolution of cloud native development. It’s a future driven by our mission of “Kubernetes for Everyone.” This includes reducing the complexity of Kubernetes operations by running the API for you in AKS and developing ‘nodeless’ Kubernetes with virtual nodes. It also means working on tools that make Kubernetes a more integrated, easy-to-use experience, like Draft and the Kubernetes extension for Visual Studio Code, which has been installed by nearly 175 thousand people.

At KubeCon North America, I’m taking off my forward-looking cap and instead talking about the development and maintenance of the Java, .NET, TypeScript, and Python clients for Kubernetes. Whether you’re interested in talking about the future of cloud computing or in adding features like port-forwarding to the TypeScript client, I’ll be around the conference all week at the Azure booth and in the hallway track.

When it comes to explaining Kubernetes, one of my favorites is the Children’s Illustrated Guide to Kubernetes. For this KubeCon, I’m incredibly excited to announce that Microsoft is donating the likeness of Phippy and all of your favorites from the book to the CNCF. To celebrate, we’re sharing a special second episode of the Children’s Illustrated Guide to Kubernetes, so you can learn about the core concepts of Kubernetes in a fun way!

Whether you’re joining us in Seattle for KubeCon or watching the talk streams from afar, we’ve got some great resources to get you started with Kubernetes, including the recently published best practices we’ve gathered from our customers and a webinar I will be sharing on structuring Kubernetes projects in production.

Welcome to Seattle!

–brendan

A hybrid approach to Kubernetes

We’re excited to see everyone at KubeCon this week! We’ve been working with our customers to understand how they’re thinking about Kubernetes and what we can do to make it easier for them. Azure Stack unleashes new hybrid capabilities for developing applications: you design, develop, and maintain your applications just as you do with Azure, and you can deploy to any of the Azure clouds. Your application’s location becomes a configuration parameter rather than a design constraint.

So how exactly does Azure Stack work with containers? Containers and hybrid cloud together can help you solve many problems. You can create a set of apps in containers using the languages you love, like Node.js, Python, Ruby, and many others, and take advantage of the wide array of tooling available, including Visual Studio Code. You can then deploy your container or set of containers to whatever mix of environments meets your users’ requirements. For instance, you can keep your sensitive data local in Azure Stack while accessing functionality such as Azure Cognitive Services in global Azure. Or you can develop your apps in global Azure, where your developers are, and then deploy the containerized apps to a private cloud in Azure Stack that becomes completely disconnected on board a submarine. The possibilities are endless.

Azure Stack allows you to run your containers on-premises in much the same way as you do in global Azure. You can choose the best place for your containers depending on data gravity, data sovereignty, or other business needs. Containers let you use Azure services from a host running on-premises and take advantage of the secure infrastructure, integrated role-based access control (RBAC), and seamless DevOps tools, allowing you to create a single pipeline targeting multiple Azure clouds. Your containers and supporting services are hosted in a secure infrastructure that integrates with your corporate network.

The Kubernetes Marketplace item, available in preview for Azure Stack, is consistent with Azure: because the template is generated by the Azure Container Service Engine, the resulting cluster runs the same containers as AKS. It also complies with Cloud Native Computing Foundation (CNCF) standards.

Your developers can also use the OpenShift Container Platform on Azure Stack. OpenShift provides a consistent container experience across Azure, Azure Stack, bare metal, Windows, and RHEL, bringing together Microsoft and Red Hat developer frameworks and partner ecosystems, as previously announced in September.

When you take your containers across Azure, Azure Stack, and the Azure sovereign clouds, you should also consider that your application architecture likely depends on more than containers: it likely depends on numerous resources with different, specific versions. To make this easier to manage, we recently announced Cloud Native Application Bundles, a new open source package format specification created in close partnership with Docker and broadly supported by HashiCorp, Bitnami, and more. With Cloud Native Application Bundles, you can manage distributed applications using a single installable file, reliably provision application resources in different environments, and easily manage your application lifecycle without having to use multiple tools.

This week is KubeCon and if you are attending you can see Kubernetes and Azure Stack in action in the Expo Hall. Please drop by our booth #P18 to see great demos of the technologies I mentioned in this post.

I hope you find the information in this post useful! Stay tuned for new topics around developing hybrid applications and feel free to follow me on Twitter.

To learn more about hybrid application development, read the previous post in this series: "What you need to know when writing hybrid applications."

Azure Marketplace new offers – Volume 26

We continue to expand the Azure Marketplace ecosystem. During September and October, 149 new consulting offers successfully met the onboarding criteria and went live. See details of the new offers below:

Consulting Services

 
1-Day Big Data Cloud Migration Workshop: This one-day workshop from Hashmap is for business and technical leaders and key stakeholders, and it's held at the client's facility. Receive guidance and assistance in defining services and the overall solution.

 
2-Week Azure Assessment: Assess your current datacenter environment (up to 100 VMs) and receive optimization recommendations for a cloud migration. An Azure architect from Catapult Systems will review the results of an assessment tool and provide recommendations.

 
3 Day Cloud Adoption Workshop: Black Marble's on-site cloud adoption workshop is free-form and wide-ranging, usually covering technology and architecture; development, deployment, and testing; monitoring and maintenance; security and compliance; and more.

 
3 Day Integration Health Check: A Black Marble consultant will spend one day with the customer, assessing the integration solution and producing a report based on the findings. Two days will be spent off-site to document system performance and make recommendations.

 
3-Day Office 365 Security Assessment: This assessment by ProServeIT is for any business stakeholder responsible for customer data and security. It includes a kickoff presentation, a questionnaire, and a presentation with a list of Office 365 security recommendations.

 
5 Day Azure DevOps Migration Workshop: A Black Marble consultant will deliver a two-day on-site workshop to discuss options for migrating on-premises TFS to Azure DevOps (formerly VSTS), followed by a three-day report covering discussions, recommendations, and next steps.

 
Active Directory Health Check: 5-Day Assessment: Tallan will work with you and your team to make documented suggestions to improve the administration, health, monitoring, auditing, alerting, backup, and replication of Active Directory.

 
AgileDataCenter: 2-Week Assessment: The Azure Readiness Assessment Planning offer is intended to assist a company's internal IT department in preparing for a datacenter migration to Microsoft's Infrastructure-as-a-Service platform.

 
AI and Insights Ideation Workshop: Catapult Systems will lead you through a co-creation event. Our AI ideation workshops guide organizations through the art of the possible so that we build a practical and actionable AI adoption road map.

 
AI Innovation: 2-Week Workshop: An innovation sprint helps clients think through the art of the possible using AI technologies and develop a set of ideas and a business case that can be rapidly built and tested. A small BJSS team works with client teams to answer critical questions.

 
ANSYS Cloud HPC Proof of Concept: This two-week proof of concept is designed to demonstrate how ANSYS can be configured and scaled on Azure HPC. Azure HPC is a scalable modern infrastructure solution that can expand or shrink based on your engineers’ needs.

 
API Economy – 1-Week Proof of Concept: For many organizations, APIs are now a critical component of solutions that impact the bottom line. The API Economy Platform Blueprint takes away the risk and steep learning curve of bringing an API-based proposition to market.

 
Application Modernization: 2 Week Trial Workshop: This engagement is designed to demonstrate the ease and value that Microsoft Azure and Docker containers deliver as a core part of an application modernization strategy.

 
Artificial Intelligence: 1-Hour Briefing: This briefing will articulate the benefits of a product-centric approach to designing and building AI solutions, from two-week proof-of-concept sprints and the rapid delivery of testable alphas to fully integrated enterprise-scale solutions.

 
Ask the Azure Expert – 3-Day Assessment: Ask the Azure Expert is your easy, fast, and free way to receive technical recommendations from the Topcoder community. Topcoder gives you on-demand access to a global network of developers ready to tackle your Azure-related questions.

 
Automated Lead Generation with AI Implementation: Week 1 will involve requirements engineering, a workshop, model training, infrastructure, and implementation of the solution. Week 2 will involve quality assurance, documentation, training, and change management.

 

Aztek CloudCare 1~30 Days Implementation: Aztek CloudCare delivers an array of support services for Microsoft Azure. Aztek CloudCare will supply business continuity, let you manage your budget wisely, and enable your organization to keep up with the latest cloud technology.

 

Azure – SQL Database, Disaster Recovery Assessment: CodeCenters International will review your SQL server, Azure database, and/or Analysis Services solution and provide an end-to-end backup and disaster recovery plan, including timelines and Azure cost projections.

 

Azure Active Directory: 1/2 Day Virtual Workshop: In this one-on-one workshop from 360 Visibility, you will gain an understanding of what Azure Active Directory is and how it can provide a more secure solution for your organization.

 
Azure ASM to ARM Planning and Migration: 2-Weeks: Presidio's team will review every aspect of your Azure ASM Resource Pool and produce a plan to provide a seamless transition to ARM-based solutions.

 
Azure Backup – 5 Days Assessment: This on-premises assessment by Programmer's will define a backup/archiving plan for a Microsoft Azure storage environment.

 
Azure Backup and Site Recovery: 1/2 Day Workshop: This one-on-one session from 360 Visibility will turn a complex subject into a simple, comprehensive, and robust solution that can be customized to meet your needs.

 
Azure Backup Recovery Planning and Delivery: 2-Weeks: Presidio's team will review your on-premises VMware and physically hosted application pool and produce a plan to provide advanced backup, disaster recovery, and business continuity plan failover solutions.

 
Azure Cloud Introduction: 1/2 Day Virtual Workshop: In this one-on-one session, 360 Visibility will develop a plan to make your transition to Microsoft Azure as easy as possible while mitigating potential challenges so you can maximize your return on investment.

 
Azure Data Center Migration: 1-Hour Briefing: This briefing from Communication Square will introduce Microsoft Azure and detail how you can benefit from it, covering cloud security, backups, disaster recovery, and a look at different industries using Azure.

 
Azure Data Center Migration: 2-Day Assessment: Communication Square will assess your environment and the cloud migration processes that fit the needs of your organization, then devise a road map for the migration.

 
Azure DevOps Hackathon: 3-Day Workshop: This workshop will cover the latest Azure technologies and foster team building. Teams will be provided quick-start materials so they can dive right into the challenge rather than spending time provisioning resources.

 
Azure Financial Management: 3-Day Workshop: This 3-day workshop by Catapult Systems is designed to help customers put processes in place for Azure financial awareness and optimization, departmental chargebacks, and account and subscription resource tags.

 
Azure Governance Solution: When paired with governance and security measures established and maintained via a proven governance model, the benefits of the public cloud can be fully realized.

 
Azure Governance: 1-Wk Workshop: By the end of this workshop by Thomas Duryea Logicalis, clients will understand their security profile, cost savings/management strategies, billing and reporting, and optimization through automation.

 
Azure Governance: 2-Day Workshop: This workshop by BrainScale will help you understand your governance requirements, presenting overall cloud governance pillars, covering compliance and billing, and developing an implementation plan.

 
Azure Governance: 3-Day Workshop: This three-day workshop by Catapult Systems is designed to help customers understand the role of governance in provisioning and managing Azure subscriptions and services.

 
Azure HPC Linux Cluster Implementation: 2-Weeks: Presidio's team will review your on-premises VMware and physically hosted application pool and produce a plan to provide high-performance computing solutions at a lower cost than on-premises infrastructure alone.

 
Azure Hybrid Cloud 5-Day Proof of Concept: During this proof-of-concept engagement, Meritum Cloud will carry out a discovery workshop and build an Azure tenant for the customer, configuring the environment based on security best practices.

 
Azure IaaS 10-Day Workshop: Whether you are new to the cloud, looking to enhance productivity, or wanting to streamline costs, this 10-day workshop by Tallan will help you understand what Azure Infrastructure-as-a-Service can do for you.

 
Azure IaaS Proof of Concept: 3-Day Engagement: In this engagement by Foundation IT, you will gain a quick and focused understanding of the benefits that Microsoft Azure Infrastructure-as-a-Service can bring to your organization.

 
Azure Migration Evaluation: Half-Day Workshop: This free workshop by Sysgain will educate business and IT professionals about migrating workloads to Azure. The workshop will be delivered remotely by Sysgain cloud experts and will be customized to each customer's needs.

 
Azure Migration Plan and Funding: 4-Hour Assessment: In this free engagement, Xerillion will assess your on-premises servers and networking, then create a proposal for a migration or a proof-of-concept project that can qualify for funding by Microsoft.

 
Azure OMS and Log Analytics Implementation: 2-Weeks: Presidio's team will review your Azure subscription and produce a plan to provide advanced automation, orchestration, logging analytics, and cloud-based network operation center solutions.

 
Azure Readiness Workshop and Report: LANET will evaluate the customer’s environment to determine a solution and provide an estimate of cost and timing, then present a workshop online or on-site and follow up with an assessment report.

 
Azure Resilience: 3 Days Proof of Concept: If you are a U.K. company looking to become certified in the British Standards Institute’s ISO 22301 for Business Continuity Management, then Nero Blanco's Azure Resilience proof of concept is well worth reviewing.

 
Azure Resilience: Implementation > 1 Week: This Azure Resilience implementation from Nero Blanco leverages our expertise in Azure Site Recovery, Azure Storage, and Azure Backup. We will implement a disaster recovery solution for your workloads.

 
Azure Site Recovery Planning and Delivery: 2-Weeks: Presidio's team will review your on-premises VMware and physically hosted application pool and produce a plan to provide advanced replication, high availability, and business continuity planning solutions.

 
Azure Site Recovery to Azure: 10-Wk Implementation: Azure Site Recovery to Azure replicates workloads to Azure without adding secondary datacenter costs. This implementation by Infront Consulting Group will enable true business continuity and disaster recovery.

Azure SQL Cluster and HA: 2-Weeks: Presidio's team will review your on-premises SQL environment and produce a plan to provide advanced replication, high availability, and business continuity planning solutions at a lower cost than on-premises infrastructure.

 
Azure Visual Studio Dev/Test: 5-Day Workshop: Ready to move to the cloud? During this workshop, Tallan experts will discuss virtual networks, virtual machines in Azure, load testing, coded UI tests, automated deployment scenarios, and more.

 

AzureFactory Cloud Migration Assessment 1-week: The AzureFactory Cloud Migration Assessment by Cubesys leverages the power of software and AI to map your journey to Azure and provide detailed information for your optimized migration.

 
AzureFactory DR as a Service 3-week Implementation: AzureFactory Disaster Recovery-as-a-Service (DRaaS) is built on Microsoft Azure. It provides a fast and reliable way to protect your critical workloads. Let Cubesys take care of your disaster recovery plan.

 
AzureFactory Foundations 1-Week Implementation: AzureFactory Foundations by Cubesys gives you complete control of your Azure subscription by providing out-of-the-box governance and connectivity.

 
AzureFactory Foundations Assessment 3-Days: The AzureFactory Foundations Assessment by Cubesys delivers a vital assessment of your Azure foundations. You'll be provided with a clear report indicating the current state and targets for remediation.

 

AzureFactory Migration 4-Week Implementation: Cubesys’ AzureFactory Migration allows you to migrate to Azure, build secure cloud foundations, and optimize your workloads.

 
Bitnami Stacksmith 5-Day Proof-of-Concept: In five days, Bitnami will package and deploy your application to Azure and teach you how to maintain your app to keep it up to date and secure.

 
BizTalk Data Migration to Azure+PowerBI: 3-Day PoC: Migrate your BizTalk operations data from desktop tools to Azure and empower your users to view the status of business processes and interactions in real time without custom programming.

 

Bot in a Day: 1-Day Workshop: Tallan’s Bot in a Day workshop will give developers training on the basics of building and understanding AI bot technology while giving upper management insight to the art of the possible.

 
Briefing- Azure On Ramp: This briefing by Catapult Systems will include configuring and monitoring Azure, optimizing costs, adhering to governance policies, and personalizing cloud best practices.

 

Broadband Asset Assessment: Tilson will develop a checklist of potentially useful broadband assets that may be present in the region, identifying which are present and where. Tilson uses ESRI ArcGIS and QGIS for its geographic information system work.

 
Build and Enable a Self-Service Data Culture: In one hour, Decisive Data will discuss how we can work with you to build and enable a self-service data platform and culture utilizing a match made in the cloud: Microsoft Azure plus Alteryx plus Tableau.

 
Business Intelligence Roadmap 4-week assessment: For decision-makers struggling to obtain information to make the best decisions, a business intelligence road map is a good way to take stock of your current situation and define your needs and objectives.

 
Capstone Windows Server Migration Workshop: This engagement by Capstone Consulting will demonstrate a modernization process designed to migrate legacy Windows Server and SQL Server applications to Azure.

 
Chat Bot Concept Workshop and Report: A designer and a developer at Black Marble will deliver a one-day workshop that will include an introduction to bots, a demo, a Q&A session, a creative session for identifying key requirements, and a road map of next steps.

 
Chatbot Proof of Concept with BotStack: Black Marble will explore the potential of bots, delivering a workshop, creating a high-level report, and providing a proof-of-concept bot application aligned with the requirements identified at the concept stage.

 
Citadel-IX: Citadel Group has a flexible model to meet your needs, and we can migrate Citadel Information Exchange (Citadel-IX) as an on-premises solution to Azure or do implementations directly into Azure.

 
Cloud Adoption Assessment: 3-week assessment: Eduserv's Cloud Adoption Assessment examines your IT services and business objectives against the benefits, costs, and risks of moving to the cloud. Eduserv will also outline a migration plan and a development road map.

 
Cloud Adoption: 1-Day Assessment: This one-day session by Tallan is part education, part brainstorm. Our cloud architects will work with you to identify your specific business needs and challenges, recommend next steps, and put a plan in place to avoid common adoption pitfalls.

 
Cloud Architectural Review: 8-day assessment: Eduserv's Cloud Architectural Review assesses your public cloud estate and operational processes against your business objectives from the perspectives of operations, security, reliability, performance, and cost optimization.

 
Cloud Assure ISV QuickStart: 3-Day Assessment: Grey Matter’s Cloud Assure ISV QuickStart service will help you, as an ISV or application builder, deploy your application to Microsoft Azure regardless of the existing deployment model.

 
Cloud IaaS Migration: 3-Day Workshop: This three-day workshop by Thomas Duryea Logicalis is for technical leaders and key stakeholders and is held on-site at the client’s facility. It will focus on Azure governance, migration, deployment, compatibility, and costs.

 
Cloud Infrastructure Transformation 4 Week Impl-UK: New Signature's Cloud Infrastructure Transformation will help you rapidly establish a secure, scalable virtual datacenter in Azure based on Microsoft best practices and New Signature’s extensive expertise.

 
Cloud Managed Services: 2-Week Briefing: Aricent's cloud offering provides a comprehensive set of services and in-house frameworks to efficiently deploy and migrate workloads from on-premises environments to the cloud.

 
Cloud Migration Cost Assessment: 6-Weeks: 2nd Watch’s six-week assessment simplifies and accelerates the path from on-premises to the cloud with instance rightsizing and total cost of ownership analysis using TSO Logic.

 
Cloud Readiness Assessment: 1-Day Evaluation: This service by Foundation IT provides a one-day review of the client's IT infrastructure and a report detailing an Azure migration approach and recommendations.

 
Cloudhouse Containers: 10 Day Migration Service: Cloudhouse's migration services help businesses prepare to move an application to the cloud. This 10-day engagement is for technical and business leaders and will be delivered partly on-site and partly via webinars.

 

Configuring and Managing Virtual Networks: This workshop by Dynamics Edge teaches IT professionals to create and manage virtual machines and to configure and manage Azure virtual networks (VNets). It also covers basic network configuration tasks.

 

Containers and DevOps – 1 Day Workshop: Spektra Systems' one-day training with a hands-on lab guides you through the process of building and deploying Docker images to the Kubernetes platform hosted on Azure Kubernetes Service.

 
CoPilot for Azure: As organizations adopt Azure at scale, CoPilot allows clients to continue optimizing their environments with the help of AHEAD's team of certified Azure engineers.

 
CST Cloud Consultation 1-Hour Briefing: This one-hour online briefing by UberCloud will teach you how Microsoft Azure can benefit your CST simulations. By running CST Studio Suite on Azure, you can take advantage of scalable, state-of-the-art hardware resources in the cloud.

 
Customer Service Intelligent Chatbots Implementation: In this engagement, adesso AG implements customer service chatbots. The benefits of digital customer service include constant availability, cost reductions, and integration into the Microsoft landscape.

 

Cybersecurity Assessment with GDPR – 2 weeks: This assessment by Halian will result in a comprehensive Microsoft Power BI report that will give you specific recommendations on where and how you can achieve a higher level of security and GDPR compliance.

 
Data Analytics: 15-Days Proof-Of-Concept: In this three-week engagement, Programmer's will explore one business scenario at your organization and leverage data science to improve results.

 
Data Estate Modernisation 5-Day Assessment: Satalyst will assess your data environment, document the current state, define a high-level future-state data platform environment, estimate the costs of the new environment, and deliver a data estate modernization guide.

 

Data Quality Health Check: 4-Week Assessment: Sign up for a four-week engagement with data governance architects from Hitachi Solutions, who will give you step-by-step, personalized guidance on operational and analytics master data for a given domain.

 
Data Science Operationalization-1 Week Implementation: With endjin, design and develop flexible, extensible, scalable, and multi-tenant polyglot data processing pipelines to power your intelligent solutions.

 
Data Strategy and Modernization :5-Day Assessment: Through a series of workshops and whiteboard sessions, CloudMoyo's experts will assess your data layer, ETL layer, and reporting/analysis interface to help you determine how your BI environment is performing.

 
DB Health Check – 1 Week Assessment: Ascent Technology will take an inventory of your database system and check key performance indicators to provide you with a comprehensive health report with suggestions for maintenance and architecture.

 
Demand for Broadband Assessment: In this assessment, Tilson will evaluate current and future demand for broadband services in the region. We will use Form 477 data to determine where broadband services are available and the network speeds advertised in those areas.

 
Deploying and Managing Virtual Machines: This one-day workshop is for Azure administrators and will help them manage the cloud services that span storage, networking, and compute capabilities, with a deep understanding of each service across the full IT lifecycle.

 

Deploying SCCM: 3-Week Implementation: Infront Consulting Group will deploy Microsoft System Center Configuration Manager (SCCM) and coach the customer (via informal knowledge transfer) on how to accomplish some basic tasks.

 

Designing a Data Platform Solution: This workshop by Dynamics Edge will compare Azure database options, identify data streaming options for large-scale data ingestion, and identify longer-term data storage options.

 
Designing an Infrastructure Strategy: This workshop will describe DNS IP strategies for virtual networks in Azure, compare connectivity options, distribute network traffic, and design a hybrid connectivity scenario between the cloud and an on-premises environment.

 
Designing for Deployment Migration and Integration: This workshop will cover deploying an ARM template to a resource group, authoring a complex deployment using Azure Building Blocks tools, and integrating an API or logic app with Azure API Management.

 

Designing for Identity and Security: In this workshop by Dynamics Edge, cloud solution architects will learn about security and identity management within the context of Azure and multiple Software-as-a-Service solutions.

 
Develop Azure Cognitive Services, Bot, and IoT Sol: In this workshop, Dynamics Edge will detail how to integrate Azure Cognitive Services, how to create bots using Bot Framework and Azure portal, and how to leverage Azure Time Series Insights, Stream Analytics, and IoT Hub.

 
Develop Azure Platform as a Service Solutions: This workshop covers Azure Service Fabric, Azure Media Services, Azure Kubernetes Service, creating web apps with Azure App Service, and managing bulk operations through the Batch Service API.

 
Develop for an Azure Cloud Model: This workshop by Dynamics Edge covers asynchronous processing, autoscaling, long-running tasks, distributed transactions, Azure Search, and how to ensure a solution meets performance expectations.

 
Develop for Azure Storage: In this workshop, Dynamics Edge will discuss developing solutions leveraging Azure Storage options, including blob, table, or file storage; Cosmos DB; relational databases; caching; and content delivery networks.

 

DevOps Acceleration Engine: 4-Month Delivery: The Sirrus7 DevOps Acceleration Engine is a tailored and seamlessly integrated DevOps pipeline coupled with in-depth consulting services to quickly get your enterprise shipping code with Azure-native output.

 
DevOps Assessment and PoC: BCS Technology experts will guide you through your DevOps journey, which includes setting up application pipelines, monitoring performance, utilizing insights, and adding manual and automated quality checks.

 
DevOps with OSS on Azure – 1 Day Workshop: With this one-day training and hands-on lab by Spektra Systems, learn about building a continuous integration/continuous delivery environment on Azure using your favorite open-source tools.

 
Digital Transformation: 1-Hour Briefing: Legacy technology can be the biggest barrier to digital transformation. This introductory briefing by BJSS will help customers plan their journey.

 
Discovery Detailed 15-Day Workshop: This collaborative workshop by Kiandra IT examines user experience and considers how your new software solution can provide a great user experience.

 
Discovery Lite 7-Day Workshop: Each of the activities in this Lite workshop are timeboxed and tailored to meet intended outcomes. From there, we’ll document and visualize the results from the workshop so that your team is empowered to make decisions on a path forward.

 
Discovery Standard 10-Day Workshop: With more time, the Discovery Standard workshop from Kiandra IT allows us to dig deeper, resulting in a more accurate, detailed outcome.

 

Docker 1-Day Workshop: By the end of this one-day course from Architech, developers will understand how to create, deploy, secure, and manage the lifecycle of Docker containers.

 

Enterprise Blockchain Deep Dive: 5-Day Assessment: Participants of this workshop by Envision Blockchain Solutions will learn about the business value and benefits of blockchain solutions. This deep dive defines both functional and technical requirements.

 
Enterprise Blockchain Immersion: 1/2 Day Workshop: This half-day workshop by Envision Blockchain Solutions will provide a relaxed guided tour of today’s blockchain and IoT technology, aiming to help clients find the "aha" moment they are seeking.

 
Fiber-to-the-Home Industry Overview Briefing: Tilson will provide an overview of the Fiber-to-the-Home (FTTH) industry, addressing dark and lit networks, network owners, consumer and enterprise markets, internet service providers, and incumbents vs. new entrants.

 
First Analytical Model 2-Week implementation: EXIA's First Analytical Model (FAM) solution includes the development of a personalized analytical model based on client data, enabling the optimization of a business process.

 
Free Azure Optimization Assessment: 5 Days: In this free assessment, Azure experts from The Henson Group will review (no tools) every aspect of your tenant and produce a list of recommendations to improve performance, lower costs, add availability, and improve security.

 
GP Migration to Azure: 1-Week assessment: In this engagement, Dynamic Consulting will migrate a Dynamics GP 2016 server from on-premises infrastructure to a virtual machine hosted on Microsoft Azure.

 
Hands-on Labs for Cloud Workshops: 1-Hour Session: Learn how to run successful trainings and events with hands-on lab environments for Microsoft cloud workshops. Spektra Systems' training expert will guide participants through the setup process and best practices.

 
Hybrid Cloud: 3-Day Implementation: Emm&mmE Informatica will deploy a hybrid solution to your on-premises environment and Azure subscription.

 
Implement Advanced Virtual Networking: This workshop by Dynamics Edge will teach IT professionals how to implement and configure Azure networking traffic distribution tools, including Azure Load Balancer, Azure Traffic Manager, and Azure Application Gateway.

 
Implement Azure Development Integration Solutions: This workshop will show participants how to integrate and manage APIs by using Azure API Management; how to configure a message-based integration architecture; and how to develop an application message model.

 
Implement Security in Azure Development Solutions: This workshop by Dynamics Edge details how authentication and authorization work in Azure, and how to implement secure data solutions with encryption, Azure Key Vault, and SSL and TLS communications.

 
Implementing and Managing Application: This workshop teaches IT professionals how to manage and maintain infrastructure for core web apps and services. Learn how Azure App Service is used as a Platform-as-a-Service and app service environment.

 
Implementing and Managing Storage: This one-day workshop will teach IT professionals about Azure storage solutions as well as basic data replication concepts and schemes. Azure Storage Explorer will be introduced.

 
Implementing Workloads and Security: This workshop for IT professionals will help them assess, plan, and implement a migration of on-premises resources and infrastructure to Azure. Azure Migrate and Azure Site Recovery on a Hyper-V will also be covered.

 
Intelligent Mail Management 3-Week Implementation: Intelligent Mail Management is an automated solution for the recognition and processing of letters. adesso AG will identify your business problems and domain model requirements, then implement its mail solution.

 
Intro to Machine Learning 1-Day Workshop: In this one-day workshop by Aware Group, participants will be introduced to the fundamentals of data science theory, tools, and practice. This course is suitable for anyone with basic computer skills.

 
ITSM360 for Financial Services: 1-Hour Workshop Demo: Get a one-hour demo of BDO Canada's ITSM360, a complete IT service management solution powered by Microsoft Office 365 and SharePoint Online. The service is tuned for the Canadian financial services industry.

 
Lift Oracle to Azure 2-weeks Implementation: This two-week engagement from Ascent Technology will lift and shift your Oracle database into Azure so you can achieve high performance and scalability while reducing your yearly Oracle licensing costs.

 
Linux Lift and Shift to Azure – 1 Day Workshop: By the end of this workshop from Spektra Systems, you will be able to configure Linux virtual machines and virtual machine scale sets in Azure for availability, storage, and connectivity.

 
Manage Identities: This workshop by Dynamics Edge teaches IT professionals how to use Azure Active Directory. Participants will also learn about Azure AD’s differences compared to Active Directory Domain Services and how to integrate it with Software-as-a-Service solutions.

 
Manage Subscriptions and Resources: This one-day workshop by Dynamics Edge will help IT professionals manage their Azure subscriptions, their cloud resources through user and group accounts, and their core monitoring tools.

 
Manufacturing website / e-commerce: 1/2-Day Workshop: This half-day workshop by Profound Works is aimed at manufacturing businesses focused on improving websites and e-commerce, from UX design to CMS choice to system integration.

 
Migrate on-Premise to Azure (SQL, SSAS, SSIS): A successful cloud migration plan starts with a clear, data-driven understanding of your infrastructure. This assessment by CodeCenters International will deliver a migration road map for your SQL databases, cubes, or SSIS.

 
Migrate Servers to Azure: This workshop by Dynamics Edge will teach IT professionals how to assess, plan, and implement a migration of on-premises resources and infrastructure to Azure Migrate, Azure Site Recovery, and Azure Site Recovery on Hyper-V.

 
Migrate TFS On-premise to Online VSTS – Azure: Cognosys will migrate your on-premises Team Foundation Server to Azure DevOps Services (formerly known as Visual Studio Team Services). Retain access to Team Foundation Server even after the move.

 

Migrate to Azure: 1-Day Implementation: In this implementation service, Intercept will confidently migrate your IT environment, infrastructure, applications, and workloads to Azure.

 
Migrate to Azure: 4-Wk Implementation: Techstern can conduct a smooth transition of your business-critical applications, websites, and Infrastructure-as-a-Service solutions to Azure.

 
Modern Enterprise Analytics Platform Assessment: Adastra will assess your current state, perform a gap analysis, establish a target architecture, and develop a road map toward a scalable, stable, secure, and high-performing enterprise analytics platform powered by Azure.

 
Retail website / e-commerce: 1/2 Day Workshop: This half-day workshop by Profound Works will help retail or fast-moving consumer goods businesses improve websites and e-commerce, from UX design to CMS choice to system integration.

 
Satalyst Enhanced Bot 2-Hour Workshop: Bots and AI present a huge opportunity for companies to streamline customer data collection. During the workshop, Satalyst experts will work with you to identify opportunities to use bots for automated information collection.

 
SC:Strategy: Hybrid Cloud Strategy and Business Case: Shaping Cloud's SC:Strategy provides customers with strategic direction and thought leadership on their journey to the cloud, building a business case for Microsoft Azure.

 
Security and Compliance Workshop – 1 Week Workshop: Risk management strategies require multiple layers of protection that limit the pathways that could result in a data loss or breach. This workshop by endjin will provide an end-to-end review of your application's risks.

 
Select the Appropriate Azure Technology Dvlpt Sol: This workshop by Dynamics Edge covers Azure architecture, design, and connectivity patterns, and it will help you choose the right storage solution for your development needs.

 
SMB Ascend 1 week assessment: For IT directors considering migrating to the cloud, EXIA’s SMB Ascend is a solution that lets you quickly obtain a portrait of the different migration scenarios, the related costs and savings, and the support options available.

 
Snowflake on Azure Consultation: 1-Hour Briefing: In this briefing, Decisive Data will cover a variety of topics related to utilizing Snowflake on Azure, including data migration, Snowflake data sharing, data warehousing, ETL/ELT, and best practices.

 
Software Project Discovery 1-Day Workshop: Find out exactly what type of software solution your business needs with this collaborative discovery workshop from Kiandra IT.

 
SQL Platform Modernization 10 Day Assessment: Adastra will apply an ideal migration strategy while taking into account security, network sensitivity, configuration, and backup recovery.

 
SQL Server Modernization – 1 Week Assessment: Trianz will help you migrate your legacy SQL Server database to a modern Azure database under a fixed-fee engagement model, a predefined project plan, and tried-and-tested methodologies.

 
Stratiform Azure Scaffold Offer 5-Day Assessment: In Stratiform's Azure Scaffold engagement, our experts will help you deploy your Azure environment, taking you from design to implementation while supplying you with best practices to optimize the environment.

 
Stratiform Identity Management: Is identity management restricting your move to the cloud? PCM Canada's identity management specialists can help. Give us five days, and we'll give you a secure path to the cloud.

 
TFS Migration from OnPremise To Azure IaaS: Cognosys will migrate Team Foundation Server from the client’s on-premises environment to Azure Infrastructure-as-a-Service.

 
Understanding Cloud Architect Technology Solutions: This workshop by Dynamics Edge will teach IT professionals how operations are done in parallel/asynchronously, how an enterprise system must be resilient when failures occur, and how deployments can be automated.

 
Understanding Your Users 5-Day Workshop: Through a series of user group sessions, Kiandra IT aims to deepen your understanding of your users, their motivations, their needs, and their goals.

Windows 2008 Azure Migration 4-Week Implementation: Don't let your infrastructure and applications go unprotected. The Cubesys team is here to help you migrate to Azure for greater security, performance, and innovation.

 
Windows Server/SQL 2008 and 2008R2: 2-day Workshop: This workshop from Thomas Duryea Logicalis is for technical leaders and is held at the client’s facility. It will identify workloads that are suitable to migrate and provide an understanding of cost-saving strategies.

Your Business Reviewal 5-Day Workshop: Kiandra IT will work closely with you to review your current digital offerings and processes, identifying roadblocks, capabilities, and opportunities.


Automatic performance monitoring in Azure SQL Data Warehouse (preview)

Monitoring and managing the performance of your data warehouse is critical to the overall health of your data estate. As data volumes and query velocity increase, tracking query metrics such as usage frequency, resource consumption, and regressions becomes essential to your ability to efficiently draw meaningful insights from your data.

To increase your efficiency, we’re excited to reveal the preview of Query Store for Azure SQL Data Warehouse for both our Gen1 and Gen2 offers. Query Store is designed to help you troubleshoot query performance by tracking queries, query plans, runtime statistics, and query history, so you can monitor the activity and performance of your data warehouse. Query Store is a set of internal stores and Dynamic Management Views (DMVs) that allow you to:

Identify and tune top resource consuming queries.
Identify and improve ad hoc workloads.
Evaluate query performance and impact to the plan by changes in statistics, indexes, or system size (DWU setting).
See full query text for all queries executed.

The Query Store contains three actual stores: a plan store for persisting execution plan information, a runtime stats store for persisting execution statistics, and a wait stats store for persisting wait statistics. These stores are managed automatically by SQL Data Warehouse, which keeps an unlimited number of queries stored over the last 7 days at no additional charge.

Enabling Query Store

Enabling Query Store is as simple as running an ALTER DATABASE T-SQL statement:

ALTER DATABASE [Database Name] SET QUERY_STORE = ON;

Note: You can disable Query Store by running the same ALTER DATABASE command and specifying OFF.
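For example, using the same placeholder database name as above:

ALTER DATABASE [Database Name] SET QUERY_STORE = OFF;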

Finding the full text for any query

With Query Store, you can retrieve the full text of any query executed over the last 7 days by using the sys.query_store_query and sys.query_store_query_text DMVs.

SELECT
q.query_id
, t.query_sql_text
FROM
sys.query_store_query q
JOIN sys.query_store_query_text t ON q.query_text_id = t.query_text_id;

The results show the query_id and the text of the query being executed.

Finding your top executing queries

When enabled, Query Store tracks all query executions. On a busy data warehouse, you may want to look at the top queries by execution count. Using the Query Store views, we can get the execution counts for the 10 most frequently executed commands.

SELECT TOP 10
q.query_id [query_id]
, t.query_sql_text [command]
, SUM(rs.count_executions) [execution_count]
FROM
sys.query_store_query q
JOIN sys.query_store_query_text t ON q.query_text_id = t.query_text_id
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
GROUP BY
      q.query_id
, t.query_sql_text
ORDER BY
3 DESC;

Finding the execution times for a query

Query Store also gathers runtime statistics to help you focus on queries with high variance in execution time. Using the sys.query_store_plan and sys.query_store_runtime_stats DMVs, we can gather the average, minimum, and maximum durations for a given query:

SELECT
q.query_id [query_id]
, t.query_sql_text [command]
, rs.avg_duration [avg_duration]
, rs.min_duration [min_duration]
, rs.max_duration [max_duration]
FROM
sys.query_store_query q
JOIN sys.query_store_query_text t ON q.query_text_id = t.query_text_id
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
WHERE
q.query_id = 10
AND rs.avg_duration > 0;

Finding the queries with the highest variance in execution

WITH RawData AS
(
SELECT
q.query_id [query_id]
, t.query_sql_text [command]
, rs.avg_duration [avg_duration]
, rs.min_duration [min_duration]
, rs.max_duration [max_duration]
, (((rs.max_duration * 1.0)/ (rs.min_duration * 1.0)) - 1) * 100   [variance_pct]
FROM
sys.query_store_query q
JOIN sys.query_store_query_text t ON q.query_text_id = t.query_text_id
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
WHERE
rs.min_duration > 0
)
SELECT
*
FROM
RawData
ORDER BY
variance_pct DESC;

Next steps

Query Store is available in all Azure regions with no additional charge. Azure SQL Data Warehouse continues to lead in the areas of security, compliance, privacy, and auditing. For more information, refer to the whitepaper, “Guide to enhancing privacy and addressing GDPR requirements with the Microsoft SQL platform,” on Microsoft Trust Center, or our documentation, “Secure a database in SQL Data Warehouse.”

For more information on Query Store in Azure SQL Data Warehouse, refer to the article, “Monitoring performance by using the Query Store,” and the Query Store DMVs, such as sys.query_store_query.
For feature requests, please vote on our UserVoice.
To get started today, create an Azure SQL Data Warehouse.
To stay up-to-date on the latest Azure SQL Data Warehouse news and features, follow us on Twitter @AzureSQLDW.

Quelle: Azure

Cloud Commercial Communities webinar and podcast update

Welcome to the Cloud Commercial Communities monthly webinar and podcast update. Each month the team focuses on core programs, updates, trends, and technologies that Microsoft partners and customers need to know to increase success using Azure and Dynamics. Make sure you catch a live webinar and participate in live Q&A. If you miss a session, you can review it on demand. Also consider subscribing to the industry podcasts to keep up to date with industry news.

Happening in December

Webinars

Transform Your Business with AI at Microsoft

December 4, 2018 at 11:00 AM Pacific Time

Explore AI industry trends and how the Microsoft AI platform can empower your business processes with Azure AI Services including bots, cognitive services, and Azure machine learning.

Azure Marketplace and AppSource Publisher Onboarding and Support

December 11, 2018 at 11:00 AM Pacific Time

Learn about the publisher onboarding process, best practices for working through common blockers, and the support resources available.

Build Scalable Cloud Applications with Containers on Azure

December 17, 2018 at 1:00 PM Pacific Time

Overview of Azure Container Registry, Azure Container Instances (ACI), Azure Kubernetes Services (AKS), and release automation tools with live demos.

Podcasts

Blockchain, Artificial Intelligence, Machine Learning – what does it mean for healthcare?

December 11, 2018

David Houlding, a Microsoft Principal Healthcare Program Manager, discusses topics such as Blockchain, Artificial Intelligence, and Machine Learning as they impact healthcare.

Real world insights working with Machine Learning projects

December 17, 2018

Jess Panni and David Starr share insights learned from machine learning projects and the use of Machine Learning Studio to get actionable insights from the data produced.

Recap for November

Webinars

Get Started with Azure Applications: Solution Template Offer on Azure Marketplace

November 7, 2018

Learn more about publishing a solution in Azure Marketplace. With Azure Applications, publishers can automate the provisioning of one or more VMs/Azure Services using Azure Resource Manager, provision networking and storage resources, and more.

How Barracuda Reached Enterprise Customers Through Private Offers on Azure Marketplace

November 13, 2018

Learn about Private Offers on Azure Marketplace, which helps publishers create exclusive offers for their customers, offer customized software and terms, and run limited beta releases. Also learn how Barracuda Networks (a Microsoft partner) leveraged Private Offers to win larger deals and optimize their procurement process.

Check out more Cloud + AI events and join in the discussions in the C+AI partner community.

Podcasts

IoT with Streaming Data and Analytics and a Little Design Thinking with Element

This IoT episode with Element ranges from oil and gas to Raspberry Pi, real-time streaming analytics, and even design thinking.

SKU Assortment using AI with experts Neal Analytics

Learn how retail and consumer goods companies are improving product availability by leveraging AI and ML.

The digital transformation of insurance with Nick Leimer

Nick Leimer from Microsoft brings us an astounding view into the digital transformation occurring inside the insurance industry.

Connecting IoT data with artificial intelligence at scale

In this episode of the podcast, we visit with Hari Menon and Diego Tamburini about the intersection of IoT and artificial intelligence.

Emerging and transformative technologies in retail with Mariya Zorotovich

Mariya Zorotovich talks with Vince Menzione about emerging and disruptive technologies in retail with a focus on Artificial Intelligence.

Current state, disruptors and technology trends in financial services with Howard Bush

Howard Bush discusses the current industry climate and technology disruptions occurring in financial services.

Accelerating your AI in healthcare initiative with blueprints

David Houlding and Gururaj Pandurangi take us on a journey through the technical advances Artificial Intelligence is bringing to healthcare and help you get started.

Paul Maher expands on the Industry Experiences team at Microsoft and his journey to the cloud

Paul Maher discusses the Industry Experiences team, why it was created, and how it can help your organization.

Check out recent podcast episodes at the Microsoft Industry Experiences team podcast page.
Quelle: Azure

Extracting insights from IoT data using the cold path data flow

This blog continues our coverage of the solution guide published by Microsoft’s Industry Experiences team. The guide covers the following components:

Ingesting data
Hot path processing
Cold path processing
Analytics clients

We already covered the recommendations for processing data in an IoT application in the solution guide and suggested using a Lambda architecture for the data flow. To reiterate the data paths:

A batch layer (cold path) stores all incoming data in its raw form and performs batch processing on the data. The result of this processing is stored as a batch view. It is a slow-processing pipeline, executing complex analysis, combining data from multiple sources over a longer period (such as hours or days), and generating new information such as reports and machine learning models.
A speed layer and a serving layer (warm path) analyze data in real time. This layer is designed for low latency, at the expense of accuracy. It is a faster-processing pipeline that archives and displays incoming messages, and analyzes these records, generating short-term critical information and actions such as alarms.

This blog post covers the cold path processing components of the solution guide.

We have covered time series analysis with Azure Time Series Insights (TSI) in detail in the solution guide. TSI is an analytics, storage, and visualization service for time series data. Please read the relevant section of the guide for details on using TSI.

As you may remember from previous blog posts, we are using the sample data published by the NIST SMS Test Bed endpoint. Our previous posts ended with the data pushed to separate Azure Event Hubs for “events” and “samples” data records.

Before we begin the rest of the discussion, we would like to emphasize that the solution to an “analytics” problem depends on each plant, line, machine, and so on. The right data must be available, and it must match what the business needs. We will cover two different approaches for organizing the data, but they are not exhaustive and are meant as examples only.

Storing the raw data

Our sample implementation has a basic set of Azure Stream Analytics (ASA) queries that take the incoming data stream from the Event Hubs the raw data is posted to and copy it into Azure Storage blobs and tables. As an example, the queries look like the following:

SELECT
*
INTO
[samplesTable]
FROM
[EventHubIn]

One table is for samples and another is for events. While flattening the incoming data in the custom component, we added a property for the hour window each incoming record falls in, using the following C# code snippet; this makes it easier to organize the data in the processing pipelines:

// Truncate the sample timestamp down to the hour; this value later serves as the partition key.
HourWindow = new DateTime(
    sample.timestamp.Year,
    sample.timestamp.Month,
    sample.timestamp.Day,
    sample.timestamp.Hour,
    0,
    0),

This data record field is especially useful for organizing the records in the Azure Storage table, simply by using it as the partition key. We use the sequence number of the incoming record as the row key. The object model for the storage tables is covered in the documentation, “Understanding the Table Service Data Model.” Please also see the documentation, “Designing a Scalable Partitioning Strategy for Azure Table Storage,” for recommendations on storage table design.

The Azure Blob Storage blobs generated by the ASA job are organized into containers per hour, with a single blob holding the data for that hour in comma-separated values (CSV) format. We will be using these later for artificial intelligence (AI) needs.

Loading data into Azure SQL Database

We will be covering a basic way to incrementally load the records to an Azure SQL Database and later discuss potential ways for further processing them to create new aggregations and summary data.

Our goal is to provide a barebones approach to show how data can flow into data stores and demonstrate the technologies useful for this. Any analytics solution depends heavily on the context and requirements, but we will attempt to provide basic mechanisms to demonstrate the related Azure services.

Azure Data Factory (ADF) is a cloud integration service to compose data storage, movement, and processing services in automated data pipelines. We have a simple ADF pipeline that demonstrates the incremental loading of a table using a storage table as the source.

The pipeline has a lookup activity that performs the following query on the SQL Database:

select
CONVERT(
char(30),
case when max(SampleTimestamp) is null then '1/1/2010 12:00:00 AM'
else max(SampleTimestamp) end, 126) as LastLoad
from [Samples]

The style used in the CONVERT function, 126, formats the timestamp value as “yyyy-mm-ddThh:mi:ss.mmm,” which matches the string representation of the partition key value on the storage table. The query returns the timestamp of the last record that was transferred to the SQL database. We can then pass that value to the next activity to query the table storage for newer records.

Next is a “Copy Data” activity, which uses the value returned by the lookup activity, LastLoad, to build the following table query for the source. Please refer to “Querying Tables and Entities” for details on querying storage tables.

SampleTimestamp gt datetime'@{formatDateTime(activity('LookupSamples').output.FirstRow.LastLoad, 'yyyy-MM-ddTHH:mm:ss.fffZ')}'

Later, this activity maps the storage table columns (properties) to SQL Database table columns. This pipeline is scheduled to run every 15 minutes, thus incrementally loading the destination SQL Database table.
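For reference, a minimal destination table covering the columns used by the queries in the rest of this post might look like the following; the exact data types are assumptions and should be sized for your own data:

CREATE TABLE [Samples]
(
SampleTimestamp DATETIME2 NOT NULL
, SampleSequence BIGINT NOT NULL
, DeviceName NVARCHAR(64) NOT NULL
, ComponentName NVARCHAR(64) NOT NULL
, SampleName NVARCHAR(64) NOT NULL
, SampleValue NVARCHAR(256) NULL
);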

Processing examples

Further processing the raw data depends on the actual requirements. This section covers two potential approaches for processing and organizing the data to demonstrate the capabilities.

Let’s start by looking at the data we collect. Notice that the raw data in the samples table is in the form of name/value pairs. The first query gives us the different sample types recorded by each machine.

SELECT DeviceName, ComponentName, SampleName, COUNT(SampleSequence) AS SampleCount
FROM Samples
GROUP BY DeviceName, ComponentName, SampleName
ORDER BY DeviceName ASC, ComponentName ASC, SampleName ASC, SampleCount DESC

We observe there are eight machines, and each one is sending different sets of sample types. Following is the partial result of the preceding query. We analyzed the result a bit further in Microsoft Excel to give an idea of the relative counts of the samples:

We may conclude that the best way to aggregate and summarize the results is first to organize the results by machine — for example, a raw data table per machine.

We will go step by step to demonstrate the concepts here. Some readers will surely find more optimized ways to implement some queries, but our goal here is to provide clear examples that demonstrate the concepts.

We may wish to process the data further by first transposing the raw data, which is in name/value pairs, as follows:

We can use the following query to create a new table and transpose whole rows. This query assumes that we do not differentiate between components and treat the machine as a whole:

; WITH Machine08SamplesTransposed AS
(
SELECT * FROM
(
SELECT SampleTimestamp, sampleName, CAST(sampleValue AS NUMERIC(20,3)) AS sampleValueNumeric
FROM Samples
WHERE
DeviceName = 'Machine08' and ISNUMERIC(sampleValue) != 0
) AS S

PIVOT(
MAX(sampleValueNumeric)
FOR SampleName IN ([S2temp],
[Stemp],
[Zabs],
[Zfrt],
[S2load],
[Cfrt],
[total_time],
[Xabs],
[Xload],
[Fact],
[Cload],
[cut_time],
[Zload],
[S2rpm],
[Srpm],
[auto_time],
[Cdeg],
[Xfrt],
[S1load])
) AS PivotTable
)

SELECT * INTO Machine08Samples
FROM Machine08SamplesTransposed

We can bring this query into the ADF pipeline by moving it into a stored procedure with a parameter that restricts the query on the raw table to only the most recently loaded rows, and by changing the “SELECT * INTO …” into an “INSERT INTO … SELECT” statement, as sketched below. We recommend relying on stored procedures as much as possible to use SQL database resources efficiently.
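A minimal sketch of such a stored procedure, assuming a @LastLoad parameter and an existing Machine08Samples target table, might look like the following; the procedure name is hypothetical, and most of the sample column list is elided for brevity (fill in the elided columns so that the PIVOT list and the INSERT column lists line up):

CREATE PROCEDURE [LoadMachine08Samples]
@LastLoad DATETIME2
AS
BEGIN
; WITH Machine08SamplesTransposed AS
(
SELECT * FROM
(
SELECT SampleTimestamp, SampleName, CAST(SampleValue AS NUMERIC(20,3)) AS sampleValueNumeric
FROM Samples
WHERE
DeviceName = 'Machine08'
AND ISNUMERIC(SampleValue) != 0
AND SampleTimestamp > @LastLoad -- only rows loaded since the last run
) AS S
PIVOT(
MAX(sampleValueNumeric)
FOR SampleName IN ([S2temp], [Stemp], [Zabs] /* ...remaining sample columns... */)
) AS PivotTable
)
INSERT INTO Machine08Samples (SampleTimestamp, S2temp, Stemp, Zabs /* ...remaining sample columns... */)
SELECT SampleTimestamp, S2temp, Stemp, Zabs /* ...remaining sample columns... */
FROM Machine08SamplesTransposed;
END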

The resulting table looks like the following (some columns removed for brevity).

One way to process this interim data set is to fill in the null values of samples from the last received value, as shown below.

We should emphasize that we are not recommending this solution for every business case or every sample value. This approach makes sense for values that are meaningful together. For example, in a certain case, grouping Fact (actual path feed rate) and Zfrt (Z axis feed rate) may make sense. In another case, however, grouping Xabs (absolute position on the X axis) and Zfrt on one record this way may not. Grouping of the sample values must be decided case by case, depending on the business need.

Another way is to put the individual records into time buckets and apply an aggregate function within each group.

Let’s give a small example of achieving the first option. In the preceding example, we received V1.1 at t1 and V2.2 at t2. We want to fill in the Sample1 value for t2 with t1’s value, V1.1.

;WITH NonNullRank AS
(
SELECT SampleTimestamp, S2temp, cnt = COUNT(s2temp) OVER (ORDER BY SampleTimestamp)
FROM Machine08Samples
),

WindowsWithNoValues AS
(
SELECT SampleTimestamp, S2temp,
r = ROW_NUMBER() OVER (PARTITION BY cnt ORDER BY SampleTimestamp ASC) - 1
FROM NonNullRank
)

SELECT SampleTimestamp, S2temp,
S2tempWithValues = ISNULL(S2temp, LAG(S2temp, r) OVER (ORDER BY SampleTimestamp ASC))
FROM WindowsWithNoValues

When we dissect the preceding queries, the first common table expression (CTE), NonNullRank, gives us the rank of the non-null values of S2temp sample values among the received data records.

The second CTE, WindowsWithNoValues, gives us windows of samples with the received value at the top, and the order of null values within the windows (column r).

The concluding query fills in the null values using the LAG analytic function by bringing in the received value from the top of the window to the current row.

The second option we mentioned previously is to group the received values and apply an aggregate function within the group.

;WITH With30SecondBuckets AS
(
SELECT *,
(dateadd(second,(datediff
(second,'2010-1-1',[SampleTimestamp])/(30))*(30),'2010-1-1'))
AS [SampleTimestamp30Seconds]
FROM Machine08Samples
)

SELECT SampleTimestamp30Seconds, AVG(S2temp) AS [AvgS2temp]
FROM With30SecondBuckets GROUP BY SampleTimestamp30Seconds
ORDER BY SampleTimestamp30Seconds

We can put these queries in stored procedures to generate new aggregate and summary tables as needed by the analytics solution.
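As an illustration, the preceding bucketing query can be wrapped so that its output lands in a summary table; the table and column names here are hypothetical:

;WITH With30SecondBuckets AS
(
SELECT *,
(dateadd(second,(datediff
(second,'2010-1-1',[SampleTimestamp])/(30))*(30),'2010-1-1'))
AS [SampleTimestamp30Seconds]
FROM Machine08Samples
)

INSERT INTO Machine08Samples30s (SampleTimestamp30Seconds, AvgS2temp)
SELECT SampleTimestamp30Seconds, AVG(S2temp)
FROM With30SecondBuckets
GROUP BY SampleTimestamp30Seconds;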

We would like to repeat our opening point once more: the solution to an analytics problem depends on the available data and on what the business needs. There may not be a single solution, but Azure provides many technology options for implementing one.

Next steps

Complete the ADF tutorial for transforming the data in the cloud by using a Spark activity and an on-demand Azure HDInsight linked service for a different example.
Get the larger picture for extracting insights from IoT data from the solution guide.

Quelle: Azure

An Azure Function orchestrates a real-time, serverless, big data pipeline

Although it’s not a typical use case for Azure Functions, a single Azure function is all it took to fully implement an end-to-end, real-time, mission-critical data pipeline for a fraud detection scenario. And it was done with a serverless architecture. Two blogs recently described this use case, “Considering Azure Functions for a serverless data streaming scenario,” and “A fast, serverless, big data pipeline powered by a single Azure Function.”

Pipeline requirements

A large bank wanted to build a solution to detect fraudulent transactions. The solution was built on an architectural pattern common for big data analytic pipelines, with massive volumes of real-time data ingested into a cloud service where a series of data transformation activities provided input for a machine learning model to deliver predictions. Latency and response times are critical in a fraud detection solution, so the pipeline had to be very fast and scalable. End-to-end evaluation of each transaction had to complete and provide a fraud assessment in less than two seconds.

Requirements for the pipeline included the following:

Ability to scale and efficiently process bursts of event activity totaling 8+ million transactions daily.
Daily parsing and processing of 4 million complex JSON files.
Events and transactions had to be processed in sequential order with assurances that duplicates would not be processed.
Reference data and business rules could change dynamically and the pipeline needed to accommodate these updates.
A deployed architecture that could easily integrate with a CI/CD and DevOps process.

Pipeline solution

The pipeline starts and ends with an Azure Function. A single function orchestrates and manages the entire pipeline of activities, including the following:

Consuming, validating, and parsing massive numbers of JSON files.
Invoking a SQL stored procedure to extract data elements from JSON files, with the data used to build real-time behavioral profiles for bank accounts and customers and to generate an analytics feature set (see the sketch after this list).
Invoking a machine learning model to evaluate and score each individual transaction.
Posting the fraud score back to an on-premises API for integration to a case management solution (a separate solution that lets users examine and unblock transactions).
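The solution guide covers the actual implementation; purely as an illustration of the second item above, a stored procedure can shred an incoming JSON document with the OPENJSON function. The document shape, table, and column names below are invented for the example:

DECLARE @json NVARCHAR(MAX) = N'{"transactionId":"T-1001","accountId":"A-42","amount":250.00}';

INSERT INTO TransactionFeatures (TransactionId, AccountId, Amount)
SELECT TransactionId, AccountId, Amount
FROM OPENJSON(@json)
WITH (
TransactionId NVARCHAR(20) '$.transactionId'
, AccountId NVARCHAR(20) '$.accountId'
, Amount DECIMAL(18,2) '$.amount'
);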

Recommended next steps

If you are designing a real-time, serverless data pipeline and seek the flexibility of coding your own methods for integration with other services, or to deploy through continuous integration, consider using Azure Functions to orchestrate and manage the pipeline.

Read the “Mobile Bank Fraud Solution Guide” to learn details about the architecture and implementation. Read more about the pipeline technology decision and implementation in these two blogs, “Considering Azure Functions for a serverless data streaming scenario,” and “A fast, serverless, big data pipeline powered by a single Azure Function.” We hope you find this helpful and we welcome your feedback.
Quelle: Azure

Deploying Apache Airflow in Azure to build and run data pipelines

Apache Airflow is an open source platform used to author, schedule, and monitor workflows. Airflow overcomes some of the limitations of the cron utility by providing an extensible framework that includes operators, a programmable interface to author jobs, a scalable distributed architecture, and rich tracking and monitoring capabilities. Since it was open sourced in 2015 and later donated to the Apache Software Foundation, Airflow has seen great adoption by the community for designing and orchestrating ETL pipelines and ML workflows. In Airflow, a workflow is defined as a Directed Acyclic Graph (DAG), which ensures that tasks are executed in the right order while managing the dependencies between them.

A simplified version of the Airflow architecture is shown below. It consists of a web server that provides the UI, a relational metadata store that can be a MySQL or PostgreSQL database, a persistent volume that stores the DAG files, a scheduler, and worker processes.

The above architecture can be implemented to run in four execution modes, including:

Sequential Executor – This mode is useful for dev/test or demo purposes. It serializes operations and allows only a single task to execute at a time.
Local Executor – This mode supports parallelization and is suitable for small to medium workloads. It doesn’t support scaling out.
Celery Executor – This is the preferred mode for production deployments and is one of the ways to scale out the number of workers. For this to work, an additional Celery backend, which is a RabbitMQ or Redis broker, is required for coordination.
Dask Executor – This mode also allows scaling out by leveraging the Dask.distributed library, allowing users to run tasks in a distributed cluster.

The above architecture can be implemented in Azure VMs or by using the managed services in Azure as shown below. For production deployments, we recommend leveraging managed services with built-in high availability and elastic scaling capabilities.

Puckel's Airflow Docker image contains the latest build of Apache Airflow, with automated builds released to the public Docker Hub registry. Azure App Service for Linux is integrated with the public Docker Hub registry and allows you to run the Airflow web app on Linux containers with continuous deployment. Azure App Service also allows multi-container deployments with Docker Compose and Kubernetes configurations, which is useful for the Celery execution mode.

We have developed an Azure QuickStart template that allows you to quickly deploy an Airflow instance in Azure using Azure App Service and an instance of Azure Database for PostgreSQL as the metadata store.

The QuickStart template automatically downloads and deploys the latest Docker container image from puckel/docker-airflow and initializes the database in Azure Database for PostgreSQL server as shown in the following graphic:

The environment variables for the Airflow docker image can be set using application settings in Azure App Service as shown in the following graphic:

The environment variables used in the deployment are:

AIRFLOW__CORE__SQL_ALCHEMY_CONN – Sets the connection string for the web app to connect to Azure Database for PostgreSQL.
AIRFLOW__CORE__LOAD_EXAMPLES – Set to true to load DAG examples during deployment.

The application setting WEBSITES_ENABLE_APP_SERVICE_STORAGE is set to true so that the App Service file system can be used as persistent storage for DAG files, accessible to the scheduler and worker container images.
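As a sketch, the relevant application settings might look like the following; the server, user, password, and database names are placeholders. Note that the Azure Database for PostgreSQL user name takes the form user@servername, so the @ must be percent-encoded as %40 inside the connection URL:

AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://airflowuser%40myserver:mypassword@myserver.postgres.database.azure.com:5432/airflowdb
AIRFLOW__CORE__LOAD_EXAMPLES=True
WEBSITES_ENABLE_APP_SERVICE_STORAGE=true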

After it is deployed, you can browse the web server UI on port 8080 to see and monitor the DAG examples as shown in the following graphic:

Next steps

You are now ready to orchestrate and design data pipelines for ETL and machine learning workflows by leveraging the Airflow operators. You can also leverage Airflow for scheduling and monitoring jobs across a fleet of managed databases in Azure by defining the connections as shown below.

If you are looking for an exciting challenge, you can deploy the kube-airflow image with the Celery executor on Azure Kubernetes Service using Helm charts, Azure Database for PostgreSQL, and RabbitMQ. Let us know if you have developed it, and we would be happy to link to it from this blog.

Acknowledgements

Special thanks to Mark Bolz and Jim Toland for their contributions to the postings.
Quelle: Azure