Azure.Source – Volume 77

Preview | Generally available | News & updates | Technical content | Azure shows | Events | Customers, partners, and industries

Now in preview

Announcing the Azure Functions Premium plan for enterprise serverless workloads

We are pleased to announce the Azure Functions Premium plan in preview, our newest Functions hosting model. This plan enables a suite of long-requested scaling and connectivity options without compromising on event-based scale. With the Premium plan you can use pre-warmed instances to run your app with no delay after being idle, you can run on more powerful instances, and you can connect to VNETs, all while automatically scaling in response to load.

Windows Server 2019 support now available for Windows Containers on Azure App Service

We are happy to announce Windows Server 2019 Container support in public preview. Using a custom Windows container in App Service lets you make the OS changes your app needs, making it easy to migrate an on-premises app that requires custom OS and software configuration. Windows Container support is available in our West US, East US, West Europe, North Europe, East Asia, and East Australia regions. Windows Containers are not supported in App Service Environments at present.

Web application firewall at Azure Front Door service

We have heard from many of you that security is a top priority when moving web applications onto the cloud. Today, we are very excited to announce the public preview of the Web Application Firewall (WAF) for the Azure Front Door service. By combining our global application and content delivery network with a natively integrated WAF engine, we now offer a highly available platform that helps you deliver your web applications to the world, secure and fast!

Azure Media Services: The latest Video Indexer updates from NAB Show 2019

After sweeping up multiple awards with the general availability release of Azure Media Services’ Video Indexer, including the 2018 IABM award for innovation in content management and the prestigious Peter Wayne award, our team has remained focused on building a wealth of new features and models to allow any organization with a large archive of media content to unlock insights from that content, and to use those insights to improve searchability, enable new user scenarios and accessibility, and open new monetization opportunities. At NAB Show 2019, we are announcing a wealth of new enhancements to Video Indexer’s models and experiences.

Now generally available

Extending Azure security capabilities

As more organizations are delivering innovation faster by moving their businesses to the cloud, increased security is critically important for every industry. Azure has built-in security controls across data, applications, compute, networking, identity, threat protection, and security management so you can customize protection and integrate partner solutions. Microsoft Azure Security Center is the central hub for monitoring and protecting against related incidents within Azure. We love making Azure Security Center richer for our customers, and were excited to share some great updates last week at Hannover Messe 2019. Read on to learn about them.

Event-driven Java with Spring Cloud Stream Binder for Azure Event Hubs

Spring Cloud Stream Binder for Azure Event Hubs is now generally available. It is now easier to build highly scalable, event-driven Java apps using Spring Cloud Stream with Event Hubs, a fully managed, real-time data ingestion service on Azure that is resilient and reliable in any situation. This includes emergencies, thanks to its geo-disaster recovery and geo-replication features.

Fast and optimized connectivity and delivery solutions on Azure

We’re announcing the availability of innovative and industry-leading Azure services that will help the attendees of the National Association of Broadcasters Show realize their future vision to deliver for their audiences: Azure Front Door Service (AFD), ExpressRoute Direct and Global Reach, as well as some cool new additions to both AFD and our Content Delivery Network (CDN). April 6-11, Microsoft will be at NAB Show 2019 in Las Vegas, bringing together an industry centered on the ability to deliver richer content experiences for audiences around the world.

Azure Front Door Service is now generally available

We’re announcing the general availability of Azure Front Door Service (AFD), which we launched in preview last year – a scalable and secure entry point for fast delivery of your global applications. AFD is a one-stop solution for your global website or application. Azure Front Door Service enables you to define, manage, and monitor the global routing for your web traffic, optimizing for best performance and instant global failover for high availability. With Front Door, you can transform your global (multi-region) consumer and enterprise applications into robust, high-performance, personalized modern applications, APIs, and content that reach a global audience with Azure.

News and updates

Unlock dedicated resources and enterprise features by migrating to Service Bus Premium

Azure Service Bus has been the Messaging as a Service (MaaS) option of choice for our enterprise customers. We’ve seen tremendous growth to our customer base and usage of the existing namespaces, which inspires us to bring more features to the service. We recently expanded Azure Service Bus to support all Azure regions with Availability Zones to help our customers build more resilient solutions. We also expanded the Azure Service Bus Premium tier to more regions to enable our customers to leverage many enterprise ready features on their Azure Service Bus namespaces while also being closer to their customers.

Device template library in IoT Central

With the new addition of a device template library into our Device Templates page, we are making it easier than ever to onboard and model your devices. Now, when you get started with creating a new template, you can choose between building one from scratch or you can quickly select from a library of existing device templates. Today you’ll be able to choose from our MXChip, Raspberry Pi, or Windows 10 IoT Core templates. We will be working to improve this library by adding more device templates which provide customer value.

Azure Updates

Learn about important Azure product updates, roadmap, and announcements. Subscribe to notifications to stay informed.

Technical content

Step up your machine learning process with Azure Machine Learning service

The Azure Machine Learning service provides a cloud-based service you can use to develop, train, test, deploy, manage, and track machine learning models. With Automated Machine Learning and other advancements available, training and deploying machine learning models is easier and more approachable than ever. Automated machine learning helps users of all skill levels accelerate their pipelines, leverage open source frameworks, and scale easily, making machine learning more accessible across an organization.

Schema validation with Event Hubs

Event Hubs is a fully managed, real-time data ingestion service on Azure. It integrates seamlessly with other Azure services and allows Apache Kafka clients and applications to talk to Event Hubs without any code changes. Apache Avro is a binary serialization format. It relies on schemas (defined in JSON format) that define what fields are present and their types. Because it's a binary format, you can produce and consume Avro messages to and from Event Hubs. Event Hubs' focus is on the data pipeline; it doesn't validate the schema of the Avro events.
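
Because Event Hubs treats each event as opaque bytes, schema handling stays entirely in the client. Below is a minimal sketch of producing an Avro-encoded event, assuming the azure-eventhub and fastavro Python packages; the connection string, hub name, and record schema are placeholders.

```python
import io

from fastavro import parse_schema, schemaless_writer
from azure.eventhub import EventHubProducerClient, EventData

# Hypothetical schema: Event Hubs never sees or validates it.
schema = parse_schema({
    "type": "record",
    "name": "Reading",
    "fields": [
        {"name": "device_id", "type": "string"},
        {"name": "temperature", "type": "double"},
    ],
})

def encode(record: dict) -> bytes:
    """Serialize a record to Avro binary using the schema above."""
    buf = io.BytesIO()
    schemaless_writer(buf, schema, record)
    return buf.getvalue()

# Placeholder connection details.
producer = EventHubProducerClient.from_connection_string(
    "<EVENT_HUBS_CONNECTION_STRING>", eventhub_name="telemetry")

with producer:
    batch = producer.create_batch()
    batch.add(EventData(encode({"device_id": "dev-01", "temperature": 21.5})))
    producer.send_batch(batch)
```

Consumers must apply the same schema when decoding; any validation (for example, rejecting events whose payload fails to deserialize) has to happen in the producing or consuming application, not in Event Hubs itself.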

SheHacksPurple: Changes to Azure Security Center Subscription

In this short video, Tanya Janca describes recent changes to Azure Security Center subscription coverage; it now covers storage containers and App Service.

PowerShell Basics: Finding the right VM size with Get-AzVMSize

Finding the right virtual machine for your needs can be difficult, especially with all of the options available. New options appear often, so you may need to regularly check the VMs available within your Azure region. Using PowerShell makes it quick and easy to see all of the VM sizes so you can get to building your infrastructure, and Get-AzVMSize will help you determine the VM sizes you can deploy in specific regions or into availability sets, or what size a machine in your environment is running.
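
The post itself uses PowerShell; if you work from the Azure SDK instead, roughly the same lookup can be done from Python. This is a sketch assuming the azure-identity and azure-mgmt-compute packages and a placeholder subscription ID.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder subscription ID.
compute = ComputeManagementClient(DefaultAzureCredential(), "<SUBSCRIPTION_ID>")

# Roughly equivalent to Get-AzVMSize -Location "eastus":
# list every VM size the region offers.
for size in compute.virtual_machine_sizes.list(location="eastus"):
    print(size.name, size.number_of_cores, size.memory_in_mb)
```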

Hands-on Lab: Creating an IoT Solution with Kotlin Azure Functions

Dave Glover walks through building an end-to-end IoT solution with Azure IoT Hub, Kotlin-based Azure Functions, and Azure SignalR.

An Ambivert’s Guide to Azure Functions

Chloe Condon will walk you through how to use Azure Functions, Twilio, and a Flic Button to create an app to trigger calls/texts to your phone.

Making Machine Learning Approachable

Often we hear about machine learning and deep learning as topics that only researchers, mathematicians, or PhDs can be smart enough to grasp. It is possible to explain seemingly complex fundamental concepts and algorithms of machine learning without using cryptic terminology or confusing notation.

Azure shows

Episode 273 – Application Patterns in Azure | The Azure Podcast

Rasmus Lystrøm, a Senior Microsoft consultant from Denmark, shares his thoughts and ideas around building applications that take advantage of Azure and allow developers to focus on the business problem at hand.

Azure Blob Storage on Azure IoT Edge | Internet of Things Show

Azure Blob Storage on IoT Edge is a lightweight, Azure-consistent module that provides local block blob storage. It comes with configurable abilities to automatically tier data from the IoT Edge device to Azure and to automatically delete data from the IoT Edge device after a specified time.

Azure Pipelines | Visual Studio Toolbox

In this episode, Robert is joined by Mickey Gousset, who takes us on a tour of Azure Pipelines. He shows how straightforward it is to automate your builds and deployments using Azure Pipelines. They are a great way to get started on your path to using DevOps practices to ship faster at higher quality.

Deploy WordPress with Azure Database for MariaDB | Azure Friday

Learn how to deploy WordPress backed by Azure Database for MariaDB. It is the latest addition to the open source database services available on the Azure platform and further strengthens Azure's commitment to open source and its communities. The service offers built-in high availability, automatic backups, and scaling of resources to meet your workload's needs.

Hybrid enterprise serverless in Microsoft Azure | Microsoft Mechanics

Apply serverless compute securely and confidently to any workload with new enterprise capabilities. Jeff Hollan, Sr. Program Manager from the Azure Serverless team, demonstrates how you can turn on managed service identities and protect secrets with Key Vault integration, control virtual network connectivity for both Functions and Logic Apps, build apps that integrate with systems inside your virtual network using event-driven capabilities and set cost thresholds to control how much you want to scale with the Azure Functions Premium plan.

Virtual node autoscaling and Azure Dev Spaces in Azure Kubernetes Service (AKS) | Microsoft Mechanics

Recent updates to Azure Kubernetes Service (AKS) for developers and ops. Join Ria Bhatia, Program Manager for Azure Kubernetes Service, as she shows you the new autoscaling options using virtual nodes, as well as how you can use Azure Dev Spaces to test your AKS apps without simulating dependencies. Also, check out the new ways to troubleshoot and monitor your Kubernetes apps with Azure Monitor.

How to host a static website with Azure Storage | Azure Tips and Tricks

In this edition of Azure Tips and Tricks, learn how you can host a static website running in Azure Storage in a few steps.

How to use the Azure Activity Log | Azure Portal Series

The Azure Activity Log informs you of the who, the what, and the when for operations on your Azure resources. In this video of the Azure Portal “How To” series, learn what activity logs are in the Azure Portal, how to access them, and how to make use of them.

Ted Neward on the ‘Ops’ Side of DevOps | Azure DevOps Podcast

Ted Neward and Jeffrey Palermo are going to be talking about the ‘Ops’ (AKA the operations) side of DevOps. They discuss how operations is implemented in the DevOps movement, the role of operations, how Dev and Ops should work together, what companies should generally understand around the different roles, where the industry is headed, and Ted’s many recommendations in the world of DevOps.

Episode 5 – CodeCamping with Philly.NET founder Bill Wolff | AzureABILITY

Philly.NET founder and coding-legend Bill Wolff visits the podcast to talk about both the forthcoming Philly Code Camp 2019.1 and the user-group experience in general.

Events

Welcome to NAB Show 2019 from Microsoft Azure!

At NAB Show 2019 this week in Las Vegas we’re announcing new Azure rendering, Azure Media Services, Video Indexer and Azure Networking capabilities to help you achieve more. We’ll also showcase how partners such as Zone TV and Nexx.TV are using Microsoft AI and Azure Cognitive Services to create more personalized content and improve monetization of existing media assets.

Deliver New Services | Hannover Messe 2019

With intelligent manufacturing technology, you can deliver new services, innovate faster to reduce time to market, and increase your margins. At the Hannover Messe 2019 event, discover how Microsoft and partners are empowering companies to create new business value with digital services to develop data-driven and AI-enhanced products and services.

Database administrators, discover gold in the cloud

Data is referred to these days as “the new oil” or “black gold” of industry. If the typical Fortune 100 company gains access to a mere 10 percent more of their data, that can result in increased revenue of millions of dollars. Recently, one of our teams discovered new technology that enables us to do more with less—like agile development helping us deploy new features and software faster to market, and DevOps ensuring it was done with less impact to mission-critical systems. To learn more, attend a free webinar where we’ll be sharing more on the many advantages of managing data in the cloud, and how your company’s “black gold” will make you tomorrow’s data hero.

Customers, partners, and industries

IoT in Action: Enabling cloud transformation across industries

The intelligent cloud and intelligent edge are sparking massive transformation across industries. As computing gets more deeply embedded in the real world, powerful new opportunities arise to transform revenue, productivity, safety, customer experiences, and more. According to a white paper by Keystone Strategy, digital transformation leaders generate eight percent more per year in operating income than other enterprises. Here we lay out a typical cloud transformation journey and provide examples of how the cloud is transforming city government, industrial IoT, and oil and gas innovators.

Enabling precision medicine with integrated genomic and clinical data

Kanteron Systems Platform is a patient-centric, workflow-aware precision medicine solution. Their answer to data sitting in silos, detached from the point of care, is to integrate many key types of healthcare data—including medical imaging, digital pathology, clinical genomics, and pharmacogenomic data—into a complete longitudinal patient record to power precision medicine.

Spinnaker continuous delivery platform now with support for Azure

Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. It is being chosen by a growing number of enterprises as the open source continuous deployment platform used to modernize their application deployments. With this blog post and the recent release of Spinnaker (1.13), we are excited to announce that Microsoft has worked with the core Spinnaker team to ensure Azure deployments are integrated into Spinnaker.

 

Azure Stack HCI solutions, Premium Block Blob Storage and new capabilities in the Azure AI space! | Azure This Week – A Cloud Guru

This time on Azure This Week, Lars discusses Microsoft’s hybrid cloud strategy which gets another push with hyper-converged infrastructure, Azure Premium Block Blob Storage is now generally available, and AI developers get more goodies on the Azure platform.

Be sure to check out the new series from A Cloud Guru, Azure Fireside Chats.

How Skype modernized its backend infrastructure using Azure Cosmos DB – Part 3

This is a three-part blog post series about how organizations are using Azure Cosmos DB to meet real world needs, and the difference it’s making to them. In part 1, we explored the challenges Skype faced that led them to take action. In part 2, we examined how Skype implemented Azure Cosmos DB to modernize its backend infrastructure. In this post (part 3 of 3), we cover the outcomes resulting from those efforts.

Note: Comments in italics/parentheses are the author's.

The outcomes

Improved throughput, latency, scalability, and more

Using Azure Cosmos DB, Skype replaced three monolithic, geographically isolated data stores with a single, globally distributed user data service that delivers better throughput, lower latencies, and improved availability. The new PCS service can elastically scale on demand to handle future growth, and gives the Skype team ownership of its data without the burden of maintaining its own infrastructure—all at less than half what it cost to maintain the old PCS system. Development of the solution was fast and straightforward thanks to the extensive functionality provided by Azure Cosmos DB and the fact that it’s a fully-hosted service.

Better throughput and lower latencies

Compared to the old solution, the new PCS service is delivering improved throughput and lower latency—in turn enabling the Skype team to easily meet all its SLAs. “Easy geographic distribution, as enabled by Azure Cosmos DB, was a key enabler in making all this possible,” says Kaduk. “For example, by enabling us to put data closer to where its users are, in Europe, we’ve been able to significantly reduce the time required for the permission service that’s used to set up a call—and meet our overall one-second SLA for that task.”

Higher availability

The new PCS service is supporting its workload without timeouts, deadlocks, or quality-of-service degradation—meaning that users are no longer inconvenienced with bad data or having to wait. And because the service runs on Azure Cosmos DB, the Skype team no longer needs to worry about the availability of the underlying infrastructure upon which its new PCS service runs. 

“Azure Cosmos DB provides a 99.999 percent read availability SLA for all multiregion accounts, with built-in protection against the unlikely event of a regional outage,” says Kaduk. “We can prioritize failover order for our multiregion accounts and can even manually trigger failover to test the end-to-end availability of our app—all with guaranteed zero data-loss.”

Elastic scalability

With Azure Cosmos DB, the Skype team can independently and elastically scale storage and throughput at any time, across the globe. All physical partition management required to scale is fully managed by Azure Cosmos DB and is transparent to the Skype team. Azure Cosmos DB handles the distribution of data across physical and logical partitions and the routing of query requests to the right partition—all without compromising availability, consistency, latency, or throughput. All this enables the team to pay for only the storage and throughput it needs today, and to avoid having to invest any time, energy, or money in spare capacity before it’s needed.

“The ability of Azure Cosmos DB to scale is obvious,” says Kaduk. “We planned for 100 terabytes of data 18 months ago and are already at 140 terabytes, with no major issues handling that growth.”

Full ownership of data – with zero maintenance and administration

Because Azure Cosmos DB is a fully managed Microsoft Azure service, the Skype team doesn’t need to worry about day-to-day administration, deploying and configuring software, or dealing with upgrades. Every database is automatically backed up, protected against regional failures, and encrypted, so the team doesn’t need to worry about those things either—leaving it with more time to focus on delivering new customer value.

“One of the great things about our new PCS service is that we fully own the data store, whereas we didn’t before,” says Kaduk. “In the past, when Skype was first acquired by Microsoft, we had a team that maintained our databases. We didn’t want to continue maintaining them, so we handed them off to a central team. Today, that same user data is back under our full control and we’re still not burdened with day-to-day maintenance—it’s really the best of both worlds.”

Lower costs

Although Kaduk’s team wasn’t paying to maintain the old PCS databases, he knows what that used to cost—and says that the monthly bill for the new solution running on Azure Cosmos DB is much lower. “Our new PCS data store is about 40 percent less expensive than the old one was,” he states. “We pay that cost ourselves today, but, given all the benefits, it’s well worth it.”

Rapid, straightforward implementation

All in all, Kaduk feels the migration to Azure Cosmos DB was “pretty simple and straightforward.” Development began in May 2017, and by October 2017, all development was complete and the team began migrating all 4 billion Skype users to the new solution. The team consisted of eight developers, one program manager, and one manager.

“We had no prior experience with Azure Cosmos DB, but it was pretty easy to come up to speed,” he states. “Even with a few lessons learned, we did it all in six months, which is pretty impressive for a project of this scale. One reason for our rapid success was that we didn’t have to worry about deploying any physical infrastructure. Azure Cosmos DB also gave us a schema-free document database with both SQL syntax and change feed streaming capabilities built-in, all under strict SLAs. This greatly simplified our architecture and enabled us to meet all our requirements in a minimum amount of time.”

Lessons learned

Looking back at the project, Kaduk recalls several “lessons learned.” These include:

Use direct mode for better performance – How a client connects to Azure Cosmos DB has important performance implications, especially with respect to observed client-side latency. The team began by using the default Gateway Mode connection policy, but switched to a Direct Mode connection policy because it delivers better performance.
Learn how to write and handle stored procedures – With Azure Cosmos DB, transactions can only be implemented using stored procedures—pieces of application logic, written in JavaScript, that are registered and executed against a collection as a single transaction. (In Azure Cosmos DB, JavaScript is hosted in the same memory space as the database. Hence, requests made within stored procedures execute in the same scope of a database session, which enables Azure Cosmos DB to guarantee ACID for all operations that are part of a single stored procedure.) A brief sketch of registering and executing a stored procedure follows this list.
Pay attention to query design – With Azure Cosmos DB, queries have a large impact in terms of RU consumption. Developers didn’t pay much attention to query design at first, but soon found that RU costs were higher than desired. This led to an increased focus on optimizing query design, such as using point document reads wherever possible and optimizing the query selections per API.
Use the Azure Cosmos DB SDK 2.x to optimize connection usage – Within Azure Cosmos DB, the data stored in each region is distributed across tens of thousands of physical partitions. To serve reads and writes, the Azure Cosmos DB client SDK must establish a connection with the physical node hosting the partition. The team started by using the Azure Cosmos DB SDK 1.x, but found that its lack of support for connection multiplexing led to excessive connection establishment and closing rates. Switching to the Azure Cosmos DB SDK 2.x, which supports connection multiplexing, helped solve the problem —and also helped mitigate SNAT port exhaustion issues.
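
As a rough illustration of the stored-procedure lesson above, registering and invoking a transactional stored procedure from the azure-cosmos Python SDK might look like the sketch below; the account URI, key, database, container, and procedure names are all placeholders rather than Skype's actual code.

```python
from azure.cosmos import CosmosClient

client = CosmosClient("<ACCOUNT_URI>", credential="<ACCOUNT_KEY>")
container = client.get_database_client("pcs").get_container_client("people")

# The JavaScript body runs server-side as a single ACID transaction,
# scoped to one logical partition.
sproc_body = """
function upsertContact(contact) {
    var ctx = getContext();
    var coll = ctx.getCollection();
    var accepted = coll.upsertDocument(coll.getSelfLink(), contact,
        function (err, doc) {
            if (err) throw err;
            ctx.getResponse().setBody(doc.id);
        });
    if (!accepted) throw new Error("Request not accepted, aborting transaction.");
}
"""

# Register the procedure once against the container.
container.scripts.create_stored_procedure(
    body={"id": "upsertContact", "body": sproc_body})

# Execute it; all operations inside stay within this partition key.
container.scripts.execute_stored_procedure(
    sproc="upsertContact",
    partition_key="user-123",
    params=[{"id": "contact-1", "userId": "user-123", "name": "Ada"}])
```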

Diagrams in the original post compare connection status and time_waits when using SDK 1.x with the same view after the move to SDK 2.x.

How Skype modernized its backend infrastructure using Azure Cosmos DB – Part 1

This is a three-part blog post series about how organizations are using Azure Cosmos DB to meet real world needs, and the difference it’s making to them. In this post (part 1 of 3), we explore the challenges Skype faced that led them to take action. In part 2, we’ll examine how Skype implemented Azure Cosmos DB to modernize its backend infrastructure. In part 3, we’ll cover the outcomes resulting from those efforts.

Note: Comments in italics/parentheses are the author's.

Scaling to four billion users isn’t easy

Founded in 2003, Skype has grown to become one of the world’s premier communication services, making it simple to share experiences with others wherever they are. Since its acquisition by Microsoft in 2011, Skype has grown to more than four billion total users, more than 300 million monthly active users, and more than 40 million concurrent users.

People Core Service (PCS), one of the core internal Skype services, is where contacts, groups, and relationships are stored for each Skype user. The service is called when the Skype client launches, is checked for permissions when initiating a conversation, and is updated as the user’s contacts, groups, and relationships are added or otherwise changed. PCS is also used by other, external systems, such as Microsoft Graph, Cortana, bot provisioning, and other third-party services.

Prior to 2017, PCS ran in three datacenters in the United States, with data for one-third of the service’s 4 billion users represented in each datacenter. Each location had a large, monolithic SQL Server relational database. Having been in place for several years, those databases were beginning to show their age. Specific problems and pains included:

Maintainability: The databases had a huge, complex, tightly coupled code base, with long stored procedures that were difficult to modify and debug. There were many interdependencies, as the database was owned by a separate team and contained data for more than just Skype, its largest user. And with user data split across three such systems in three different locations, Skype needed to maintain its own routing logic based on which user’s data it needed to retrieve or update.
Excessive latency: With all PCS data being served from the United States, Skype clients in other geographies, and the local infrastructure that supported them (such as call controllers), experienced unacceptable latency when querying or updating PCS data. For example, Skype has an internal service level agreement (SLA) of less than one second when setting up a call. However, the round-trip times for the permission check performed by a local call controller in Europe, which reads data from PCS to ensure that user A has permission to call user B, made it impossible to set up a call between two users in Europe within the required one-second period.
Reliability and data quality: Database deadlocks were a problem—and were exacerbated because data used by PCS was shared with other systems. Data quality was also an issue, with users complaining about missing contacts, incorrect data for contacts, and so on.

All of these problems became worse as usage grew, to the point that, by 2017, the pain had become unacceptable. Deadlocks were becoming more and more common as database traffic increased, which resulted in service outages, and weekly backups were leaving some data unavailable. “We did the best with what we had, coming up with lots of workarounds to deal with all the deadlocks, such as extra code to throttle database requests,” recalls Frantisek Kaduk, Principal .NET Developer on the Skype team. “As the problems continued to get worse, we realized we had to do something different.”

In addition, the team faced a deadline related to General Data Protection Regulation (GDPR); the system didn’t meet GDPR requirements, so there was a deadline for shutting down the servers.

The team decided that, to deliver an uncompromised user experience, it needed its own data store. Requirements included high throughput, low latency, and high availability—all of which had to be met regardless of where in the world users were.

An event-driven architecture was a natural fit; however, it would need to be more than just a basic implementation that stored current data. “We needed a better audit trail, which meant also storing all the events leading up to a state change,” explains Kaduk. “For example, to handle misbehaving clients, we need to be able to replay that series of events. Similarly, we need event history to handle cross-service/cross-shard transactions and other post-processing tasks. The events capture the originator of a state change, the intention of that change, and the result of it.”

Continue on to part 2, which examines how Skype implemented Azure Cosmos DB to modernize its backend infrastructure.

How Skype modernized its backend infrastructure using Azure Cosmos DB – Part 2

This is a three-part blog post series about how organizations are using Azure Cosmos DB to meet real world needs, and the difference it’s making to them. In part 1, we explored the challenges Skype faced that led them to take action. In this post (part 2 of 3), we examine how Skype implemented Azure Cosmos DB to modernize its backend infrastructure. In part 3, we’ll cover the outcomes resulting from those efforts.

Note: Comments in italics/parentheses are the author's.

The solution

Putting data closer to users

Skype found the perfect fit in Azure Cosmos DB, the globally distributed NoSQL database service from Microsoft. It gave Skype everything needed for its new People Core Service (PCS), including turnkey global distribution and elastic scaling of throughput and storage, making it an ideal foundation for distributed apps like Skype that require extremely low latency at global scale.

Initial design decisions

Prototyping began in May 2017. Some early choices made by the team included the following:

Geo-replication: The team started by deploying Azure Cosmos DB in one Azure region, then used its pushbutton geo-replication to replicate it to a total of seven Azure regions: three in North America, two in Europe, and two in the Asia Pacific (APAC) region. However, it later turned out that a single presence in each of those three geographies was enough to meet all SLAs.
Consistency level: In setting up geo-replication, the team chose session consistency from among the five consistency levels supported by Azure Cosmos DB. (Session consistency is often ideal for scenarios where a device or user session is involved because it guarantees monotonic reads, monotonic writes, and read-your-own-writes.)
Partitioning: Skype chose UserID as the partition key, thereby ensuring that all data for each user would reside on the same physical partition. A physical partition size of 20GB was used instead of the default 10GB size because the larger number enabled more efficient allocation and usage of request units per second (RU/s)—a measure of pre-allocated, guaranteed database throughput. (With Azure Cosmos DB, each collection must have a partition key, which acts as a logical partition for the data and provides Azure Cosmos DB with a natural boundary for transparently distributing it internally, across physical partitions.)

Event-driven architecture based on Azure Cosmos DB change feed

In building the new PCS service, Skype developers implemented a micro-services, event-driven architecture based on change feed support in Azure Cosmos DB. Change feed works by “listening” to an Azure Cosmos DB container for any changes and outputting a sorted list of documents that were changed, in the order in which they were modified. The changes are persisted, can be processed asynchronously and incrementally, and the output can be distributed across one or more consumers for parallel processing. (Change Feed in Azure Cosmos DB is enabled by default for all accounts, and it does not incur any additional costs. You can use provisioned RU/s to read from the feed, just like any other operation in Azure Cosmos DB.)
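
For orientation, reading the change feed directly from the azure-cosmos Python SDK can be as small as the sketch below. This is not Skype's implementation (they used the change feed processor library described later in this post); the account details, container names, and consumer are placeholders.

```python
from azure.cosmos import CosmosClient

client = CosmosClient("<ACCOUNT_URI>", credential="<ACCOUNT_KEY>")
container = client.get_database_client("pcs").get_container_client("people")

def handle_event(doc):
    # Placeholder consumer: publish to notifications, graph search, etc.
    print(doc["id"])

# Pull every change recorded so far, in modification order,
# and hand each changed document to the downstream consumer.
for doc in container.query_items_change_feed(is_start_from_beginning=True):
    handle_event(doc)
```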

“Generally, an event-driven architecture uses Kafka, Event Hub, or some other event source,” explains Kaduk. “But with Azure Cosmos DB, change feed provided a built-in event source that simplified our overall architecture.”

To meet the solution’s audit history requirements, developers implemented an event sourcing with capture state pattern. Instead of storing just the current state of the data in a domain, this pattern uses an append-only store to record the full series of actions taken on the data (the “event sourcing” part of the pattern), along with the mutated state (i.e. the “capture state”). The append-only store acts as the system of record and can be used to materialize domain objects. It also provides consistency for transactional data, and maintains full audit trails and history that can enable compensating actions.

Separate read and write paths and data models for optimal performance

Developers used the Command and Query Responsibility Segregation (CQRS) pattern together with the event sourcing pattern to implement separate write and read paths, interfaces, and data models, each tailored to their relevant tasks. “When CQRS is used with the Event Sourcing pattern, the store of events is the write model, and is the official source of information capturing what has happened or changed, what was the intention, and who was the originator,” explains Kaduk. “All of this is stored in one JSON document for each changed domain aggregate—user, person, and group. The read model provides materialized views that are optimized for querying and are stored in separate, smaller JSON documents. This is all enabled by the Azure Cosmos DB document format and the ability to store different types of documents with different data structures within a single collection.” Find more information on using Event Sourcing together with CQRS.

Custom change feed processing

Instead of using Azure Functions to handle change feed processing, the development team chose to implement its own change feed processing using the Azure Cosmos DB change feed processor library—the same code used internally by Azure Functions. This gave developers more granular control over change feed processing, including the ability to implement retrying over queues, dead-letter event support, and deeper monitoring. The custom change feed processors run on Azure Virtual Machines (VMs) under the “PaaS v1” model.

“Using the change feed processor library gave us superior control in ensuring all SLAs were met,” explains Kaduk. “For example, with Azure Functions, a function can either fail or spin-and-wait while it retries. We can’t afford to spin-and-wait, so we used the change feed processor library to implement a queue that retries periodically and, if still unsuccessful after a day or two, sends the request to a ‘dead letter collection’ for review. We also implemented extensive monitoring—such as how fast requests are processed, which nodes are processing them, and estimated work remaining for each partition.” (See Frantisek’s blog article for a deeper dive into how all this works.)

Cross-partition transactions and integration with other services

Change feed also provided a foundation for implementing background post-processing, such as cross-partition transactions that span the data of more than one user. The case of John blocking Sally from sending him messages is a good example. The system accepts the command from user John to block user Sally, upon which the request is validated and dispatched to the appropriate handler, which stores the event history and updates the queryable data for user John. A postprocessor responsible for cross-partition transactions monitors the change feed, copying the information that John blocked Sally into the data for Sally (which likely resides in a different partition) as a reverse block. This information is used for determining the relationship between peers. (More information on this pattern can be found in the article, “Life beyond Distributed Transactions: an Apostate’s Opinion.”)

Similarly, developers used change feed to support integration with other services, such as notification, graph search, and chat. The event is received in the background by all running change feed processors, one of which is responsible for publishing a notification to external event consumers, such as Azure Event Hubs, using a public schema.

Migration of user data

To facilitate the migration of user data from SQL Server to Azure Cosmos DB, developers wrote a service that iterated over all the user data in the old PCS service to:

Query the data in SQL Server and transform it into the new data models for Azure Cosmos DB.
Insert the data into Azure Cosmos DB and mark the user’s address book as mastered in the new database.
Update a lookup table for the migration status of each user.

To make the entire process seamless to users, developers also implemented a proxy service that checked the migration status in the lookup table for a user and routed requests to the appropriate data store, old or new. After all users were migrated, the old PCS service, the lookup table, and the temporary proxy service were removed from production.
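
A highly simplified sketch of that migrate-and-mark flow, assuming pyodbc for the old SQL Server store and the azure-cosmos Python SDK; the connection strings, table, containers, and query are invented for illustration and are not the actual Skype schema.

```python
import pyodbc
from azure.cosmos import CosmosClient

sql = pyodbc.connect("<SQL_SERVER_CONNECTION_STRING>")
cosmos = CosmosClient("<ACCOUNT_URI>", credential="<ACCOUNT_KEY>")
db = cosmos.get_database_client("pcs")
people = db.get_container_client("people")
migration = db.get_container_client("migration-status")   # the lookup table

cursor = sql.cursor()
cursor.execute("SELECT UserId, ContactId, DisplayName FROM Contacts ORDER BY UserId")

for user_id, contact_id, display_name in cursor:
    # 1. Transform the relational row into the new document model.
    doc = {"id": f"{user_id}:{contact_id}",
           "userId": str(user_id),
           "displayName": display_name}
    # 2. Insert into Azure Cosmos DB (upsert keeps the job safely re-runnable).
    people.upsert_item(doc)
    # 3. Record that this user's data is now mastered in the new store,
    #    so the proxy service routes future requests to Azure Cosmos DB.
    migration.upsert_item({"id": str(user_id), "migrated": True})
```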

Migration for production users began in October 2017 and took approximately two months. Today, all requests are processed by Azure Cosmos DB, which contains more than 140 terabytes of data in each of the replicated regions. The new PCS service processes up to 15,000 reads and 6,000 writes per second, consuming between 2.5 million and 3 million RUs per second across all replicas. A process monitors RU usage, automatically scaling allocated RUs up or down as needed.

Continue on to part 3, which covers the outcomes resulting from Skype’s implementation of Azure Cosmos DB.

Azure Stack IaaS – part seven

It takes a team

Most apps get delivered by a team. When your team delivers the app through virtual machines (VMs), it is important to coordinate efforts. Born in the cloud to serve teams from all over the world, Azure and Azure Stack have some handy capabilities to help you coordinate VM operations across your team.

Identity and single sign-on

The easiest identity to remember is the one you use every day to sign in to your corporate network and check your email. If you are using Azure Active Directory, or your own Active Directory, your login to Azure Stack will be the same. Your admin sets this up when Azure Stack is deployed, so you don’t have to learn and remember different credentials.

Learn more about integrating Azure Stack with Azure Active Directory and Active Directory Federation Services (ADFS).

Role-based access control

In the virtualization days, my team typically coordinated operations through credentials to VMs and the management tools. Azure Resource Manager includes a robust role-based access control (RBAC) system that not only allows you to identify who can access the system, but also allows you to assign people to roles and set a scope of control that defines what they are allowed to do, and to which resources.

More than just people in my organization

When you work in the cloud, you may need to collaborate with people from other organizations. As more and more things become automated, you might have to give a process, not a person, access to a resource. Azure and Azure Stack have you covered. The image below shows a VM where I have given access both to three applications (service principals) and a user from an external domain (foreign principal). 

Service principal

When an application needs access to deploy or configure VMs, or other resources in your Azure Stack, you can create a service principal, which is a credential for the application. You can then delegate only the necessary permissions to that service principal.

As an example, you may have a configuration management tool that inventories VMs in your subscription. In this scenario, you can create a service principal, grant the reader role to that service principal, and limit the configuration management tool to read-only access.
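
A sketch of what that looks like from the tool's side, assuming the azure-identity and azure-mgmt-compute Python packages and placeholder credentials for a service principal that has been granted only the Reader role:

```python
from azure.identity import ClientSecretCredential
from azure.mgmt.compute import ComputeManagementClient

# Service principal credentials (placeholders); the principal only needs Reader.
credential = ClientSecretCredential(
    tenant_id="<TENANT_ID>",
    client_id="<APP_ID>",
    client_secret="<CLIENT_SECRET>")

compute = ComputeManagementClient(credential, "<SUBSCRIPTION_ID>")

# Read-only inventory: the Reader role permits listing but not modifying VMs.
for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location)
```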

Learn more about service principals in Azure Stack.

Foreign principal

A foreign principal is the identity of a person that is managed by another authority. For example, the team at Contoso.com might need to allow access to a VM for a contractor or a partner from Fabrikam.com. In the virtualization days we would create a user account in our domain for that user, but that was a management headache. With Azure and Azure Stack you can allow users that sign in with their corporate credentials to access your VMs.

Learn how to enable multi-tenancy in Azure Stack.

Activity logs

When your VM runs around the clock, you will have team members working at all hours of the day. Fortunately, Azure and Azure Stack include an activity log that allows you to track all changes that have been made to the VM and who initiated each action.
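
As an illustration, the same activity log can be queried programmatically. This sketch assumes the azure-mgmt-monitor Python package; the subscription ID and resource group in the filter are placeholders.

```python
from datetime import datetime, timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

monitor = MonitorManagementClient(DefaultAzureCredential(), "<SUBSCRIPTION_ID>")

# Who did what to the VMs in this resource group over the last day.
start = (datetime.utcnow() - timedelta(days=1)).isoformat() + "Z"
odata_filter = (f"eventTimestamp ge '{start}' "
                "and resourceGroupName eq 'my-vm-rg'")

for event in monitor.activity_logs.list(filter=odata_filter):
    print(event.event_timestamp, event.caller, event.operation_name.value)
```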

Learn more about Azure Activity Logs.

Locks

Sometimes people make errors, like deleting a production VM by mistake. A nice feature you will find in Azure and Azure Stack is the “lock.” A lock can be used to prevent any change to, or deletion of, a VM or any other resource. When a change or deletion is attempted, the user gets an error message until the lock is manually removed.
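
A sketch of applying such a lock with the Python management SDK, assuming the azure-mgmt-resource package; the resource group and lock name are placeholders (a lock on an individual VM works the same way, just at a narrower scope):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ManagementLockClient

locks = ManagementLockClient(DefaultAzureCredential(), "<SUBSCRIPTION_ID>")

# CanNotDelete: resources in the group can still be changed but not deleted
# until someone explicitly removes the lock.
locks.management_locks.create_or_update_at_resource_group_level(
    resource_group_name="prod-vm-rg",
    lock_name="no-delete",
    parameters={"level": "CanNotDelete",
                "notes": "Production VMs, do not delete."})
```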

Learn more about locking VMs and other Azure resources.

Tags

The best place to store additional data about your VM is in the tool you manage the VM from. Azure and Azure Stack give you the ability to add additional information about your VM through the Tags feature. You can use tags to help your team keep track of the deployment environment, support contacts, cost center, or anything else important. You can even search for these tags in the portal to find the right resources quickly.
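
For example, tagging a VM from the Python SDK might look like the following sketch, assuming the azure-mgmt-compute package; the resource group, VM name, and tag values are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<SUBSCRIPTION_ID>")

# Patch only the tags; everything else on the VM is left untouched.
compute.virtual_machines.begin_update(
    "prod-vm-rg", "web-vm-01",
    {"tags": {"environment": "production",
              "costCenter": "CC-1234",
              "supportContact": "ops@contoso.com"}}
).result()
```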

Learn more about tagging VMs and other Azure resources.

Work as a team, not individuals

The team features in Azure and Azure Stack allow your team to elevate its game and deliver the best virtual machine operations. Managing an Infrastructure-as-a-Service (IaaS) VM is more than stop, start, and log in. The Azure platform powering Azure Stack IaaS allows you to organize, delegate, and track your team’s operations so you can deliver a better experience to your users.

In this blog series

We hope you come back to read future posts in this blog series. Here are some of our past and upcoming topics:

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform
Start with what you already have
Protect your stuff
Pay for what you use
Fundamentals of IaaS
Do it yourself
If you do it often, automate it
Build on the success of others
Journey to PaaS

Azure AI does that?

Five examples of how Azure AI is driving innovation

Whether you’re just starting off in tech, building, managing, or deploying apps, gathering and analyzing data, or solving global issues —anyone can benefit from using cloud technology. Below we’ve gathered five cool examples of innovative artificial intelligence (AI) to showcase how you can be a catalyst for real change.

Facial recognition

You know that old box of photos you have sitting in the attic collecting cobwebs; the one with those beautifully embarrassing childhood photos half-covered by a misplaced thumb? How grateful would your family be if you could bring those back to life digitally, at the tip of your fingers? Manually scanning and downloading photos to all your devices would be a huge pain. And if those photos don’t have dates or the names of the people in them written on the back — forget it! But with AI algorithms, cognitive services, and facial recognition processes, organizing these photos by groups is super simple.

By utilizing Azure’s Face API, facial recognition algorithms can quickly and accurately detect, verify, identify, and analyze faces. They can provide facial matching, facial attributes, and characteristic analysis in order to organize people and facial definitions into groups of similar faces.
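
A rough sketch of that detect-then-group flow with the Face Python SDK (azure-cognitiveservices-vision-face); the endpoint, key, and photo URLs are placeholders.

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

face_client = FaceClient("<FACE_ENDPOINT>",
                         CognitiveServicesCredentials("<FACE_KEY>"))

# Placeholder URLs for the scanned family photos.
photo_urls = ["https://example.com/photos/1.jpg",
              "https://example.com/photos/2.jpg"]

# Detect faces in each photo and collect their transient face IDs.
face_ids = []
for url in photo_urls:
    for face in face_client.face.detect_with_url(url=url):
        face_ids.append(face.face_id)

# Group visually similar faces, e.g. to cluster photos by family member.
result = face_client.face.group(face_ids=face_ids)
for i, group in enumerate(result.groups):
    print(f"Person-like group {i}: {len(group)} faces")
```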

Handwriting analysis

Already spent hours manually sorting through those old photos? Not to worry, another helpful tool in the Computer Vision API is the ability to take the papers and handwritten notes you’ve compiled throughout your last project and create a cohesive document. No longer will you need to decipher those scribbles from your teammates and scratch your head whether that obscure symbol is a four or a “u.”

With Computer Vision API’s Recognizing Handwritten Text interface, you can conveniently take photos of handwritten notes, forms, whiteboards, sticky notes, that napkin you found, and anything in between. Rather than manually transcribing them, you can turn these documents into digital notes that are easy to comb through with a simple search. The interface can detect, extract, and digitally reproduce any type of handwriting—even Medieval Klingon! Imagine all the time and paper you will save!

Text analysis

A close cousin of the Handwriting API, the Text Analytics API allows for some pretty neat text analysis as well. Search through hundreds of documents, comb through customer reviews, tweets, and comments, and automatically identify posts for positive or negative sentiment by inputting just a few parameters. The API can also detect up to 120 different languages and identify things like if “times” refers to The New York Times or Times Square. Pretty cool, right?
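
A small sketch of sentiment and language detection with the azure-ai-textanalytics Python package; the endpoint, key, and sample reviews are placeholders.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient("<TEXT_ANALYTICS_ENDPOINT>",
                             AzureKeyCredential("<TEXT_ANALYTICS_KEY>"))

reviews = ["The new dashboard is fantastic and easy to use.",
           "El tiempo de espera fue demasiado largo."]

# Positive/negative/neutral sentiment per document.
for doc in client.analyze_sentiment(documents=reviews):
    print(doc.sentiment, doc.confidence_scores.positive)

# Automatic language detection for the same documents.
for doc in client.detect_language(documents=reviews):
    print(doc.primary_language.name)
```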

Translate languages

Speaking of detecting different languages, the Translator Text API allows you to communicate with your colleagues from all over the map better than ever before. Start typing “Hello, it’s nice to meet you” into your app and the API can translate you and your colleagues’ entire conversation.

The Translator Text API can show text in different alphabets, translate Chinese characters to PinYin, display any of the supported transliteration languages in the Latin alphabet, and even show words written in the Latin alphabet in non-Latin characters such as Japanese, Hindi, or Arabic, all with some simple code. The API can be integrated into your apps, websites, tools, and solutions and allows you to add multi-language user experiences in more than 60 languages. This API is used by companies, like eBay, worldwide for website localization, e-commerce, customer support, messaging applications, bots, and more to provide quick and automatic translations for all their worldly customers.
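
The Translator Text API is a plain REST call; here is a hedged Python sketch using the requests library, with the subscription key and resource region as placeholders.

```python
import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "to": ["fr", "ja"]}
headers = {
    "Ocp-Apim-Subscription-Key": "<TRANSLATOR_KEY>",
    "Ocp-Apim-Subscription-Region": "<RESOURCE_REGION>",
    "Content-Type": "application/json",
}
body = [{"Text": "Hello, it's nice to meet you"}]

# One request returns the same sentence in every requested target language.
response = requests.post(endpoint, params=params, headers=headers, json=body)
for translation in response.json()[0]["translations"]:
    print(translation["to"], ":", translation["text"])
```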

Translator Text can also translate languages in real time through video/audio input so you can seamlessly communicate with colleagues around the world via video chat. It even converts video to written text, which makes content accessible for those who are hearing or visually impaired.

AI for Good

While all these services are great for automating business and personal projects, they can be used for much more. Last fall, Microsoft announced AI for Humanitarian Action: a new $40 million, five-year program that uses the power of AI to help the world recover from disasters, address the needs of children, protect refugees and displaced people, and promote respect for human rights. Part of this initiative is the AI for Good Suite, a five-year commitment to solve society’s biggest challenges using AI fundamentals.

One of those challenges is being addressed by long-time Microsoft partner Operation Smile, a nonprofit dedicated to repairing cleft lips and palates across the globe. Through the use of machine vision AI and facial modeling, surgeons can compare pre- and post-surgery outcomes, rank the most optimal repairs, and provide that data back to Operation Smile. From there, the organization can identify their top-performing surgeons and enable them to teach others how to improve their cleft repair techniques through videos that can be accessed around the globe.

Operation Smile is supercharging their doctors’ talents with technology to increase quality of life throughout the world. By utilizing AI, Operation Smile can help more children than ever before!

With AI, the sky is the limit. And who knows—you just might discover the next best innovation in AI technology.

Learn more

Learn more about what you can do with Cognitive Services

Get certified as an Azure AI Engineer

Azure Security Center exposes crypto miner campaign

Azure Security Center discovered a new cryptocurrency mining operation on Azure customer resources.
This operation takes advantage of an old version of a known open source CMS, with a known RCE vulnerability (CVE-2018-7600) as the entry point; after using the CRON utility for persistence, it mines the “Monero” cryptocurrency using a newly compiled binary of the “XMRig” open-source crypto mining tool.

Azure Security Center (ASC) spotted the attack in real-time, and alerted the affected customer with the following alerts:

Suspicious file download – Possible malicious file download using wget detected
Suspicious CRON job – Possible suspicious scheduling tasks access detected
Suspicious activity – ASC detected periodic file downloads and execution from the suspicious source
Process executed from suspicious location

The entry point

Following the traces the attacker left behind, we were able to track the entry point of this malware and conclude that it originated by leveraging a remote code execution vulnerability in a known open source CMS, CVE-2018-7600.

This vulnerability is exposed in an older version of this CMS and is estimated to impact a large number of websites that are using out of date versions. The cause of this vulnerability is insufficient input validation within an API call.

The first suspicious command line we noticed on the affected Linux machines was:

Decoding the base64 part of the command line reveals logic that periodically downloads and executes a bash script file, using the CRON utility:

The URL path also includes reference to the CMS name – another indication for the entry point (and for a sloppy attacker as well).

We also learned, from the telemetry collected from the compromised machines, that this first command line executes within the “apache” user context, and within the relevant CMS working directory.

We examined the affected resources and discovered that all of them were running an unpatched version of the relevant CMS, which exposes them to a highly critical security risk that allows an attacker to run malicious code on the exposed resource.

Malware analysis

The malware uses the CRON utility (the Unix job scheduler) for persistence by adding the following line to the CRON table file:

This results in the download and execution of a bash script file every minute and allows the attacker to issue command-and-control instructions using bash scripts.

The bash file (as we captured it at the time) downloads the binary file and executes it (as seen in the image above).
The binary checks whether the machine is already compromised and, depending on the number of processors the machine has, downloads another binary file using the HTTP 1.1 POST method.

At first sight, the second binary seems more difficult to investigate since it's clearly obfuscated. Luckily, the attacker chose to use the UPX packer, which focuses on compression rather than obfuscation.

After unpacking the binary, we found a compiled build of the open-source cryptocurrency miner “XMRig,” version 2.6.3. The miner was compiled with its configuration embedded inside it and pulls mining jobs from a mining proxy server; therefore, we were unable to estimate the attacker's number of clients or earnings.

The big picture

By analyzing the behavior of several crypto miners, we have noticed two strong indicators of crypto-miner-driven attacks:

1. Killing competitors – Many crypto-attacks assume that the machine is already compromised, and try to kill other computing-power competitors. They do this by observing the process list, focusing on:

Process name – From popular open source miners to less known mining campaigns
Command line arguments such as known pool domains, crypto hash algorithms, mining protocol, etc.
CPU usage consumption

Another common method we identified is resetting the CRON table, which in many cases is used as a persistence method by other compute-power competitors.

2. Mining pools – Crypto mining jobs are managed by the mining pool, which is responsible for gathering multiple clients to contribute and for sharing the revenue across those clients. Most attackers use public mining pools, which are simple to deploy and use, but once an attacker is exposed, their account might be blocked. Lately we have noticed an increasing number of cases where attackers use their own proxy mining server. This technique helps the attacker stay anonymous, both from detection by a security product within the host (such as Azure Security Center threat detection for Linux) and from detection by the public mining pool.

Conclusion and prevention

Preventing this attack is as easy as installing the latest security updates. A preferred option might be using SaaS (Software as a service) instead of maintaining a full web server and software environment.

Crypto-miner activity is usually easy to detect since it consumes significant resources. A cloud security solution such as Azure Security Center will continuously monitor the security of your machines, networks, and Azure services, and will alert you when unusual activity is detected.

McKesson chooses Google Cloud to help it chart a course to the future

From centralizing data management to using artificial intelligence (AI) to make healthcare predictions, advances in technology are transforming all medical disciplines. And as healthcare organizations strive to keep up with increasing patient expectations, many are looking to the cloud to find new ways to deliver quality, affordable services to patients, members and customers.

Today, we are thrilled to announce that McKesson has selected Google Cloud as its preferred cloud provider. A Fortune 6 company, McKesson is a global leader in healthcare supply chain management solutions, retail pharmacy, community oncology and specialty care, and healthcare information technology. Its aim is to deliver more value to its customers and the healthcare industry—quickly and efficiently—through common platforms and resources.

McKesson will take advantage of Google Cloud in numerous ways. The company will use Google Cloud Platform’s managed services, as well as healthcare-specific services such as the Cloud Healthcare API, to help enhance its platforms and applications. It will use analytics on Google Cloud to make data-driven decisions for product manufacturing, specialty drug distribution, and pharmacy retail operations. Also, McKesson will migrate and modernize the mission-critical SAP environment it uses to run its business to Google Cloud. Through the power of the cloud, McKesson hopes to create and modernize next generation solutions to deliver better healthcare—one patient at a time.

“This partnership will support our continued digital transformation,” said Andrew Zitney, senior vice-president, CTO of McKesson Technology. “It will not only accelerate and expand our strategic objectives, it will also help fuel next generation innovation by driving new technologies, advancing new business models and delivering insights.”

As we evolve to a more digitally-based healthcare environment, cloud computing will change how healthcare providers deliver quality, affordable services to their patients, members and customers. We believe our collaboration with McKesson will bring significant value to the healthcare ecosystem by building on Google Cloud’s secure, flexible and connected infrastructure to create and deploy better healthcare solutions.

Leveraging AI and digital twins to transform manufacturing with Sight Machine

In the world of manufacturing, the Industrial Internet of Things (IIoT) has come, and that means data. A lot of data. Smart machines, equipped with sensors, add to the large quantity of data already generated from quality systems, MES, ERP and other production systems. All this data is being gathered in different formats and at different cadences making it nearly impossible to use—or to deliver business insights. Azure has mastered ingesting and storing manufacturing data with services such as Azure IoT Hub and Azure Data Lake, and now our partner Sight Machine has solved for the other huge challenge: data variety. Sight Machine on Azure is a leading AI-enabled analytics platform that enables manufacturers to normalize and contextualize plant floor data in real-time. The creation of these digital twins allows them to find new insights, transform operations, and unlock new value.

Data in the pre-digital world

Manufacturers are aware of the untapped potential of production data. Global manufacturers have begun investing in on-premises solutions for capturing and storing factory floor data. But these pre-digital world methods have many disadvantages. They result in siloed data, uncontextualized data (raw machine data with no connection to actual production processes), and limited accessibility (engineers and specialists are required to access and manipulate the data). Most importantly, this data is only accessed in a reactive manner: it does not reflect real-time conditions. It can’t be used to address quality and productivity issues as they occur, or to predict conditions that might impact output.

Cloud-based manufacturing intelligence

Sight Machine’s Digital Manufacturing Platform, built on Azure, harnesses artificial intelligence, machine learning, and advanced analytics. It continuously ingests and transforms enormous quantities of production data into actionable insight, such as identifying vulnerabilities in quality and productivity across the enterprise.

Sight Machine’s platform leverages the IoT capabilities of Azure to ingest data from plant-floor machines (PLC and other machine data). Azure IoT Hub and Azure Stream Analytics process the data in real time and store it in Azure Blob Storage. Sight Machine’s AI Data Pipeline dynamically integrates this data with other production sources, such as ERP data from Dynamics AX and analyses generated by Azure Machine Learning service and HDInsight and stored in Azure Data Lake. By combining all this data, Sight Machine creates a digital twin of the entire production process. Its analytics and visualization tools leverage this digital twin to deliver real-time information to the user. Integration with Azure Active Directory ensures the right engineers can access the right data and analysis tools.
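
As a rough sketch of the ingestion step described above, the following Python snippet sends one machine reading to Azure IoT Hub using the azure-iot-device SDK. The connection string, device name, and payload fields are hypothetical placeholders, and the downstream Stream Analytics job and Blob Storage output are configured separately in Azure.

import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string for a device registered in your IoT Hub.
CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"

def send_reading(client: IoTHubDeviceClient, reading: dict) -> None:
    # Serialize one plant-floor reading and send it as a device-to-cloud message.
    msg = Message(json.dumps(reading))
    msg.content_type = "application/json"
    msg.content_encoding = "utf-8"
    client.send_message(msg)

if __name__ == "__main__":
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    send_reading(client, {"machine_id": "press-01", "temperature_c": 74.2, "cycle_time_s": 12.8})
    client.shutdown()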

Digital twins = one source of truth

Somewhat contrary to the notion of “twins,” digital twins result in one source of truth—at least in the world of data. The idea is simple: take data from disparate sources and locations—then combine the information in the cloud into digital representations of every machine, line, part, and process. Once a digital twin has been created, it can be stored, managed, analyzed, and presented.

Sight Machine creates digital twins that represent every manufacturing machine, line, facility, supplier, part, batch, and process. Sight Machine’s AI Data Pipeline automates the process of blending and transforming streaming data into fundamental units of analysis, purpose-built for manufacturing. This approach combines edge compute, cloud automation, and management with AI. The benefits include classification, mapping, data transformation, and unified data models that are configurable for every manufacturing environment.
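
To make "unified data models" concrete, here is a hedged illustration of normalizing two differently shaped machine records into one common representation. The field names, source formats, and mapping logic are hypothetical examples, not Sight Machine's actual pipeline.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MachineCycle:
    # One unified unit of analysis for a single machine cycle.
    machine_id: str
    timestamp: datetime
    cycle_time_s: float
    defect: bool

def from_plc(record: dict) -> MachineCycle:
    # Map a raw PLC-style record (epoch seconds, numeric flags) to the unified model.
    return MachineCycle(
        machine_id=record["plc_tag"],
        timestamp=datetime.fromtimestamp(record["ts"], tz=timezone.utc),
        cycle_time_s=record["cycle_ms"] / 1000.0,
        defect=bool(record["reject_flag"]),
    )

def from_quality_system(record: dict) -> MachineCycle:
    # Map a quality-system record (ISO timestamps, text statuses) to the unified model.
    return MachineCycle(
        machine_id=record["asset"],
        timestamp=datetime.fromisoformat(record["inspected_at"]),
        cycle_time_s=float(record["duration_seconds"]),
        defect=record["status"].lower() != "pass",
    )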

Recommended next steps

To learn more about the company, go to the Sight Machine website. To try out the service, go to the Azure Marketplace listing and click Contact me.
Source: Azure

Introducing the App Service Migration Assistant for ASP.NET applications

This blog post was co-authored by Nitasha Verma, Principal Group Engineering Manager, Azure App Service.

In June 2018, we released the App Service Migration Assessment Tool. The Assessment Tool was designed to help customers quickly and easily assess whether a site could be moved to Azure App Service by scanning an externally accessible (HTTP) endpoint. Today we’re pleased to announce the release of an updated version, the App Service Migration Assistant! The new version helps customers and partners move the sites identified by the assessment tool, quickly and easily migrating ASP.NET sites to App Service.

The App Service Migration Assistant is designed to simplify your journey to the cloud with a free, simple, and fast solution for migrating ASP.NET applications from on-premises. You can quickly:

Assess whether your app is a good candidate for migration by running a scan of its public URL (see the sketch after this list).
Download the Migration Assistant to begin your migration.
Use the tool to run readiness checks and general assessment of your app’s configuration settings, then migrate your app or site to Azure App Service via the tool.
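
Purely as a hedged illustration of the kind of externally visible signal a public-URL scan can observe (this is not the assessment tool's actual logic), the Python snippet below fetches a URL and prints response headers that typically identify an IIS-hosted ASP.NET site. It assumes the third-party requests package, and the URL is a placeholder.

import requests

def inspect_site(url: str) -> None:
    # Fetch the URL and print headers commonly sent by IIS-hosted ASP.NET apps.
    resp = requests.get(url, timeout=10)
    for header in ("Server", "X-Powered-By", "X-AspNet-Version"):
        print(f"{header}: {resp.headers.get(header, '<not present>')}")

if __name__ == "__main__":
    inspect_site("https://www.example.com/")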

Keep reading to learn more about the tool, or start your migration now.

Getting started

Download the App Service Migration Assistant. This tool works with ASP.NET sites hosted on IIS 7.0 and above, and it will migrate site content and configuration to your Azure App Service subscription using either a new or existing App Service plan.

How the tool works

The Migration Assistant tool is a local agent that performs a detailed assessment and then walks you through the migration process. The tool performs readiness checks as well as a general assessment of the web app’s configuration settings.

Once the application has received a successful assessment, the tool will walk you through the process of authenticating with your Azure subscription and then prompt you to provide details on the target account and App Service plan along with other configuration details for the newly migrated site.

The Migration Assistant tool will then move your site to the target App Service plan while also configuring Hybrid Connections, should that option be selected.
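
The Migration Assistant performs the target provisioning for you. Purely for orientation, a rough equivalent using the Azure SDK for Python (azure-mgmt-web) might look like the sketch below; the resource group, names, region, and SKU are placeholder examples, and exact model and method names can vary by SDK version.

from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import AppServicePlan, SkuDescription, Site

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "migrated-sites-rg"
LOCATION = "westus2"

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create (or update) the App Service plan that will host the migrated site.
plan = client.app_service_plans.begin_create_or_update(
    RESOURCE_GROUP,
    "migrated-sites-plan",
    AppServicePlan(location=LOCATION, sku=SkuDescription(name="S1", tier="Standard")),
).result()

# Create the web app on that plan; site content is deployed separately by the tool.
site = client.web_apps.begin_create_or_update(
    RESOURCE_GROUP,
    "contoso-migrated-site",
    Site(location=LOCATION, server_farm_id=plan.id),
).result()

print(f"Created {site.default_host_name}")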

Database migration and Hybrid Connections

Our Migration Assistant is designed to migrate the web application and associated configurations, but it does not migrate the database. There are two options for your database:

Use the SQL Migration Tool
Leave your database on-premises and connect to it from the cloud using Hybrid Connections

When used with App Service, Hybrid Connections allows you to securely access application resources in other networks – in this case an on-premises SQL database. The migration tool configures and sets up Hybrid Connections for you, allowing you to migrate your site while keeping your database on-premises to be migrated at your leisure.
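
Because Hybrid Connections forwards traffic by host name and port, the application's database configuration can stay exactly the same after migration. A hedged illustration follows, written in Python with pyodbc for brevity (for an ASP.NET site the equivalent is an unchanged connection string in web.config); the server, database, and credentials are placeholders.

import pyodbc

# This connection string is identical on-premises and in App Service once a
# Hybrid Connection to onprem-sql.corp.contoso.com:1433 has been configured.
# Assumes an installed ODBC Driver for SQL Server; all names are placeholders.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=onprem-sql.corp.contoso.com,1433;"
    "DATABASE=InventoryDb;UID=app_user;PWD=<secret>"
)

conn = pyodbc.connect(CONN_STR)
row = conn.cursor().execute("SELECT @@VERSION").fetchone()
print(row[0])
conn.close()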

Supported configurations

The tool should migrate most modern ASP.NET applications, but there are some configurations that are not supported. These include:

IIS version less than 7.0
Dependence on ISAPI filters
Dependence on ISAPI extensions
Bindings that are not HTTP or HTTPS
Endpoints that are not port 80 for HTTP, or port 443 for HTTPS
Authentication schemes other than anonymous
Dependencies on applicationhost.config settings made with a location tag
Applications that use more than one application pool
Use of an application pool that uses a custom account
URL Rewrite rules that depend on global settings
Web farms – specifically shared configuration

You can find more details on what the tool supports, as well as workarounds for some unsupported sites, on the documentation page.

You can also find more details on App Service migrations on the App Service Migration checklist.

What’s next

We plan to continue adding functionality to the tool in the coming months, with the most immediate priorities being additional ASP.NET scenarios and support for additional web frameworks, such as Java and PHP.

If you have any feedback on the tool or would like to suggest improvements, please submit your feature requests on our GitHub page.
Source: Azure