Real-time serverless applications with the SignalR Service bindings in Azure Functions

Since our public preview announcement at Microsoft Ignite 2018, thousands of developers worldwide have leveraged the Azure SignalR Service bindings for Azure Functions every month to add real-time capabilities to their serverless applications. Today, we are excited to announce the general availability of these bindings in all global regions where Azure SignalR Service is available!

SignalR Service is a fully managed Azure service that simplifies the process of adding real-time web functionality to applications over HTTP. This real-time functionality allows the service to push messages and content updates to connected clients using technologies such as WebSocket. As a result, clients are updated without the need to poll the server or submit new HTTP requests for updates.

Azure Functions provides a productive programming model based on triggers and bindings for accelerated development and serverless hosting of event-driven applications. It enables developers to build apps using the programming languages and tools of their choice, with an end-to-end developer experience that spans from building and debugging locally, to deploying and monitoring in the cloud. Combining Azure SignalR Service with Azure Functions using these bindings, you can easily push updates to the UI of your applications with just a few lines of code. Thanks to the triggers supported in Azure Functions, which start executing your code in response to an event, the source of those updates can be data from other Azure services or from any service able to communicate over HTTP.

A common scenario worth mentioning is updating the UI of an application based on modifications made to a database. Using a combination of the Cosmos DB change feed, Azure Functions, and SignalR Service, you can automate these UI updates in real time with just a few lines of code: one function registers the clients that will receive the updates, and another pushes the updates themselves. This fully managed experience is a great fit for event-driven scenarios and enables the creation of serverless backends and applications with real-time capabilities, reducing development time and operations overhead.
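
To make this concrete, here is a minimal sketch of the pushing half of that scenario: a Cosmos DB-triggered JavaScript function that fans document changes out to connected clients through the SignalR output binding. The hub, binding, and target names used here (updates, signalRMessages, documentUpdated) are illustrative placeholders; the trigger and output binding themselves are declared in the function's function.json file.

// index.js - runs whenever the Cosmos DB change feed delivers new or changed documents.
// Assumes function.json declares a "cosmosDBTrigger" input named "documents" and a
// "signalR" output named "signalRMessages" with hubName set to "updates".
module.exports = async function (context, documents) {
    if (documents && documents.length > 0) {
        // Each entry becomes one SignalR message that invokes the "documentUpdated"
        // method on every connected client, passing the changed document as argument.
        context.bindings.signalRMessages = documents.map(doc => ({
            target: 'documentUpdated',
            arguments: [doc]
        }));
    }
};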

Using the Azure SignalR Service bindings for Azure Functions, you will be able to:

Use SignalR Service without dependency on any application server for a fully managed, serverless experience.
Build serverless real-time applications using all Azure Functions generally available languages: JavaScript, C#, and Java.
Leverage the SignalR Service bindings with all event triggers supported by Azure Functions to push messages to connected clients in real time.
Use App Service Authentication with SignalR Service and Azure Functions for improved security and out-of-the-box, fully managed authentication.
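
On the client-registration side, connected clients first need the service URL and an access token. In a serverless setup this is typically served by a small HTTP-triggered "negotiate" function using the SignalRConnectionInfo input binding. A minimal sketch, again with the bindings declared in function.json (the binding name connectionInfo and hub name updates are placeholders):

// negotiate/index.js - returns SignalR Service connection info to the calling client.
// Assumes function.json declares an HTTP trigger plus a "signalRConnectionInfo" input
// binding named "connectionInfo" with hubName set to "updates".
module.exports = async function (context, req, connectionInfo) {
    // The binding generates the service endpoint URL and access token for this client
    context.res = { body: connectionInfo };
};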

Next steps

Check out the documentation, “Build real-time apps with Azure Functions and Azure SignalR Service.”
Follow the quickstart, “Create a chat room with Azure Functions and SignalR Service using JavaScript” to get started.
Check out more code samples on the GitHub repo.
Sign up for your Azure account for free.

We'd love to hear your feedback and comments. You can reach the product team on the GitHub repo, or by email.
Source: Azure

Microsoft opens first datacenters in Africa with general availability of Microsoft Azure

Today, I am pleased to announce the general availability of Microsoft Azure from our new cloud regions in Cape Town and Johannesburg, South Africa. Nedbank, Peace Parks Foundation, and eThekwini water are just a few of the organizations in Africa leveraging Microsoft cloud services today and will benefit from the increased computing resources and connectivity from our new cloud regions.

The launch of these regions marks a major milestone for Microsoft as we open our first enterprise-grade datacenters in Africa, becoming the first global provider to deliver cloud services from datacenters on the continent. The new regions provide the latest example of our ongoing investment to help enable digital transformation and advance technologies such as AI, cloud, and edge computing across Africa.

By delivering the comprehensive Microsoft Cloud — comprising Azure, Office 365, and Dynamics 365 — from datacenters in a given geography, we offer scalable, available, and resilient cloud services to companies and organizations while meeting data residency, security, and compliance needs. We have deep expertise in protecting data and empowering customers around the globe to meet extensive security and privacy requirements, including offering the broadest set of compliance certifications and attestations in the industry.

With 54 regions announced worldwide, more than any other cloud provider, Microsoft’s global cloud infrastructure will connect the new regions in South Africa with greater business opportunity, help accelerate new global investment, and improve access to cloud and Internet services across Africa.

Accelerating digital transformation in Africa

As we execute our expansion strategy, we consider the demand for locally delivered cloud services and the opportunity for digital transformation in the market. According to a study from IDC, spending on public cloud services in South Africa will nearly triple over the next five years, and the adoption of cloud services will generate nearly 112,000 net-new jobs in South Africa by the end of 2022. The increased utilization of public cloud services and the additional investments into private and hybrid cloud solutions will enable organizations in South Africa to focus on innovation and building digital businesses at scale.

Nedbank, a leading African bank that serves a diverse client base in South Africa and the rest of Africa, is pursuing a transformation strategy with the Azure cloud platform to enable its digital aspirations. Microsoft has had a long relationship with Nedbank, which has culminated in enabling its migration to the cloud to help increase its competitiveness, agility, and customer focus. Azure also provides compliance technologies that help Nedbank increase data privacy and security, which are primary concerns of its customers, regulators, and investors. Nedbank has adopted a hybrid, multi-vendor cloud strategy in which Microsoft is an integral partner.

The Peace Parks Foundation, in collaboration with Cloudlogic, uses Azure to rapidly deploy infrastructure and solutions in far-flung protected spaces, and to process a considerable volume of data on at-risk species and wildlife across multiple conservation areas spanning thousands of kilometers. In its efforts to sustain delicate ecosystems and keystone species, such as the black and white rhinoceros, Peace Parks Foundation processes tens of thousands of images captured each month on wildlife cameras in remote areas to monitor possible poaching activity. In the future, Peace Parks will leverage the new cloud infrastructure for radio over Internet protocol, a high-tech solution to a low-tech problem, to improve radio communication across remote and isolated areas.

eThekwini water is a unit of the eThekwini Municipality in Durban, South Africa, responsible for providing the water and sanitation services critical to sustaining life for 3.5 million residents across a 2,000+ square kilometer service area. In partnership with Cloudlogic, eThekwini water is using Azure for critical application monitoring as well as for site failover and disaster recovery initiatives. It will benefit from locally delivered cloud services to improve the performance of real-time reporting and monitoring of water infrastructure 24 hours a day, seven days a week.

Empowering people and organizations across Africa

Microsoft has long been working to support organizations, local startups, and NGOs in Africa that have the potential to solve some of the biggest problems facing humanity, such as the scarcity of water and food, as well as economic and environmental sustainability.

In 2013, we launched Microsoft 4Afrika investing in start-ups, partners, small-to-medium enterprises, governments, and youth on the African continent. The program is focused on delivering affordable access to the Internet, developing skilled workforces, and investing in local technology solutions. Africa has the potential to help lead the technology revolution; therefore, Microsoft is empowering organizations and people to drive economic development, inclusive growth, and digital transformation. 4Afrika is Microsoft’s business and market development engine on the continent, which is preparing the market to embrace cloud technology.

We have also extended FarmBeats, an end-to-end approach to help farmers benefit from technology innovation at the edge, to Nairobi, Kenya. FarmBeats strives to enable data-driven farming as we believe that data, coupled with the farmer’s knowledge and intuition about his or her farm, can help increase farm productivity and reduce costs. The new effort in Nairobi will be focused on addressing the specific challenges of farming in Africa with the intent of expanding to other African countries.

Bringing the complete cloud to Africa

The new cloud regions in Africa are connected with Microsoft’s other regions via our global network, one of the largest and most innovative on the planet, which spans more than 100,000 miles (161,000 kilometers) of terrestrial fiber and subsea cable systems to deliver services to customers. We’ve expanded our network footprint to reach Egypt, Kenya, Nigeria, and South Africa and will be expanding to Angola. Microsoft is bringing the global cloud closer to home for African organizations and citizens through our trans-Arabian paths between India and Europe, as well as our trans-Atlantic systems including Marea, the highest-capacity cable to ever cross the Atlantic.

Azure is the first of Microsoft’s intelligent cloud services to be delivered from the new datacenters in South Africa. Office 365, Microsoft’s cloud-based productivity solution, is anticipated to be available by the third quarter of calendar year 2019, and Dynamics 365, the next generation of intelligent business applications, is anticipated for the fourth quarter.

Follow these links to learn more about the new cloud services in South Africa and the availability of Azure regions and services across the globe.
Source: Azure

Rerun activities inside your Azure Data Factory pipelines

Data integration is complex, with many moving parts. It helps organizations combine data and complex business processes in hybrid data environments. Failures are common in data integration workflows; they can happen because data does not arrive on time, because of functional code issues in your pipelines, because of infrastructure issues, and so on. A common requirement is the ability to rerun failed activities inside your data integration workflows. In addition, you sometimes want to rerun activities to reprocess data after an error upstream in data processing. Azure Data Factory now allows you to rerun activities inside your pipelines. You can rerun the entire pipeline, or choose to rerun downstream from a particular activity inside your data factory pipelines.

Simply navigate to the ‘Monitor’ section in the data factory user experience, select your pipeline run, click ‘View activity runs’ under the ‘Action’ column, select the activity, and click ‘Rerun from activity <activityname>’.

You can also view the rerun history for all your pipeline runs inside the data factory. Simply click on the toggle to ‘View All Rerun History’.

You can also view rerun history for a particular pipeline run by clicking ‘View Rerun History’ under the ‘Actions’ column. This allows you to see the different run attempts that you have made for your pipeline execution.

Learn more about rerunning activities inside your data factory pipelines.

Our goal is to continue adding features to improve the usability of Data Factory tools. Get started building pipelines easily and quickly using Azure Data Factory. If you have any feature requests or want to provide feedback, please visit the Azure Data Factory forum.
Source: Azure

Conversational AI updates for March 2019

We are thrilled to share the release of Bot Framework SDK version 4.3 and use this opportunity to provide additional updates for the Conversational AI releases from Microsoft.

New LINE Channel

Microsoft Bot Framework lets you connect with your users wherever they are. We offer thirteen supported channels, including popular messaging apps like Skype, Microsoft Teams, Slack, Facebook Messenger, Telegram, Kik, and others. We have listened to our developer community and addressed one of the most frequently requested features by adding LINE as a new channel. LINE is a popular messaging app with hundreds of millions of users in Japan, Taiwan, Thailand, Indonesia, and other countries.

To enable your bot in the new channel, follow the “Connect a bot to LINE” instructions. You can also navigate to your bot in the Azure portal. Go to the Channels blade, click on the LINE icon, and follow the instructions there.

SDK 4.3

In the 4.3 release, the team focused on improving and simplifying message and activity handling. The Bot Framework Activity schema is the underlying schema used to define the interaction model for bots. With the 4.3 release, we have streamlined the handling of some activity types in the Bot Framework Activity Schema, exposing simple On* methods and thus simplifying the usage of such activities. On top of the activity handling improvements, for C# we have added MVC support, allowing developers to use a standard ASP.NET Core application and ApiController. As with any release, we fixed a number of bugs, continued to improve LUIS and QnA integration, and further cleaned up our engineering practices. There were additional updates across other areas such as Language, Prompts and Dialogs, and Connectors and Adapters.

Review all changes that went into 4.3 in the detailed Change Log.
Stay up to date with the current list of all issues.

Simplify activity message handling

This release introduces a new way to handle incoming messages through a new class called ActivityHandler. An ActivityHandler receives incoming activities, as defined in the Bot Framework Activity Schema, then delegates the handling of each activity to one or more handler functions based on the activity’s type and other properties. For example, ActivityHandler exposes methods such as:

OnMessage – For dealing with all incoming messages
OnMembersAdded – For dealing with activities representing members being added
OnEvent – For generic event activities

You can find all the methods in ActivityHandler.ts (for JavaScript) and ActivityHandler.cs (for .NET).

Using ActivityHandler, developers can handle events for incoming messages, application events, and a variety of conversation update events. This should make it easier to create common bot behaviors such as sending greetings and welcoming users.

This class provides an extensible base for handling incoming activities in an event-driven way. In JavaScript and TypeScript, the base ActivityHandler class can be used directly as the main activity handler, as seen in the example code below. Developers can also derive subclasses from it to extend the core features.

Here is a small JavaScript code example:

// Import the classes from the botbuilder SDK
const { ActivityHandler, BotFrameworkAdapter } = require('botbuilder');
const restify = require('restify');

// Create the adapter that authenticates requests from the Bot Framework Service
const adapter = new BotFrameworkAdapter({
    appId: process.env.MicrosoftAppId,
    appPassword: process.env.MicrosoftAppPassword
});

// Create the bot "controller" object
const bot = new ActivityHandler();

// Bind a handler for all incoming activities of type message
bot.onMessage(async (context, next) => {
    // do stuff
    await context.sendActivity(`Echo: ${ context.activity.text }`);
    // proceed with further processing
    await next();
});

// Say hello when new members join
bot.onMembersAdded(async (context, next) => {
    await context.sendActivity('Hello! I am a bot!');
    await next();
});

// Create the HTTP server and route incoming activities to the ActivityHandler
const server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 3978);
server.post('/api/messages', (req, res) => {
    adapter.processActivity(req, res, async (context) => {
        // Route incoming activities to the ActivityHandler via the run() method
        await bot.run(context);
    });
});
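
And here, as a small sketch, is the subclassing approach mentioned above; the class name EchoBot is illustrative:

// Derive from ActivityHandler to package the bot's behavior in a single class
class EchoBot extends ActivityHandler {
    constructor() {
        super();
        // Handlers are registered in the constructor via the same on* methods
        this.onMessage(async (context, next) => {
            await context.sendActivity(`Echo: ${ context.activity.text }`);
            await next();
        });
    }
}

// An instance can be used anywhere an ActivityHandler is expected
const echoBot = new EchoBot();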

Web API integration for .NET developers

A core tenet for the Bot Framework team is to drive parity across the .NET and JS implementations. In that spirit, the .NET implementation in ActivityHandler.cs exposes the same functionality, adapted to the capabilities of the language. ASP.NET Core also provides rich infrastructure supporting Web API, which can be easily integrated and used by bot developers. Therefore, in addition to the activity handling improvements, for C# we have added Web API support, allowing developers to use a standard ASP.NET Core application and ApiController.

Here is a simple code snippet for an ASP.NET Web API controller:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Integration.AspNet.Core;

[Route("api/messages")]
[ApiController]
public class BotController : ControllerBase
{
    private readonly IBotFrameworkHttpAdapter _adapter;
    private readonly IBot _bot;

    public BotController(IBotFrameworkHttpAdapter adapter, IBot bot)
    {
        _adapter = adapter;
        _bot = bot;
    }

    [HttpPost]
    public async Task PostAsync()
    {
        // Delegate the processing of the HTTP POST to the adapter.
        // The adapter will invoke the bot.
        await _adapter.ProcessAsync(Request, Response, _bot);
    }
}

Note that the _bot passed to the _adapter.ProcessAsync method is the actual bot implementation; it will handle any activity sent from the adapter, with code very similar to the JS sample above.

QnA Maker and Language Understanding

QnA Maker released Active Learning, which helps developers improve their knowledge bases based on real usage. Active Learning helps identify and recommend question variations for any question and allows users to easily add them to their knowledge base.

For a user query, if QnA Maker returns the top N answers with only small differences in confidence score, Active Learning is triggered. Based on collective feedback across users, QnA Maker shows suggestions for alternate questions in your knowledge base.

To learn more about how QnA Maker Active Learning works and how to use it, read the documentation, “Use active learning to improve knowledge base.”
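
To give a feel for the feedback loop, the client side of active learning boils down to posting feedback records to the Train endpoint of your QnA Maker runtime. The sketch below is illustrative rather than official sample code; the host name, knowledge base ID, and endpoint key are placeholders you would replace with your own values:

// Sketch: sending an active learning feedback record to the QnA Maker runtime.
// The host, kbId, and endpointKey values are placeholders.
const fetch = require('node-fetch');

async function sendFeedback(kbId, endpointKey, userId, userQuestion, qnaId) {
    await fetch(`https://your-qna-runtime.azurewebsites.net/qnamaker/knowledgebases/${kbId}/train`, {
        method: 'POST',
        headers: {
            'Authorization': `EndpointKey ${endpointKey}`,
            'Content-Type': 'application/json'
        },
        // Each feedback record ties a real user question to the QnA pair the user chose
        body: JSON.stringify({ feedbackRecords: [{ userId, userQuestion, qnaId }] })
    });
}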

Templates and the Virtual Assistant Solution Accelerator

Templates and Solution Accelerators provide a mechanism to identify high-growth opportunities for our Conversational AI, Speech, and broader Azure platform. They enable our customers and partners to accelerate the delivery of advanced, transformational conversational experiences that were typically viewed as not possible, or as requiring too much effort to deliver with high quality.

In this latest release we have made significant updates to our Templates and Virtual Assistant solution. A high-level summary of the changes is covered in our Release Notes.

We are happy to share the availability of a JavaScript (TypeScript) version of the Enterprise Template, along with a Yeoman generator. Work has started on the equivalent for the Virtual Assistant. We've also added unit tests to all bots created by the templates, providing a way to automate unit testing of dialogs, along with further enhancements to the telemetry capabilities and the associated Power BI dashboard.

We’ve also delivered a wide range of changes to the Virtual Assistant and Skills including a new template enabling Skills to be quickly created and added to a Virtual Assistant. There is also new support for proactive experiences, enabling the assistant and Skills to proactively reach out to a user or perform long running asynchronous operations.

Also in this release are wide-ranging improvements to the Productivity Skills, including email, calendar, and to-do, as well as the addition of Foursquare support to the Point of Interest Skill and an enhanced Web Chat test experience.

Web Chat 4.3

Web Chat is a popular component that lets developers add a messaging interface for their bot to websites or mobile apps. The Web Chat 4.3 release addresses the remaining accessibility issues and popular feature requests, like better indication of connectivity state for users with poor network connections.

To try Web Chat 4.3, follow the instructions on GitHub or explore code samples.

Get started

As we continue to improve our conversational AI tools and framework, we look forward to seeing what conversational experiences you will build for your customers. Get started today!
Source: Azure

Now available: Azure DevOps Server 2019

Following the launch of Azure DevOps in September, we’re pleased to announce the official release of Azure DevOps Server 2019! Previously known as Team Foundation Server (TFS), Azure DevOps Server 2019 brings the power of Azure DevOps into your dedicated environment. You can install Azure DevOps Server 2019 into any datacenter or sovereign cloud and determine when to apply updates.

About Azure DevOps Server

Azure DevOps includes developer collaboration tools which can be used together or independently, including Azure Boards (Work), Azure Repos (Code), Azure Pipelines (Build and Release), Azure Test Plans (Test), and Azure Artifacts (Packages). These tools support all popular programming languages, any platform (including macOS, Linux, and Windows) or cloud, as well as on-premises environments. As with TFS, you control where you install Azure DevOps Server and when you apply updates. If you prefer to let us manage the service, use Azure DevOps Services, which is available in more geographic regions than any other cloud-hosted developer collaboration service.

Download Azure DevOps Server 2019

What’s new?

The release notes describe the major updates from TFS 2018 to Azure DevOps Server 2019, but my key highlights include:

The new navigation, which enables users to easily move between services, is more responsive and provides more space to focus on your work. Note that this is a major UI overhaul – the largest we have done in several years – so please make sure your users are aware of the changes and update the appropriate internal documentation as part of upgrading.
Azure Pipelines has been enhanced in many ways including new Build and Release pages, and support for YAML builds.
In addition to our existing integration between GitHub Enterprise and Azure Pipelines, which has been available in previous versions of TFS, Azure DevOps Server 2019 also enables integration of GitHub Enterprise commits and pull requests with work items in Azure Boards.
Organizations wishing to host Azure DevOps Server on their own virtual machines (VMs) on Azure can use Azure SQL Database instead of managing their own SQL Server VMs.
Azure Artifacts and Release Management licensing has evolved, making it simpler and more cost-effective for most customers.

Getting started

Whether you're evaluating a new installation or planning an upgrade from a previous version of TFS, the following resources can help.

Azure DevOps Server 2019 Release Notes
Download Azure DevOps Server 2019
Product documentation (including the Installation Guide and Upgrade Guide)
System Requirements and Compatibility

Source: Azure

Azure Premium Blob Storage public preview

Today we are excited to announce the public preview of Azure Premium Blob Storage. Premium Blob Storage is a new performance tier in Azure Blob Storage, complementing the existing Hot, Cool, and Archive tiers. Premium Blob Storage is ideal for workloads with high transaction rates or that require very fast access times, such as IoT, telemetry, AI, and scenarios with humans in the loop such as interactive video editing, web content, online transactions, and more.

Our testing shows that both average and 99th-percentile server latency are significantly lower than in our Hot access tier, providing faster and more consistent response times for both reads and writes across a range of object sizes. To realize low end-to-end latency, your application should be deployed to compute instances in the same Azure region as the storage account. For more details, see “Premium Blob Storage – a new level of performance.”

Figure 1 – Latency comparison of Premium and Standard Blob Storage

Premium Blob Storage is available with Locally-Redundant Storage (LRS) and comes with High-Throughput Block Blobs (HTBB), which provides very high and instantaneous write throughput when ingesting block blobs larger than 256KB.

You can store block blobs and append blobs in Premium Blob Storage. To use Premium Blob Storage, you provision a new ‘Block Blob’ storage account in your subscription (see below for details) and start creating containers and blobs using the existing Blob service REST API and/or existing tools such as AzCopy or Azure Storage Explorer.
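
For illustration, here is a rough JavaScript sketch using the v10 SDK (@azure/storage-blob); the account name, key variable, and container/blob names are placeholders, and a block blob account is used exactly like any other blob endpoint here:

// Create a container and upload a block blob to a Premium (block blob) account.
const {
    Aborter, ServiceURL, StorageURL, ContainerURL, BlockBlobURL, SharedKeyCredential
} = require('@azure/storage-blob');

async function main() {
    const credential = new SharedKeyCredential('myblockblobaccount', process.env.STORAGE_KEY);
    const serviceURL = new ServiceURL(
        'https://myblockblobaccount.blob.core.windows.net',
        StorageURL.newPipeline(credential)
    );

    // Containers and blobs work exactly as with Hot/Cool accounts
    const containerURL = ContainerURL.fromServiceURL(serviceURL, 'telemetry');
    await containerURL.create(Aborter.none);

    const blobURL = BlockBlobURL.fromContainerURL(containerURL, 'device-001.json');
    const content = JSON.stringify({ deviceId: 'device-001', temperature: 21.5 });
    await blobURL.upload(Aborter.none, content, Buffer.byteLength(content));
}

main().catch(console.error);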

Pricing and region availability

Premium Blob Storage has a higher data storage cost but a lower transaction cost compared to data stored in the regular Hot tier. This means it can be cost effective, and even less expensive, for workloads with very high transaction rates. Check out the pricing page for more details.

Premium Blob Storage public preview is available in US East, US East 2, US Central, US West, US West 2, North Europe, West Europe, Japan East and Southeast Asia regions.

Object tiering

At present, data stored in Premium Blob Storage cannot be tiered to the Hot, Cool, or Archive access tiers. We are working on supporting object tiering in the future. To move data, you can synchronously copy blobs using the new PutBlockFromURL API (sample code) or AzCopy v10, which supports this API. PutBlockFromURL synchronously copies data server side, which means that the data has finished copying when the call completes and all data movement happens inside Azure Storage.
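
To make that flow concrete, here is a rough sketch directly against the REST API; it is not the official sample, and it assumes both URLs already carry valid SAS tokens and that the source blob fits in a single block (real code would stage multiple blocks for large blobs):

// Server-side copy of a small blob using Put Block From URL plus Put Block List.
const fetch = require('node-fetch');

async function copyBlobServerSide(sourceUrlWithSas, destUrlWithSas) {
    // Block IDs must be base64 encoded and of equal length within one blob
    const blockId = Buffer.from('block-000001').toString('base64');

    // Stage the block on the destination; Azure Storage pulls the bytes from the
    // source itself, so no data flows through the client.
    await fetch(`${destUrlWithSas}&comp=block&blockid=${encodeURIComponent(blockId)}`, {
        method: 'PUT',
        headers: { 'x-ms-copy-source': sourceUrlWithSas, 'x-ms-version': '2018-03-28' }
    });

    // Commit the block list so the destination blob becomes readable
    const blockList = `<?xml version="1.0" encoding="utf-8"?><BlockList><Latest>${blockId}</Latest></BlockList>`;
    await fetch(`${destUrlWithSas}&comp=blocklist`, {
        method: 'PUT',
        headers: { 'x-ms-version': '2018-03-28' },
        body: blockList
    });
}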

How to create a storage account (Azure portal)

To create a block blob storage account using the Azure portal, navigate to the ‘Create storage account’ blade and fill it in:

In Location choose one of the supported regions
In Performance choose Premium
In Account Kind choose Block Blob Storage (preview)


Once you have created the account, you can manage the Premium Blob Storage account, including generating SAS tokens, reviewing metrics, and more.

How to create a storage account (PowerShell)

To create a block blob account, you must first install the preview Az.Storage PowerShell module.

Step 1: Ensure that you have the latest version of PowerShellGet installed.

Install-Module PowerShellGet -Repository PSGallery -Force

Step 2: Open a new PowerShell console and install the Az.Storage module.

Install-Module Az.Storage -Repository PSGallery -RequiredVersion 1.1.1-preview -AllowPrerelease -AllowClobber -Force

Step 3: Open a new PowerShell console and login with your Azure account.

Connect-AzAccount

Once the PowerShell preview module is in place you can create a block blob storage account:

New-AzStorageAccount -ResourceGroupName <resource group> -Name <accountname> -Location <region> -Kind "BlockBlobStorage" -SkuName "Premium_LRS"

How to create a storage account (Azure CLI)

To create a block blob account, you must first install Azure CLI version 2.0.46 or higher, then:

Step 1: Login to your subscription

az login

Step 2: Add the storage-preview extension

az extension add -n storage-preview

Step 3: Create the storage account

az storage account create --location <location> --name <accountname> --resource-group <resource-group> --kind "BlockBlobStorage" --sku "Premium_LRS"

Feedback

We would love to get your feedback at premiumblobfeedback@microsoft.com.

Conclusion

We are very excited to deliver Azure Blob Storage with low and consistent latency through Premium Blob Storage, and we look forward to hearing your feedback. To learn more about Blob Storage, please visit our product page. Also, feel free to follow me on Twitter for more updates.
Source: Azure

Secure server access with VNet service endpoints for Azure Database for MariaDB

This blog post was co-authored by Sumeet Mittal, Senior Program Manager, Azure Networking.

Ensure security and limit access to your MariaDB server with virtual network (VNet) service endpoints, now generally available for Azure Database for MariaDB. VNet service endpoints enable you to isolate connectivity to your logical server so that it is reachable only from a given subnet within your virtual network. Traffic to Azure Database for MariaDB from your VNet always stays within the Azure network, and this direct route is preferred over any routes that carry the traffic through Internet-facing virtual appliances or on-premises networks.

There is no additional billing for virtual network access through VNet service endpoints. The current pricing model for Azure Database for MariaDB applies as is.

Using firewall rules and VNet service endpoints together

Turning on VNet service endpoints does not override firewall rules that you have provisioned on your Azure Database for MariaDB; both remain applicable.

VNet service endpoints don’t extend to on-premises. To allow access from on-premises, you can use firewall rules to limit connectivity only to your public (NAT) IPs.

To learn more about VNet protection, view our documentation, “Use Virtual Network service endpoints and rules for Azure Database for MariaDB.”

Turning on service endpoints for servers with pre-existing firewall rules

When you connect to your server with service endpoints turned on, the source IP of database connections switches to the private IP space of your VNet. Configuration is via the “Microsoft.Sql” shared service tag, which covers all Azure databases, including Azure Database for MariaDB, MySQL, and PostgreSQL, Azure SQL Database and Managed Instance, and Azure SQL Data Warehouse. If your server or database firewall rules currently allow specific Azure public IPs, connectivity breaks until you allow the given VNet/subnet by specifying it in the VNet firewall rules. To ensure connectivity, you can preemptively specify VNet firewall rules before turning on service endpoints by using the IgnoreMissingServiceEndpoint flag.

Support for ASE

As part of general availability, we support service endpoints for App Service Environment (ASE) subnets deployed into your virtual networks.

Next steps

Get started with the service by creating your first Azure Database for MariaDB server using the Azure portal or Azure CLI.
Learn how to configure VNet service endpoints for MariaDB using the Azure portal or Azure CLI.
Reach us by emailing our team AskAzureDBforMariaDB@service.microsoft.com.
File feature requests on UserVoice.
Follow us on Twitter @AzureDBMariaDB to keep up with the latest features.

Source: Azure

Scaling out read workloads in Azure Database for MySQL

For read-heavy workloads that you are looking to scale out, you can use read replicas, which are now generally available to all Azure Database for MySQL users. Read replicas make it easy to scale out horizontally beyond a single database server. This is useful for workloads such as BI reporting and web applications, which tend to have more read operations than writes.

The feature supports continuous asynchronous replication of data from one Azure Database for MySQL server (the “master” server) to up to five Azure Database for MySQL servers (the “read replica” servers) in the same region. Read-heavy workloads can be distributed across the replica servers according to your preference. Replica servers are read-only except for writes replicated from data changes on the master.

What’s supported with read replicas?

You can create or delete replica servers based on your workload’s needs. A master server can support up to five replica servers within the same Azure region. Stopping replication to any replica server makes it a standalone read-write server.

You can easily manage your replica servers using the Azure portal and Azure CLI.

From the Azure portal:

Use Azure Monitor to track replication with the “replication lag in seconds” metric:

From the Azure CLI:

az mysql server replica create -n mydemoreplica1 -g myresourcegroup -s mydemomaster

Below are some application patterns used by our customers and partners that leverage read replicas for scaling workloads.

BI reporting

Data from disparate data sources is processed every few minutes and loaded into the master server. The master server is dedicated to loading and processing, and it is not directly exposed to BI users for reporting or analytics, ensuring predictable performance. The reporting workload is scaled out across multiple read replicas to handle high user concurrency with low latency.

Microservices

In this architecture pattern, the application is broken into multiple microservices, with data modification APIs connecting to the master server while reporting APIs connect to read replicas. The data modification APIs are prefixed with “Set-”, while reporting APIs are prefixed with “Get-”. The load balancer is used to route traffic based on the API prefix, as in the sketch below.
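
A minimal sketch of this pattern in Node.js, using the mysql2 driver; the host names follow the CLI example above, and the table and queries are invented for illustration:

// Route writes to the master and reads to a replica with two connection pools.
const mysql = require('mysql2/promise');

const master = mysql.createPool({
    host: 'mydemomaster.mysql.database.azure.com',
    user: 'admin@mydemomaster', // Azure Database for MySQL uses user@servername logins
    password: process.env.DB_PASSWORD,
    database: 'app',
    ssl: { rejectUnauthorized: true }
});
const replica = mysql.createPool({
    host: 'mydemoreplica1.mysql.database.azure.com',
    user: 'admin@mydemoreplica1',
    password: process.env.DB_PASSWORD,
    database: 'app',
    ssl: { rejectUnauthorized: true }
});

// "Set-" style data modification APIs write to the master...
async function setOrder(order) {
    await master.execute('INSERT INTO orders (id, total) VALUES (?, ?)', [order.id, order.total]);
}

// ..."Get-" style reporting APIs read from the replica
async function getDailyRevenue() {
    const [rows] = await replica.execute(
        'SELECT DATE(created_at) AS day, SUM(total) AS revenue FROM orders GROUP BY day');
    return rows;
}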

Next steps

Get started with the service by creating your first Azure Database for MySQL server using the Azure portal or Azure CLI.
Learn more about read replicas and how to create them in the Azure portal or Azure CLI.
Reach us by emailing our team AskAzureDBforMySQL@service.microsoft.com.
File feature requests on UserVoice.
Follow us on Twitter @AzureDBMySQL to keep up with the latest features.

Source: Azure

Azure Marketplace and Cloud Solution Provider updates – March 2019

In February, Microsoft shared an ambitious vision to continue innovating as a technology provider and to improve the experience for solution developers and service providers when engaging with Microsoft. Our partners are delivering more innovation in AI, expanding their business through more co-selling opportunities, and leveraging distribution options through our commercial marketplaces such as Azure Marketplace and AppSource.

Today, we’re very excited to begin rolling out an initial set of platform changes which open new opportunities for our partners to go to market with Microsoft. This work sets the stage for more enhancements coming this winter and spring that continue to drive partner business acceleration. Get a sneak peek on our public marketplace roadmap.

Microsoft makes Azure Marketplace offers available to CSP channel partners

Microsoft’s Cloud Solution Provider (CSP) partner program is the largest channel program in the industry with more than 60,000 channel partners serving millions of Microsoft customers worldwide. Starting today, ISVs can choose to make their transactable Azure Marketplace offer available for distribution through the CSP channel. Partners in the CSP program will be able to sell, deploy, and bundle Azure services with Azure-optimized ISV software from the marketplace to better serve customers and grow their managed services business.

Within Partner Center’s new marketplace page, CSP partners can discover, evaluate, and learn about all the Azure Marketplace solutions available through the channel. Software-as-a-Service (SaaS) subscriptions can be established through the standard purchase workflow in Partner Center, and Azure resources such as VM or container images may be deployed and procured through the Azure management portal. All of the transactions will now be available on a consolidated invoice making legal, billing, and account support much simpler.

Publisher partners can learn how to create or update offerings for CSP channel availability.

CSP partners can also explore the marketplace page in Partner Center.

Screenshot of new marketplace discovery experience in Partner Center under the “Sell” navigation pane

Expanding the market opportunity for partners with new geographies and business models

As customer adoption of marketplaces continues to grow around the world, we have expanded our global marketplace reach. We are pleased to share that Azure Marketplace has expanded coverage to 53 new countries, allowing partners to sell into a total of 141 countries with 17 currencies.

Last year, we launched SaaS in Azure Marketplace with the ability to pay per month. As enterprise adoption of SaaS continues to grow, customers are looking for a variety of billing options. Today, we are pleased to highlight a new annual billing option for SaaS offers. We are also releasing a new set of tools to reduce procurement complexity for customers and ISV partners. For example, the new standard contract allows ISVs to leverage a unified and common set of terms and conditions for end customers.

Have questions or feedback? See the new or updated resources below. We also invite you to join us in the Microsoft Partner Community for discussion!

Publisher resources

Become a publisher
Publisher Guide for Partners
New: Publish Offerings for Resellers
New: Marketplace Roadmap
Azure Marketplace FAQs
Marketplace Support for Publishers

Cloud Solution Provider resources

Partner Center Documentation
New: Marketplace Offerings in Partner Center
Partner Center API Documentation
CSP Operations Guide

Source: Azure

Azure Communications is hosting an “Ask Me Anything” session!

Have you ever wondered where those service notifications in the Azure Portal and the Azure Status page come from? Curious why some messages appear to have more information than others? Interested in learning more about what goes into an outage statement? This is your chance to find out!

The Azure Communications team will be hosting a special "Ask Me Anything" (AMA) session on Reddit and Twitter. The Reddit session will be held on Monday, March 11th, from 10:00 AM to noon PST. Customers can participate by posting to the /r/Azure subreddit when the AMA is live. The Twitter session will follow soon after on Wednesday, March 13th, from 10:00 AM to noon PST. Be sure to follow @AzureSupport before March 13th and tweet us during the event using the hashtag #AzureCommsAMA.

Who are we?

We are among the first responders during times of crisis – the ones who draft, approve, and publish most customer-facing communications when outages happen. You've probably seen our messages on the Azure Service Health blade in the Portal, the Azure Status webpage, or even on the @AzureSupport Twitter handle. We'd like to think we bridge the gap between customers and the action happening behind the scenes.

What kind of communications do we provide?

Our communications are very crisis-oriented. When there is any kind of service interruption with the Azure platform, we are responsible for making sure our customers are provided with timely and accurate information. This includes information about outages, maintenance updates, and other good-to-know information for customers. We do not, however, manage any advertising or promotional communications.

Where do we communicate?

We have three primary channels for communicating with customers: Azure Service Health in the Portal, the Azure Status webpage, and @AzureSupport on Twitter.

Azure Service Health – Azure Service Health provides personalized guidance directly to customers when issues with the Azure platform affect their resources. Customers can review a personalized dashboard in the Portal, set up targeted notifications (email, SMS, webhook, etc.), receive support, and share details easily.

Azure Status webpage – This public-facing webpage (which does not require signing in) is only used to provide updates for major incidents or when there are known issues preventing access to Azure Service Health in the Portal. Customers should only refer to this page if they are not able to access Azure Service Health.

@AzureSupport on Twitter – In many ways, the @AzureSupport Twitter handle is a dynamic component of our communications process. Twitter allows engineering and support teams to gauge the pulse of the Azure platform through customer engagements on Twitter and act as a complementary resource to the Azure Status webpage. If necessary, @AzureSupport can even be used as a communications medium when Azure Service Health or the Azure Status webpage are not available.

Why are we hosting an AMA?

We’re going to be honest – we want there to be as little a disconnect as possible between customers and the action happening behind the scenes. An AMA provides us with an opportunity to connect with customers on a more intimate, informal level, and allows us to receive feedback directly from customers about some of the real-time decisions that are made during times of crisis. Hosting a multi-channel AMA through both Reddit and Twitter allows us to connect with a broad social community while providing customers with an experience based on transparency that is second to none.

Who will be there? How can I participate?

If you’re on Reddit, subscribe to the /r/Azure subreddit by Monday, March 11th, at 10:00 AM PST. Pochian Lee, Drey Zhuk, and others from the Azure Communications team will be answering questions on Reddit until noon. Just log in and post your questions to get started. If you’re on Twitter, be sure to follow @AzureSupport before Wednesday, March 13th. Starting at 10:00 AM PST, the team will be answering questions on Twitter until noon. Just tweet your questions to @AzureSupport during the event and be sure to include the #AzureCommsAMA hashtag so we don't miss you! While we will only be answering questions live during the event, customers are encouraged to post or tweet their questions any time starting at 10:00 AM on March 8th. This gives customers in different time zones an opportunity to participate.

Questions you may already have:

Why am I impacted and still see green on the Azure Status webpage?

The majority of our outages impact a limited subset of customers and we reach out to the impacted customers directly via the Azure Service Health blade. The Azure Status webpage provides information regarding any major impacting event that customers should be aware of on a broader scale.

Do I raise a support case during an outage?

If, after looking at the Azure Service Health blade to see whether you have been impacted by an outage, you find that your problem doesn’t match the impact we’re observing, we recommend that you create an Azure support case.

If you do not see a notification within the Azure Service Health blade and believe that there is an outage, please create an Azure support case.

If I don't see a notification in the Azure Service Health blade during an outage, should I raise a support ticket?

If you believe you are impacted by an outage, then yes!

How do I get notifications via email, SMS, etc. during an outage?

Customers can receive additional notifications via Azure Service Health and Azure Resource Health.

Azure Service Health – To get notifications during an outage, you need to set up alerts within Azure Service Health to your preference (e.g., you want to be notified of any outage affecting your virtual machines in the eastern US). Find additional information and learn how to request notifications in this documentation.

Azure Resource Health – To get notifications based on the health of your individual resources (e.g., you want to be notified about issues affecting two specific virtual machines), you can configure Resource Health alerts by following the steps in this article.

If I have Service Health alerts set up, why have I not received a notification of an outage?

We try our best to inform impacted customers of an outage using our telemetry, and we are actively working on improving it to make sure we alert all customers who are impacted. We’ve made great progress for certain scenarios where our automated alerting is triggered by high-fidelity monitoring within our system. We’re looking to develop this telemetry further to ensure that the right customers are informed in a timely manner.

Where can I find Azure Service Health communications?

Communications can be seen within the Azure Service Health blade.

Why do I get communications so late in the Portal?

As soon as we’re able to validate customer impact and the services involved, we inform customers immediately. We’re actively working on improving our automation and telemetry to make sure customers are aware in real-time.

Why aren't these communications more visible when I log into the Portal?

We have heard this feedback before and are currently collaborating with partner teams to improve the visibility of the communications in the Azure Portal.

Be sure to subscribe to /r/Azure and follow @AzureSupport on Twitter before March 11th, 2019, in order to participate. We look forward to answering your questions live during the event!
Source: Azure