Managing your resources with Azure Cloud Shell

Back in May this year we announced the public preview of Azure Cloud Shell. If you haven’t tried it yet, Azure Cloud Shell gives you a new way to manage your resources in the cloud. It’s a browser-based shell experience, which means it’s accessible from virtually anywhere. It authenticates with your Azure account so you can remotely access your Azure resources, and it even attaches your Azure file share so your stored scripts are always at your fingertips, no matter which machine you use. This lets you manage resources on the go from any browser, or even from the Azure Mobile App.

On today’s Microsoft Mechanics, Rick and I demonstrate how you can use Bash or PowerShell (currently in private preview) within your browser to troubleshoot or automate your most common management tasks.

Persisting your files and working from anywhere

In Azure there are thousands of containers with configured Cloud Shell environments waiting for you to connect. These are geo-diversified, so we assign you an instance in a region geographically close to you.

Once the connection is established, Cloud Shell attaches your specified Azure file share containing all the scripts and PowerShell modules that you have saved there.

Using Cloud Shell you don’t need to worry about different versions of Azure CLI or installing anything on your machine. Microsoft maintains and updates Cloud Shell on your behalf and includes commonly used CLI tools such as kubectl, git, Azure tools, text editors, and more. Cloud Shell also includes language support for several popular programming languages such as Node.js, .NET, and Python.

Launching Cloud Shell from the browser or your phone

You can launch Cloud Shell while logged into the Azure Portal by clicking the “>_” button in the upper right corner near your name, right between notifications and settings. I know it is calling out to you… We’ve even instrumented many of our tutorials on docs.microsoft.com with Cloud Shell so you can try out the commands directly within those articles. And if you’re not near a computer, you can even launch Cloud Shell from the Azure Mobile App on your phone.

Try Cloud Shell today

If you have an Azure subscription, even a trial, you can try Cloud Shell today. The preview for Bash is enabled now, and you can register for the PowerShell private preview simply by going to https://aka.ms/PSCloudSignup and answering six simple questions. Once you’re up and running, check out the show for a few samples and tips about what to try, and let us know what you think.
Source: Azure

Azure Time Series Insights API, Reference Data, Ingress, and Azure Portal Updates

Today we are announcing the release of several updates to Time Series Insights based on customer feedback. Time Series Insights is a fully-managed analytics, storage, and visualization service that makes it simple to explore and analyze billions of IoT events simultaneously. It allows you to visualize and explore time series data streaming into Azure in minutes, all without having to write a single line of code. For more information about the product, pricing, and getting started, please visit the Time Series Insights website. We also offer a free demo environment to experience the product for yourself. 

Smarter environment management with ingress telemetry

We know that administrators want to plan for and manage their Time Series Insights environments with usage and health telemetry in the Azure Portal. To help them do this more effectively, we have added ingress and storage monitoring at the Time Series Insights environment level in the Portal. We are also working on adding metric alerts, so you can be automatically informed of critical information about the status of your environment. We will continue to add environment telemetry to the Azure Portal in the future – be on the lookout for updates in the coming months.

In the Overview page of the portal, you can now see the following stats:

Ingress received messages: Count of messages read from Azure Event Hubs and Azure IoT Hub.

Ingress received bytes: Count of raw bytes read from event sources. The raw count usually includes the property names and values.

Ingress stored bytes: Total size of events stored and available for query.

Ingress stored events: Count of flattened events stored and available for query.

Below is a look at the environment telemetry in the Azure Portal.

Make data easier to visualize and analyze with better reference data management

We’ve also heard feedback from our customers that they need an easier way to augment their device telemetry with device metadata, without wading through lengthy documentation. Today, we are happy to announce that our new Reference Data API documentation includes detailed samples showing how to configure, upload, and update your reference data programmatically. By importing device metadata as reference data, customers can tag and add dimensions to their data that make it easier to slice and filter. For customers who are not using our API, we are working hard to deliver a solution built into our UX that allows managing reference data visually to accomplish the same scenario. Look for an update to the portal containing this functionality in September.

You can find links to documentation revisions below:

Create a reference data set for your Time Series Insights environment using the Azure Portal

Manage reference data for an Azure Time Series Insights environment by using C#

Add the power of Time Series Insights to your apps

Our customers are building both internal and external applications on top of Time Series Insights for a variety of scenarios. Similarly, Microsoft is also using Time Series Insights internally with innovative services like Microsoft IoT Central and Azure IoT’s Connected Factory preconfigured solution (PCS). One of the common asks in this area is the ability to use the query API to search relative time spans, like “now minus one minute,” avoiding the need to reset the search span with every query execution to ensure you are viewing your most recent data.

With this service update, we are improving search span functionality to allow you to define and run repeatable queries over your most recent data with a single query template. With dynamic search spans, we have added a “utcNow” function that returns the current UTC time. We have also added “timeSpan” literals that let you define a period of time, as well as a “sub” function that subtracts time from datetime values.

Here’s an example of what a dynamic search span JSON will look like after the update:

{
  "searchSpan": {
    "from": {
      "sub": {
        "left": { "utcNow": {} },
        "right": { "timeSpan": "PT1M" }
      }
    },
    "to": { "utcNow": {} }
  }
}
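To make the semantics concrete, here is a small Python sketch (purely illustrative; the service evaluates these functions server-side) that resolves the same dynamic span into fixed timestamps:

```python
from datetime import datetime, timedelta, timezone

def resolve_search_span(minutes_back: int) -> dict:
    """Resolve a dynamic search span into concrete UTC timestamps.

    Mirrors the JSON above: from = sub(utcNow, timeSpan "PT1M"),
    to = utcNow -- evaluated locally for illustration only.
    """
    now = datetime.now(timezone.utc)               # "utcNow"
    start = now - timedelta(minutes=minutes_back)  # "sub" with a "timeSpan"
    return {
        "searchSpan": {
            "from": start.isoformat(),
            "to": now.isoformat(),
        }
    }

span = resolve_search_span(1)  # a fresh one-minute window on every call
```

Every execution yields a new window over the most recent minute, which is exactly what the single query template spares you from hard-coding.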

For more information, visit our query syntax documentation page. 

Now supporting more data ingress formats

Finally, we’ve heard from our peers in Azure Stream Analytics that their customers want more flexibility when sending data as a multi-content JSON. The update today includes the ability to ingress multi-content JSON payloads, a useful JSON data format for customers who are optimizing for throughput (common in batching scenarios). For example, the following payload contains five concatenated segments of well-formed JSON:

{"id":"device1","timestamp":"2016-01-08T01:08:00Z"}
{"id":"device2","timestamp":"2016-01-08T01:09:00Z"}
{"id":"device1","timestamp":"2016-01-08T01:08:00Z"}
[
    {"id":"device2","timestamp":"2016-01-08T01:09:00Z"},
    {"id":"device3","timestamp":"2016-01-08T01:10:00Z"}
]
{"id":"device4","timestamp":"2016-01-08T01:11:00Z"}

Now, customers can send any JSON format they want, including single JSON objects, JSON arrays, nested JSON objects/arrays, multiple JSON arrays, multi-content JSON, or any combination thereof. For more details on the JSON formats we support, visit the documentation.
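As a sketch of what the format implies for a consumer (the parser below is hypothetical and not part of any Azure SDK; Time Series Insights does the equivalent parsing on ingress), concatenated JSON can be decomposed with an incremental decoder:

```python
import json

def split_multicontent(payload: str) -> list:
    """Split a multi-content JSON payload (concatenated JSON values)
    into individual events, flattening any top-level arrays."""
    decoder = json.JSONDecoder()
    events, idx = [], 0
    while idx < len(payload):
        # Skip whitespace between concatenated segments.
        while idx < len(payload) and payload[idx].isspace():
            idx += 1
        if idx == len(payload):
            break
        value, idx = decoder.raw_decode(payload, idx)
        events.extend(value if isinstance(value, list) else [value])
    return events

payload = (
    '{"id":"device1","timestamp":"2016-01-08T01:08:00Z"}\n'
    '{"id":"device2","timestamp":"2016-01-08T01:09:00Z"}\n'
    '[{"id":"device3","timestamp":"2016-01-08T01:10:00Z"},\n'
    ' {"id":"device4","timestamp":"2016-01-08T01:11:00Z"}]'
)
events = split_multicontent(payload)  # 4 individual events
```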

We are excited about these new updates, but we are even more excited about what’s to come, so stay up to date on all things Time Series Insights by following us on Twitter. Our peers in the Big Data Group are also working on some interesting things as they build the world’s most powerful platform for data analytics at scale. Learn more about their big data journey on their website. 
Source: Azure

Why your team needs an Azure Stack Operator

Azure Stack is an extension of Azure, bringing the agility and fast-paced innovation of cloud computing to on-premises environments. With the great power of Azure in your own datacenter comes the responsibility of operating the cloud – Azure Stack.

At the Microsoft Ignite 2016 conference, we announced a set of modern IT pro job roles for the cloud era, along with resources to help organizations transition to the cloud. This year, with a more focused effort on accelerating customers’ readiness for Azure, we’ve published a set of Azure Learning Paths for Azure Administrator, Azure Solution Architect, Node.js Developer on Azure, and .NET Developer on Azure. Associated with each learning path is a set of free, self-paced online courses to help you quickly pick up the skills you need to make an impact in the chosen job function.

With the introduction of Azure Stack, we’re adding a new Azure job role – Azure Stack Operator. This role manages the physical infrastructure of Azure Stack environments. Unlike Azure, where the operators of the cloud environment are Microsoft employees, with Azure Stack organizations will need people with the right skills to run and operate their own cloud environment. If you haven’t yet, read the Operating Azure Stack blog post to see what tasks this new role will need to master.

The following four modern IT Pro job roles are most relevant to the success of managing and operating an Azure Stack environment:

Azure Stack Operator: Responsible for operating Azure Stack infrastructure end-to-end – planning, deployment and integration, packaging and offering cloud resources and requested services on the infrastructure.
Azure Solution Architect: Oversees the cloud computing strategy, including adoption plans, multi-cloud and hybrid cloud strategy, application design, and management and monitoring.
Azure Administrator: Responsible for managing the tenant segment of the cloud (whether public, hosted, or hybrid) and providing resources and tools to meet their customers’ requirements.
DevOps: Responsible for operationalizing the development of line-of-business apps leveraging cloud resources, cloud platforms, and DevOps practices – infrastructure as code, continuous integration, continuous development, information management, etc.

In the diagram above, the light-brown role names (Azure Solution Architect, Azure Administrator, and DevOps) apply to both Azure and Azure Stack environments. The role in the blue box, Azure Stack Operator, is specific to Azure Stack. “Your Customers” encompasses two groups of Azure Stack users: the Azure Admins, who manage subscriptions, plans, offers, and so on in your Azure Stack environment, and the tenant users of the cloud resources presented by Azure Stack. The tenant users can be DevOps users who develop or operate the line-of-business applications hosted on an Azure Stack cloud environment. They can also be the tenant users of a service provider or an enterprise, accessing the customer applications hosted on Azure Stack.

As you may have realized, running an instance of a cloud platform requires a set of new skills. To help you speed up the knowledge acquisition and skill development journey as an Azure Stack Operator, we are working to enable multiple learning venues:

We are in the process of developing a five-day classroom training course – “Configuring and Operating a Hybrid Cloud with Microsoft Azure Stack”. This course is currently scheduled to be published in September 2017.
We also plan to release a set of free online courses in the next few months:

Azure Stack Fundamentals
Azure Stack Planning, Deployment and Configuration
Azure Stack Operations

If you want to know more about this exciting new job role, Azure Stack Operator, along with other Azure Stack related roles and their corresponding learning programs, come to Ignite 2017 and attend the theater session “THR2017 – Azure Stack Role Guide and Certifications”.

More information:

At Microsoft Ignite this year in Orlando we will have a series of sessions that will educate you on all aspects of Azure Stack. Be sure to review the planned sessions and register your spot today.

The Azure Stack team is extremely customer focused, and we are always looking for new customers to talk to. If you are passionate about hybrid cloud and want to talk with the team building Azure Stack at Ignite, please sign up for our customer meetup.

If you have already registered for Microsoft Ignite but haven’t yet registered for the Azure Stack pre-day, you can add the pre-day to your activity list. And if you are still planning to register for Microsoft Ignite, now is the time to do so; the conference is filling up fast!
Source: Azure

Replicated tables now in preview for Azure SQL Data Warehouse

SQL Data Warehouse is a fully managed, petabyte-scale cloud service for data warehousing. SQL Data Warehouse is highly elastic, enabling you to provision in minutes and scale capacity in seconds. With the public preview release of Replicated tables, the ability to reason over large amounts of data in SQL Data Warehouse at lightning-fast speeds just got faster. Replicated tables is a feature designed to speed up queries by reducing data movement. Data movement happens in distributed data warehouse systems when tables are joined or aggregated in a manner inconsistent with how they are spread across Compute nodes. Replicated tables reduce data movement because a full copy of the data is stored on each Compute node. As a result, queries that previously required data movement steps now run faster.

Prior to the availability of Replicated tables, dimension tables such as a Date dimension were typically implemented with a round-robin distribution type. When joining to fact tables, the query plan would include a data movement step to align the date dimension with the fact table. That extra data movement step added runtime to the overall query execution. With Replicated tables, the join happens directly on the Compute node, between the distributed fact data and a full local copy of the dimension data. Early-adopting customers have seen some query runtimes drop by 40%, and up to a 10x reduction in the number of steps required to complete a query.

How to Create a Replicated table

With Replicated tables available in preview on all data warehouses, you can get started optimizing your queries today. Below is sample syntax for creating a Replicated table:

CREATE TABLE DimDate
(
    DateKey int NOT NULL,
    FullDateAlternateKey date NOT NULL,
    DayNameOfWeek nvarchar(10) NOT NULL,
    CalendarQuarter tinyint NOT NULL,
    CalendarYear smallint NOT NULL,
    CalendarSemester tinyint NOT NULL
)
WITH
(
    DISTRIBUTION = REPLICATE,
    CLUSTERED COLUMNSTORE INDEX
);

Get started with SQL Data Warehouse

Get started with provisioning your SQL Data Warehouse today in the Azure Portal or request an Extended Free Trial.

Visit the Replicated tables Design Guidance documentation for recommendations on designing your SQL Data Warehouse schema.

Learn More

Check out the many resources for learning more about SQL Data Warehouse, including:

What is Azure SQL Data Warehouse?
SQL Data Warehouse best practices
Video library
MSDN forum
Stack Overflow forum

Source: Azure

Plan Backup and Disaster Recovery for Azure IaaS Disks

We recently posted an article explaining Backup and DR for Azure Disks. We encourage Azure IaaS users to refer to this document when planning the right backup and disaster recovery (DR) methodology for their disks. Following are a few considerations.

Durability of Azure Disks

We are excited to report that Azure has consistently delivered enterprise-grade durability for IaaS disks, with an industry-leading zero percent annualized failure rate. This means that Azure has never had a permanent failure of an IaaS disk. For Azure customers, this translates directly into lower cost, better reliability, and smoother operations for the critical applications running on Azure. For example, in most cases you don’t have to build costly RAID solutions for redundancy, because Azure protects your data with three redundant copies stored locally. Even if two different hardware components holding your disks fail at the same time, your data remains durable, and Azure automatically spawns new replicas in the background to replace the lost copies. We built the Azure disks platform with the core consideration that protecting data is critical to any storage platform. With Azure disks, you won’t have to constantly worry about losing your disks in the cloud.

Hardware faults can sometimes result in temporary unavailability of the VM, which is covered by Azure SLA for VM availability. Azure also provides an industry-leading SLA for single VM instances that use Premium Storage disks.

Backup and Disaster Recovery

While the platform has built-in protection against localized hardware failures, you must still plan backup and disaster recovery to safeguard against major incidents that can cause large-scale outages, including catastrophic events like hurricanes, earthquakes, and fires. For DR, you should plan periodic backups of your data to a different geographic location. Please refer to the Backup and DR for Azure Disks article for details.

An important consideration for disk backups is to take “consistent backups”: backups taken at a coordinated, consistent state across all the disks of a VM. When possible, this should also be coordinated with the applications to produce “application-consistent” backups. This is necessary to make sure you can restore the VM and the application to a valid state at recovery time.

The Azure Backup service can be used as the backup solution for your disks, and it works with both Managed Disks and Unmanaged Disks. The Backup service handles the coordination of disks for consistent backups, and it offers a geo-redundant storage (GRS) option for the vault that replicates the backups to a different geographic region for DR.

Another solution is to create “consistent snapshots” of the disks yourself. In this case, you have to handle both the coordination of disks for creating consistent backups and the replication of backups to a different geographic location.

Refer to Backup and DR for Azure Disks article for more details.
Source: Azure

Microsoft Azure expands with two new regions for Australia

I am delighted that Microsoft Azure will be expanding into two new regions in Australia. This increases the number of Azure regions announced across the globe to 42, which is more than any other major cloud provider. Microsoft will become the first major cloud provider to offer regions specifically focused on the needs of the government and its partners in Australia.

The two new regions, available in the first half of 2018, are intended to be capable of handling sensitive Unclassified data as well as Protected data. Protected is the data classification for the first level of national-security-classified information in Australia. This is being achieved through a strategic partnership with the Australian-owned firm Canberra Data Centres (CDC). CDC is the preeminent specialist datacenter provider for secure government data in Australia, with four modern Canberra-based facilities that hold the accreditations and security controls to handle even Top Secret classified data. Government customers currently using the secure Intra-Government Communications Network (ICON) will be able to connect directly to Azure in Canberra.

Microsoft Azure has announced 42 regions around the world – more than any other cloud provider

This announcement builds on recent news that dozens of Microsoft Azure services have been certified by the Australian Signals Directorate, including services for machine learning, Internet of Things, cybersecurity, and data management. Along with Australian certifications for Office 365 and Dynamics 365, Microsoft is recognized as the most complete and trusted cloud platform in Australia. By comparison, other major cloud providers are only certified for basic infrastructure services or remain uncertified for use by the government.

Today, government, healthcare, and education organisations are already some of the most rapid adopters of Azure from existing regions in Sydney and Melbourne. 

The Australian Department of Immigration and Border Protection is using Azure for applications that help protect the country’s vast borders. 
Bendigo Hospital in Victoria is building the first hospital-in-the-cloud on Azure, connecting and analysing healthcare data to better care for patients.  
The government in Tasmania is working with an Australian start-up, The Yield, to build the internet of oysters on Azure. 
These are just a few of the many stories of innovation in the Australian public sector that are enabled by Azure.

New regions designed to cater to the needs of government, growing certifications from the Australian Signals Directorate, and a history of empowering the digital transformation of organizations are helping Microsoft become the most trusted, innovative cloud for Australia.

You can read more details about this announcement at the Microsoft Australia News Center.
Source: Azure

Maven: Deploy Java Web Apps to Azure

We have released a new Maven plugin for Azure Web Apps that lets you deploy or redeploy web apps to Azure App Service on Linux or Windows in one easy step.

Azure App Service provides a managed web app environment for your app to run in, which means all you need to worry about is your app code. App Service handles provisioning, load balancing, auto-scaling, and app health monitoring for you. Even though App Service handles these aspects for you, you still have control over all the settings if you want to customize how your environment runs.

Get started right away

Let us start with a Spring Boot application. Clone a Spring Boot sample with configuration:

$ git clone -b private-registry https://github.com/microsoft/gs-spring-boot-docker

Change the directory:

$ cd gs-spring-boot-docker/complete

Add a service principal and your private Docker registry credentials to your Maven settings.xml.

Build the app, containerize it as you always do, and deploy to Azure App Service:

$ mvn clean package docker:build -DpushImage azure-webapp:deploy

Open your Web app! That is it.

The new Maven plugin is an open source project – https://github.com/Microsoft/azure-maven-plugins

Try it

You can deploy a Spring Boot app, a containerized Spring Boot app, or any web app to Azure App Service on Linux or Windows:

Deploy a Spring Boot app to Azure App Service
Deploy a containerized Spring Boot app to Azure App Service
Deploy a containerized Spring Boot app to Azure App Service via Azure Container Registry

Give it a try and let us know what you think (via e-mail or comments below). You can find plenty of additional info about Java on Azure at http://azure.com/java.
Source: Azure

Stretched Hyper-V Cluster on Azure using Express Route

Traditionally, in an on-premises environment we set up stretched clusters across regions to protect against region failure. Microsoft created Azure Site Recovery to give users the capability to move their workloads into Azure during a site or workload outage. In this article, we are going to talk about an alternate approach to protecting your workload if you have a Hyper-V cluster. I will have a follow-up article going into the details of this approach.

The new nested virtualization capabilities added to Azure allow us to run Hyper-V on top of an Azure VM.

Learn more about Hyper-V nested virtualization.

The new Dv3 and Ev3 series VMs allow us to run Hyper-V.

Here I have created a D4s_v3 VM on which I installed Hyper-V. The network on which the VM is placed is under my ExpressRoute circuit. The on-premises environment consists of a two-node Hyper-V cluster managed by SCVMM 2016.

Now we can take one of two approaches to migrate our workloads to Azure.

Create a new cluster on Azure and migrate your VMs from the on-premises cluster to the one in Azure.
Add the Azure VM to the existing cluster and set up a Storage Spaces cluster on Azure.

Approach 1:

Approach 1 is straightforward. We add the VM to the domain and then create a cluster in Azure. You can add an additional domain controller in Azure to manage DNS. Since we have ExpressRoute connectivity between the locations, connectivity should not be a concern.

We can migrate the VM with its storage, sharing no information during the migration, onto the different cluster. We can also set up a replica broker role on both clusters and then start Hyper-V Replica for the VM on-premises. Depending on which ExpressRoute bandwidth you choose, replication may take some time; it took around 20 minutes for my 40 GB Ubuntu VM to be replicated into Azure.
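A quick back-of-envelope check (my own numbers, not a formal sizing guide; it ignores compression and protocol overhead) shows why the ExpressRoute bandwidth matters here – replicating 40 GB in 20 minutes implies roughly a quarter of a gigabit per second of sustained throughput:

```python
def effective_throughput_mbps(size_gb: float, minutes: float) -> float:
    """Sustained throughput in megabits per second implied by moving
    size_gb gigabytes in the given number of minutes.
    Uses decimal units (1 GB = 10**9 bytes); treat as a rough estimate.
    """
    bits = size_gb * 10**9 * 8
    return bits / (minutes * 60) / 10**6

rate = effective_throughput_mbps(40, 20)  # roughly 267 Mbps sustained
```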

The Hyper-V host on Azure shows up during VM migration.

VRHYP is the cluster on-premises and VRHYP2 is the one in Azure. This gives you the capability to offer Hyper-V as a service.

The Ubuntu VM migrated into Azure and running seamlessly.

If required, you can always fail back to the on-premises cluster.

Approach 2:

In Approach 2, we add the VM to the existing Hyper-V cluster and manage it with SCVMM. This approach follows the standard stretched cluster design without a stretched VLAN, as that is not possible in Azure.

We have deployed the same logical switch across to the Hyper-V host running in Azure. We can go deeper in this scenario: create another VM and set up Hyper-V on it as well. Once we have the two nodes up, we can add data disks to the nodes and create Storage Spaces Direct.

Learn more about S2D implementation on Azure.

Storage Spaces Direct can be set up with different resiliency levels, including two-way mirror, three-way mirror, or parity. At least four nodes are required for parity, but you can go with two nodes if you are just getting started.
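To illustrate what those mirror levels mean for capacity planning (a simplified model with hypothetical disk sizes; real Storage Spaces Direct deployments also reserve capacity for repairs, so actual usable space will be lower):

```python
def usable_capacity_tb(nodes: int, tb_per_node: float, data_copies: int) -> float:
    """Simplified usable capacity for an N-way mirror: raw capacity
    divided by the number of data copies kept. Rough sketch only."""
    return nodes * tb_per_node / data_copies

two_way = usable_capacity_tb(2, 10, 2)    # 2 nodes, two-way mirror -> 10.0 TB usable
three_way = usable_capacity_tb(4, 10, 3)  # 4 nodes, three-way mirror -> ~13.3 TB usable
```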

We must make sure that the CSV volume we create is the same size as the one we have on-premises. Now we will use the Storage Replica feature available in Windows Server 2016. This way we keep a replica copy of the on-premises CSV volume in Azure via ExpressRoute.

Learn more about Storage Replica in Windows Server 2016.

We set this up so that if the on-premises cluster fails, the VM can seamlessly start on the Hyper-V host in Azure. This way we do not have to depend on a third-party tool for storage replication.

In the end, I would suggest this as an alternative to Azure Site Recovery. This approach gives you more control over the VM migration scenarios, but you are limited to Server 2016 and Hyper-V.

 

Vinayak Rattan

Partner Technical Consultant
Source: Azure

Azure Data Factory July new features update

We are glad to announce that Azure Data Factory has added more new features in July, including:

Preview for Data Management Gateway high availability and scalability
Skipping or logging incompatible rows during copy for fault tolerance
Service principal authentication support for Azure Data Lake Analytics

We will go through each of these new features one by one in this blog post.

Preview for Data Management Gateway high availability and scalability

You can now associate multiple Data Management Gateway nodes, installed on different machines, with a single logical gateway, so the Data Management Gateway is no longer a single point of failure. This also helps you scale out for better copy performance, and you can choose to scale up each gateway node based on your load. Moreover, Azure Data Factory now provides a richer monitoring experience for gateway status and resource utilization in the Azure portal. Learn more from Data Management Gateway – High Availability and Scalability Preview.

Skipping or logging incompatible rows during copy for fault tolerance

When copying data with the Azure Data Factory Copy Activity, you now have different options for dealing with incompatible data between the source and sink data stores. You can choose either to abort and fail the copy run upon encountering incompatible data (the default behavior), or to continue copying all the data by skipping the incompatible rows. Additionally, you have the option to log the incompatible rows to Azure Blob storage, so you can examine the cause of the failure, fix the data at the source, and retry. The feature is available via both the Copy Wizard and JSON editing. Learn more about supported scenarios and configuration from the documentation page, Copy Activity fault tolerance – skip incompatible rows.
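To make the JSON editing path concrete, here is a sketch (built in Python for readability; the property names follow the documentation page referenced above, and the linked service and path names are hypothetical) of the relevant Copy Activity typeProperties:

```python
import json

# Illustrative fragment of a Copy Activity definition with fault
# tolerance enabled. Verify property names against the linked docs;
# the linked service name and blob path below are made up.
copy_type_properties = {
    "source": {"type": "BlobSource"},
    "sink": {"type": "SqlSink"},
    "enableSkipIncompatibleRow": True,   # skip bad rows instead of failing
    "redirectIncompatibleRowSettings": { # log skipped rows to Azure Blob
        "linkedServiceName": "AzureStorageLinkedService",  # hypothetical
        "path": "redirectcontainer/erroroutput",           # hypothetical
    },
}
print(json.dumps(copy_type_properties, indent=2))
```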

Service principal authentication support for Azure Data Lake Analytics

To use the U-SQL Activity, Azure Data Factory now supports service principal authentication for Azure Data Lake Analytics, as it already does for Azure Data Lake Store, in addition to the existing user-credential authentication. We recommend service principal authentication to avoid the periodic token expiration that comes with user credentials, which matters especially for scheduled U-SQL executions. Learn more about supported authentication types and configuration from the documentation, Transform data by running U-SQL scripts on Azure Data Lake Analytics.

 

Those are the new features we introduced in July. Do you have questions or feedback? Share your thoughts with us on the Azure Data Factory forum or feedback site; we’d love to hear more from you.
Source: Azure

Using Azure Analysis Services over Azure SQL DB and DW

In April we announced the general availability of Azure Analysis Services, which evolved from the proven analytics engine in Microsoft SQL Server Analysis Services. The success of any modern data-driven organization requires that information be available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required, including finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics, before they can explore the data to derive insights.

With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users so that all they need to do is connect to the model from any BI tool and immediately explore the data and gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

In the video below, I demonstrate to Scott Hanselman how to use Azure Analysis Services over SQL Data Warehouse. In this scenario, Azure Analysis Services serves two major functions:

It provides a semantic model that acts as a lens through which your business users look to get to their data. It presents your underlying database in a way that makes it easy for users to query, without needing to change the structure of that database.
It provides a very fast in-memory data caching layer that can answer queries in a fraction of a second. The cache gives users interactive querying over billions of rows of data while reducing the load on the underlying data store.

See the whole video:

You can try the Azure Analysis Services web designer today to build your own models by linking to it from a server in the Azure portal.

Submit your own ideas for features on our feedback forum. Learn more about Azure Analysis Services and the Azure Analysis Services web designer.
Source: Azure