Two ways to share Azure Advisor recommendations

If your IT organization is like most, you probably work with many different people across many different teams. When it comes to common IT tasks like optimizing your cloud workloads, you might need to interact with several resource owners or even complete a formal review process.

That’s why with Azure Advisor, we’ve made it easy to share recommendations with other people across your teams so you can follow best practices that help you get the most out of Azure. Advisor is a free Azure service that helps you optimize your Azure resources for high availability, security, performance, and cost by providing personalized recommendations based on your usage and configurations.

Here are two ways you can share your Advisor best practice recommendations with your teams.

1. Export a PDF or CSV of your Advisor recommendations

Probably the simplest way to share your Advisor recommendations is by exporting an Advisor recommendation report as a PDF or CSV through the Advisor UI in the Azure portal.

This report shows a summary of your Advisor recommendations by category, subscription, and potential business impact. Then you can easily share it with other teams so the resource owners can take action and optimize their resources for high availability, security, performance, and cost.

If you want to provide a specific view of a subset of your recommendations, you can use the UI filters or drill down into specific categories and recommendations. The recommendation report will only contain what you see on the screen when you generate it, which can help you focus on the most critical optimizations.

2. Use the Advisor API to integrate with your ticketing system or dashboards

The other way to share your Advisor recommendations with other people in your organization is via the Advisor REST API. Using this API, you can connect Advisor with your organization’s ticketing system and assign remediation work, set up an internal working dashboard your teams can review and action, or leverage Advisor’s recommendation data any way you choose.
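As a minimal sketch of what such an integration might start from, the snippet below builds the ARM URL for Advisor's list-recommendations endpoint and tallies a response by category. The helper names are hypothetical, and the api-version value shown may differ from the current one, so check the Advisor API documentation.

```python
# Minimal sketch of working with the Advisor REST API list-recommendations
# endpoint. Helper names are hypothetical; the api-version value may differ
# from the current one, so check the Advisor API documentation.
import json

API_VERSION = "2017-04-19"

def recommendations_url(subscription_id: str) -> str:
    """Build the ARM URL that lists Advisor recommendations for a subscription."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/providers/Microsoft.Advisor/recommendations?api-version={API_VERSION}"
    )

def summarize_by_category(response_body: str) -> dict:
    """Count recommendations per category (Cost, Security, ...) in a list response."""
    counts = {}
    for rec in json.loads(response_body).get("value", []):
        category = rec.get("properties", {}).get("category", "Unknown")
        counts[category] = counts.get(category, 0) + 1
    return counts

# A real call requires an Azure AD bearer token in the Authorization header;
# here we parse a minimal response of the documented shape instead.
sample = json.dumps({"value": [
    {"properties": {"category": "Cost"}},
    {"properties": {"category": "Security"}},
    {"properties": {"category": "Cost"}},
]})
print(summarize_by_category(sample))  # {'Cost': 2, 'Security': 1}
```

A ticketing integration could, for example, create one work item per recommendation and route it to the resource owner recorded in the recommendation's metadata.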

The visual above shows just one way you can use the Advisor API with your ticketing application to share Advisor recommendations with your teams. Some setup is required, but once this scenario is in place, you can start remediating your recommendations programmatically, which will save you time as you optimize your resources.

This more advanced approach scales well with the size of your deployments, so it tends to work best for larger organizations: those managing many Azure subscriptions and resources that generate a large number of recommendations, and those with a fairly sophisticated IT practice in place.

Visit the Advisor API documentation to learn more.

Get started with Advisor

Visit Advisor in the Azure portal to get started reviewing, sharing, and remediating your recommendations. For more in-depth guidance, visit the documentation. Let us know if you have a suggestion for Advisor by submitting an idea to the Azure Advisor feedback forum.
Source: Azure

Analyze AI enriched content with Azure Search’s knowledge store

Through integration with Cognitive Services APIs, Azure Search has long had the ability to extract text and structure from images and unstructured content. Until recently, this capability was used exclusively in full-text search scenarios, exemplified in demos like the JFK Files, which analyzes diverse content in JPEGs and makes it available for online search. The journey from visual, unstructured content to searchable, structured content is enabled by a feature called cognitive search. This capability in Azure Search is now extended with the addition of a knowledge store that saves enrichments for further exploration and analysis beyond search itself.

The knowledge store feature of Azure Search, available in preview, refers to a persistence layer in cognitive search that describes a physical expression of documents created through AI enrichments. Enriched documents are projected into tables or hierarchical JSON, which you can explore using any client app that is able to access Azure Storage. In Azure Search itself, you define the physical expression or shape of the projections in the knowledge store settings within your skillset.

Customers are using a knowledge store (preview) in diverse ways, such as validating the structure and accuracy of enrichments, generating training data for AI models, and performing ad-hoc analysis of their data.

For example, the Metropolitan Museum of Art opened access to all images of public domain works in its collection. Enriching the artworks with cognitive search and the knowledge store allowed us to explore the latent relationships within the artworks on different dimensions like time and geography. Questions like how have images of family groups changed over time, or when were domestic animals included in paintings, are now answerable when you are able to identify, extract, and save the information in a knowledge store (preview).

With the knowledge store, anyone with an Azure subscription can apply AI to find patterns, insights, or create dashboards over previously inaccessible content.

What is the knowledge store (preview)?

Cognitive search is the enrichment of documents with AI skills before they are added to your search index. The knowledge store allows you to project the already enriched documents as objects (blobs) in JSON format or tabular data in table storage.

As part of your projection, you can shape the enriched document to meet your needs. This ensures that the projected data aligns with your intended use.

When using tabular projections, a knowledge store (preview) can project your documents to multiple tables while preserving the relationships between the data projected across tables. The knowledge store has several other features like allowing you to save multiple unrelated projections of your data. You can find more information about a knowledge store (preview) in the overview documentation.
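As a rough sketch of what a multi-table projection looks like in the skillset's knowledge store settings (table names, key names, and source paths below are hypothetical, chosen to echo the art-collection example), a configuration might resemble:

```json
"knowledgeStore": {
  "storageConnectionString": "<Azure Storage connection string>",
  "projections": [
    {
      "tables": [
        {
          "tableName": "Artworks",
          "generatedKeyName": "ArtworkId",
          "source": "/document/artworkShape"
        },
        {
          "tableName": "ArtworkEntities",
          "generatedKeyName": "EntityId",
          "source": "/document/artworkShape/entities/*"
        }
      ],
      "objects": []
    }
  ]
}
```

Tables defined in the same projection group share generated keys, which is how the relationships between rows across tables are preserved.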

Data visualization and analytics

Search enables you to find relevant documents, but when you’re looking to explore your data for corpus-wide aggregations or want to visualize changes over time, you need your data represented in a form other than a search index.

Leveraging Power BI’s integration with Azure tables gets your dashboard started with only a few clicks. To identify insights from the enriched documents over dimensions like time or space, simply project your enriched documents into tables, validate that Power BI recognizes the relationships, and you should have your data in a format that is ready to consume within the visuals.

When you create a visual, any filters work, even when your data spans related tables. As an example, the art dashboard was created on the open access data from the MET in the knowledge store and the Art Explorer site uses the search index generated from the same set of enrichments.

The Art Explorer site allows you to find artworks and related works, while the Power BI report gives you a visual representation of the corpus and allows you to slice your data along different dimensions. You can now answer questions like “How did body armor evolve over time?”

In this example, a knowledge store (preview) enabled us to perform ad-hoc analysis of the data. In another scenario, we might enrich invoices or business forms, project the structured data to a knowledge store (preview), and then create a business-critical report.

Improving AI models

A knowledge store (preview) can also help improve the cognitive search experience itself by serving as a data source for training AI models deployed as custom skills within the enrichment pipeline. Customers deploying an AI model as a custom skill can project a slice of the enriched data, shaped to be the source of their machine learning (ML) pipelines. A knowledge store (preview) then serves as a validator of the custom skill as well as a source of new data that can be manually labeled to retrain the model.

While the enrichment pipeline operates on each document individually, corpus-level skills like clustering require a set of documents to act on. A knowledge store (preview) can operate on the entire corpus to further enrich documents with skills like clustering, and save the results back to the knowledge store (preview) or update the documents in the index.

Getting started

To start using a knowledge store (preview) you will need to:

1. Add a knowledge store (preview) configuration to your skillset.
2. Optionally, add a Shaper skill to the skillset to define the shape of the projected enrichment.
3. Add a projection for tables, objects, or both to the knowledge store (preview). You may project the output of the Shaper skill, or elements from the enriched document directly.
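Putting those steps together, a skillset that shapes the enriched document and projects it to both a table and a blob container might look roughly like the sketch below. Skill inputs, names, and paths are illustrative assumptions, not a definitive configuration; consult the knowledge store documentation for the exact schema and current preview api-version.

```json
{
  "name": "my-skillset",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Util.ShaperSkill",
      "context": "/document",
      "inputs": [
        { "name": "text", "source": "/document/content" },
        { "name": "keyPhrases", "source": "/document/keyPhrases/*" }
      ],
      "outputs": [
        { "name": "output", "targetName": "projectionShape" }
      ]
    }
  ],
  "knowledgeStore": {
    "storageConnectionString": "<Azure Storage connection string>",
    "projections": [
      {
        "tables": [
          {
            "tableName": "Documents",
            "generatedKeyName": "DocumentId",
            "source": "/document/projectionShape"
          }
        ],
        "objects": []
      },
      {
        "tables": [],
        "objects": [
          { "storageContainer": "enriched-docs", "source": "/document/projectionShape" }
        ]
      }
    ]
  }
}
```

Here the Shaper skill produces a single node, `projectionShape`, that both the table and object projections consume, which keeps the two physical expressions of the data consistent.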

A knowledge store (preview) enables the use of your enriched data in new or improved models, visualization and exploration of the data in tools like Power BI, and app-based experiences merging the raw and enriched data. We will continue to add more capabilities and updates over the coming months.

For a detailed walkthrough, see the knowledge store (preview) getting started guide.
Source: Azure

Cloud TPU Pods break AI training records

Google Cloud’s AI-optimized infrastructure makes it possible for businesses to train state-of-the-art machine learning models faster, at greater scale, and at lower cost. These advantages enabled Google Cloud Platform (GCP) to set three new performance records in the latest round of the MLPerf benchmark competition, the industry-wide standard for measuring ML performance.

All three record-setting results ran on Cloud TPU v3 Pods, the latest generation of supercomputers that Google has built specifically for machine learning. These results showcased the speed of Cloud TPU Pods, with each of the winning runs using less than two minutes of compute time.

AI-optimized infrastructure

With these latest MLPerf benchmark results, Google Cloud is the first public cloud provider to outperform on-premise systems when running large-scale, industry-standard ML training workloads of Transformer, Single Shot Detector (SSD), and ResNet-50. In the Transformer and SSD categories, Cloud TPU v3 Pods trained models over 84% faster than the fastest on-premise systems in the MLPerf Closed Division.

The Transformer model architecture is at the core of modern natural language processing (NLP); for example, Transformer has enabled major improvements in machine translation, language modeling, and high-quality text generation. The SSD model architecture is widely used for object detection, which is a key part of computer vision applications including medical imaging, autonomous driving, and photo editing.

To demonstrate the breadth of ML workloads that Cloud TPUs can accelerate today, we also submitted results in the NMT and Mask R-CNN categories. The NMT model represents a more traditional approach to neural machine translation, and Mask R-CNN is an image segmentation model.

Scalable

GCP provides customers the flexibility to select the right performance and price point for all of their large-scale AI workloads. The wide range of Cloud TPU Pod configurations, called slice sizes, used in the MLPerf benchmarks illustrates how Cloud TPU customers can choose the scale that best fits their needs. A Cloud TPU v3 Pod slice can include 16, 64, 128, 256, 512, or 1024 chips, and several of our open-source reference models featured in our Cloud TPU tutorials can run at all of these scales with minimal code changes.

Get started today

Our growing Cloud TPU customer base is already seeing benefits from the scale and performance of Cloud TPU Pods. For example, Recursion Pharmaceuticals can now train in just 15 minutes on Cloud TPU Pods, compared to 24 hours on their local GPU cluster.

If cutting-edge deep learning workloads are a core part of your business, please contact a Google Cloud sales representative to request access to Cloud TPU Pods. Google Cloud customers can receive evaluation quota for Cloud TPU Pods in days instead of waiting months to build an on-premise cluster. Discounts are also available for one-year and three-year reservations of Cloud TPU Pod slices, offering businesses an even greater performance-per-dollar advantage.

Only the beginning

We’re committed to making our AI platform, which includes the latest GPUs, Cloud TPUs, and advanced AI solutions, the best place to run machine learning workloads. Cloud TPUs will continue to grow in performance, scale, and flexibility, and we will continue to increase the breadth of our supported Cloud TPU workloads (source code available).

To learn more about Cloud TPUs, please visit our Cloud TPU homepage and documentation. You can also try out a Cloud TPU for free, right in your browser, via this interactive Colab that applies a pre-trained Mask R-CNN image segmentation model to an image of your choice. You can find links to many other Cloud TPU Colabs and tutorials at the end of our recent beta announcement.

1. MLPerf v0.6 Training Closed. Retrieved from www.mlperf.org, 10 July 2019. MLPerf name and logo are trademarks. See www.mlperf.org for more information.
2. MLPerf entries 0.6-6 vs. 0.6-28, 0.6-6 vs. 0.6-27, 0.6-6 vs. 0.6-30, 0.6-5 vs. 0.6-26, 0.6-3 vs. 0.6-23, respectively.
3. MLPerf entries 0.6-3, 0.6-4, 0.6-5, 0.6-6, respectively, normalized by entry 0.6-1.
Source: Google Cloud Platform

Reducing overall storage costs with Azure Premium Blob Storage

In this blog post, we will take a closer look at pricing for Azure Premium Blob Storage, and its potential to reduce overall storage costs for some applications.

Premium Blob Storage is Azure Blob Storage powered by solid-state drives (SSDs) for block blobs and append blobs. For more information, see “Azure Premium Blob Storage is now generally available.” It is ideal for workloads that require very fast storage response times and/or have a high rate of operations. For more details on performance, see “Premium Block Blob Storage – a new level of performance.”

Azure Premium Blob Storage utilizes the same ‘pay-as-you-go’ pricing model used by standard general-purpose V2 (GPv2) hot, cool, and archive. This means customers only pay for the volume of data stored per month and the quantity of operations performed.

The current blob pricing can be found on the Azure Storage pricing page. You will see that data storage gigabyte (GB) pricing decreases for colder tiers, while the inverse is true for operation prices: per-10,000-operations pricing decreases for hotter tiers. Premium data storage pricing is higher than hot data storage pricing; however, read and write operation prices for premium are lower than hot read and write operation prices. This means Premium Blob Storage is meant to store data that is transacted upon frequently, and is not intended for storing infrequently or rarely accessed data.

Given the lower operations costs, is there a point where premium not only provides better performance but also costs less than standard (GPv2) hot?

To answer this question, I created the graph below, which shows the relative total monthly cost of storing 1 tebibyte (TiB) of data in standard (GPv2) hot and premium, varying the operations per second performed on this 1 TiB of data using a 70/30 split between read and write operations.

As you can see in the graph above, the estimated total monthly cost for premium becomes less than standard (GPv2) hot between 40 and 50 operations per second for each 1 TiB of data. This means customers with workloads that have a high rate of operations will save money by using premium, even if they do not require the better performance premium provides.
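The break-even arithmetic behind a chart like this can be sketched as follows. The per-GiB and per-10,000-operation prices below are assumed placeholder values for illustration only, not actual Azure prices; substitute current values from the Azure Storage pricing page. With these placeholders, the crossover happens to land in roughly the same 40 to 50 operations-per-second range.

```python
# Illustrative break-even sketch: premium vs. standard (GPv2) hot block blob cost.
# All prices are ASSUMED placeholders, not real Azure prices.

SECONDS_PER_MONTH = 730 * 3600  # ~730 hours per month

def monthly_cost(gib_stored: float, ops_per_second: float,
                 price_per_gib: float, read_price_per_10k: float,
                 write_price_per_10k: float, read_fraction: float = 0.7) -> float:
    """Total monthly cost = storage cost + operation cost (70/30 read/write split)."""
    ops_per_month = ops_per_second * SECONDS_PER_MONTH
    reads = ops_per_month * read_fraction
    writes = ops_per_month * (1 - read_fraction)
    return (gib_stored * price_per_gib
            + reads / 10_000 * read_price_per_10k
            + writes / 10_000 * write_price_per_10k)

# Placeholder prices: premium costs more per GiB but less per operation.
hot = lambda ops: monthly_cost(1024, ops, price_per_gib=0.018,
                               read_price_per_10k=0.004,
                               write_price_per_10k=0.05)
premium = lambda ops: monthly_cost(1024, ops, price_per_gib=0.15,
                                   read_price_per_10k=0.002,
                                   write_price_per_10k=0.02)

for ops in (10, 25, 50, 100):
    cheaper = "premium" if premium(ops) < hot(ops) else "hot"
    print(f"{ops:>4} ops/sec: hot=${hot(ops):,.2f} premium=${premium(ops):,.2f} -> {cheaper}")
```

The break-even point is where the premium storage premium (higher per-GiB price times GiB stored) equals the monthly operations savings, so it shifts with both the data volume and the read/write mix.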

Next steps

To get started with Premium Blob Storage, you provision a ‘Block Blob’ storage account in your subscription and start creating containers and blobs using the existing Blob service REST API or tools such as AzCopy or Azure Storage Explorer.
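As a sketch, provisioning such an account can also be done with the Azure CLI, assuming the CLI is installed and you are logged in; the account, resource group, and container names below are placeholders.

```shell
# Create a premium block blob storage account (names are placeholders).
az storage account create \
    --name mypremiumblobacct \
    --resource-group my-resource-group \
    --location westus \
    --kind BlockBlobStorage \
    --sku Premium_LRS

# Create a container in the new account (requires appropriate credentials).
az storage container create \
    --account-name mypremiumblobacct \
    --name mycontainer
```

The `BlockBlobStorage` kind with a `Premium_LRS` SKU is what selects the SSD-backed premium tier described above.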

Conclusion

We are very excited about Azure Premium Blob Storage providing low and consistent latency, and about the potential cost savings for applications with a high rate of operations. We look forward to hearing your feedback at premiumblobfeedback@microsoft.com, or feel free to share your ideas and suggestions for Azure Storage on our feedback forum. To learn more about Azure Blob Storage, please visit our product page.
Source: Azure