Cloud Codec: Xilinx acquires NGCodec

Xilinx is expanding. To that end, the company is acquiring a video-encoding specialist whose solutions already run on Xilinx FPGA platforms. As things stand, nothing changes in the medium-term plans. (Xilinx, Processor)
Source: Golem

Azure Cost Management updates – June 2019

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less.

Here are the improvements that we'll be looking at today, all based on your feedback:

Reservation and marketplace purchases for Enterprise Agreements and AWS
Forecasting your Azure and AWS costs
Standardizing cost and usage terminology for Enterprise Agreements and Microsoft Customer Agreements
Keeping an eye on costs across subscriptions with management group budgets
Updating your dashboard tiles
Expanded availability of resource tags in cost reporting
The new Cost Management YouTube channel

Let's dig into the details.

 

Reservation and marketplace purchases for Enterprise Agreements and AWS

Effective cost management starts by getting all your costs into a single place with a single taxonomy. Now, with the addition of reservation and marketplace purchases, you have a more complete picture of your Enterprise Agreements (EA) for Azure and AWS costs, and can track large reservation costs back to the teams using the reservation benefit. Breaking reservation purchases down will simplify cost allocation efforts, making it easier than ever to manage internal chargeback.

Start by opening cost analysis and changing scope to your EA billing account, AWS consolidated account, or a management group which spans both. You'll notice four new grouping and filtering options to break down and drill into costs:

Charge type indicates which costs are from usage, purchases, and refunds.
Publisher type indicates which costs are from Azure, AWS, and marketplace. Marketplace costs include all clouds. Use Provider to distinguish between the total Azure and AWS costs, and first and third-party costs.
Reservation specifies what the reservation costs are associated with, if applicable.
Frequency indicates which costs are usage-based, one-time fees, or recurring charges.

By default, cost analysis shows your actual cost as it is on your bill. This is ideal for reconciling your invoice, but results in visible spikes from large purchases. This also means usage against a reservation will show no cost, since it was prepaid, and subscription and resource group readers won't have any visibility into their effective costs. This is where amortization comes in.

Switch to the amortized cost view to break down reservation purchases into daily chunks and spread them over the duration of the reservation term. As an example, instead of seeing a $365 purchase on January 1, you will see a $1 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated and associated with the specific resources which used the reservation. For example, if that $1 daily charge is split between two virtual machines, you'll see two $0.50 charges for the day. If part of the reservation is not utilized for the day, you'll see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a new charge type titled UnusedReservation.
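The amortization arithmetic above is easy to sketch. This is an illustrative calculation only, not a Cost Management API; the function name and the even-split assumption are ours:

```python
from datetime import date

def daily_amortized_cost(purchase_cost, term_start, term_end):
    """Spread a reservation purchase evenly over its term,
    as the amortized cost view does."""
    days = (term_end - term_start).days + 1  # inclusive of both endpoints
    return purchase_cost / days

# A $365 one-year reservation bought on January 1, 2019:
daily = daily_amortized_cost(365.0, date(2019, 1, 1), date(2019, 12, 31))
print(daily)  # 1.0

# That daily $1 is then split across the resources that used the
# reservation, e.g. two VMs sharing it equally get $0.50 each.
per_vm = daily / 2
```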

As an added bonus, subscription, resource group, and AWS linked account readers can also see their effective costs by viewing amortized costs. They won't be able to see the purchases, which are only visible on the billing account, but they can see their discounted cost based on the reservation.

To build a simple chargeback report, switch to amortized cost, select no granularity to view the total costs for the period, group by resource group, and change to table view. Then, download the data to Excel or CSV for offline analysis or to merge with your own data.
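Once downloaded, the CSV can be aggregated offline with a few lines of code. A minimal sketch using only the standard library; the column names (ResourceGroup, CostInBillingCurrency) are assumptions, so check the headers in your actual export:

```python
import csv
import io
from collections import defaultdict

def chargeback_by_resource_group(csv_text):
    """Sum cost per resource group from an exported cost CSV."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Column names vary by export format; these are assumptions.
        totals[row["ResourceGroup"]] += float(row["CostInBillingCurrency"])
    return dict(totals)

sample = """ResourceGroup,CostInBillingCurrency
rg-web,0.50
rg-web,0.25
rg-data,1.00
"""
print(chargeback_by_resource_group(sample))
```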

If you need to automate getting cost data, you have two options. Use the Query API for rich analysis with dynamic filtering, grouping, and aggregation, or use the UsageDetails API for the full, unaggregated cost and usage data. Note that UsageDetails is only available for Azure scopes. The general availability (GA) version of these APIs is 2019-01-01, but you'll want to use 2019-04-01-preview to include reservation and marketplace purchases.

As an example, let's get an aggregated view of amortized costs broken down by charge type, publisher type, resource group (left empty for purchases), and reservation (left empty if not applicable).

POST https://management.azure.com/{scope}/providers/Microsoft.CostManagement/query?api-version=2019-04-01-preview
Content-Type: application/json

{
  "type": "AmortizedCost",
  "timeframe": "Custom",
  "timePeriod": { "from": "2019-06-01", "to": "2019-06-30" },
  "dataset": {
    "granularity": "None",
    "aggregation": {
      "totalCost": { "name": "PreTaxCost", "function": "Sum" }
    },
    "grouping": [
      { "type": "dimension", "name": "ChargeType" },
      { "type": "dimension", "name": "PublisherType" },
      { "type": "dimension", "name": "Frequency" },
      { "type": "dimension", "name": "ResourceGroup" },
      { "type": "dimension", "name": "SubscriptionName" },
      { "type": "dimension", "name": "SubscriptionId" },
      { "type": "dimension", "name": "ReservationName" },
      { "type": "dimension", "name": "ReservationId" }
    ]
  }
}
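If you're scripting this call, the request above can be built programmatically. A minimal sketch (the function name is ours, and the actual POST requires an Azure AD bearer token, which is omitted here):

```python
import json

API_VERSION = "2019-04-01-preview"

def build_amortized_query(scope, date_from, date_to):
    """Build the URL and JSON body for the Query API request shown above."""
    url = (f"https://management.azure.com/{scope}"
           f"/providers/Microsoft.CostManagement/query"
           f"?api-version={API_VERSION}")
    body = {
        "type": "AmortizedCost",
        "timeframe": "Custom",
        "timePeriod": {"from": date_from, "to": date_to},
        "dataset": {
            "granularity": "None",
            "aggregation": {
                "totalCost": {"name": "PreTaxCost", "function": "Sum"}
            },
            "grouping": [
                {"type": "dimension", "name": n}
                for n in ("ChargeType", "PublisherType", "Frequency",
                          "ResourceGroup", "SubscriptionName",
                          "SubscriptionId", "ReservationName",
                          "ReservationId")
            ],
        },
    }
    return url, json.dumps(body)

# POST `body` to `url` with a bearer token, e.g. via the `requests` library.
url, body = build_amortized_query(
    "subscriptions/00000000-0000-0000-0000-000000000000",
    "2019-06-01", "2019-06-30")
```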

And if you don't need the aggregation and prefer the full, raw dataset for Azure scopes:

GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?metric=AmortizedCost&$filter=properties/usageStart+ge+'2019-06-01'+AND+properties/usageEnd+le+'2019-06-30'&api-version=2019-04-01-preview

If you need actual costs to show purchases as they appear on your bill, simply change the type or metric to ActualCost. For more information about these APIs, refer to the Query and UsageDetails API documentation. The published docs show the GA version, but both work the same for the 2019-04-01-preview API version, aside from the new type/metric attribute.
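Large scopes return usage details in pages. A sketch of following the pagination; it assumes each response carries its records under a "value" array and a "nextLink" URL while more pages remain, and takes the HTTP-fetching function as a parameter so authentication stays out of scope:

```python
def iter_usage_details(first_url, fetch):
    """Yield usage records across all pages of a paged response.

    `fetch(url)` must return the parsed JSON of one response. The
    'value'/'nextLink' shape is an assumption about the payload."""
    url = first_url
    while url:
        page = fetch(url)
        yield from page.get("value", [])
        url = page.get("nextLink")  # absent on the last page

# Example with a stubbed fetch standing in for an authenticated HTTP GET:
pages = {
    "page1": {"value": [{"id": 1}, {"id": 2}], "nextLink": "page2"},
    "page2": {"value": [{"id": 3}]},
}
records = list(iter_usage_details("page1", pages.__getitem__))
print(len(records))  # 3
```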

Note that Cost Management APIs work across all scopes above resources. Namely, resource group, subscription, and management group via Azure role-based access control (RBAC); EA billing accounts (enrollments), departments, and enrollment accounts via EA portal access; and AWS consolidated and linked accounts via Azure RBAC. To learn more about scopes, including how to determine your scope ID or manage access, see our documentation "Understand and work with scopes."

Support for reservation and marketplace purchases is currently available in preview in the Azure portal, but will roll out globally in the coming weeks. In the meantime, please check it out and let us know if you have any feedback.

 

Forecasting your Azure and AWS costs

History teaches us a lot, and knowing where you've been is critical to understanding where you're going. This is no less true when it comes to managing costs. You may start with historical costs to understand application and organization trends, but to really get into a healthy, optimized state, you need to plan for the future. Now you can with Cost Management forecasts.

Check your forecasted costs in cost analysis to anticipate and visualize cost trends, and proactively take action to avoid budget or credit overages on any scope, from a single application in a resource group, to an entire subscription or billing account, up to higher-level management groups spanning both Azure and AWS resources. Learn about connecting your AWS account in last month's wrap-up.

Cost Management forecasts are in preview in the Azure portal, and will roll out globally in the coming weeks. Check it out and let us know what you'd like to see next.

 

Standardizing cost and usage terminology for Enterprise Agreement and Microsoft Customer Agreement

Depending on whether you use a pay-as-you-go (PAYG), Enterprise Agreement (EA), Cloud Solution Provider (CSP), or Microsoft Customer Agreement (MCA) account, you may be used to different terminology. These differences are minor and won't impact your ability to understand and break down your bills, but they do introduce a challenge as your organization grows and needs a more holistic cost management solution spanning multiple account types. With the addition of AWS and the eventual migration of PAYG, EA, and CSP accounts into MCA, this becomes even more important. In an effort to streamline the transition to MCA at your next EA renewal, Cost Management now uses new column and property names that align to MCA terminology. Here are the primary differences you can expect to see for EA accounts:

EnrollmentNumber → BillingAccountId/BillingProfileId

EA enrollments are represented as "billing accounts" within the Azure portal today, and they will continue to be mapped to a BillingAccountId within the cost and usage data. No change there. MCA also introduces the ability to create multiple invoices within a billing account. The configuration of these invoices is called a "billing profile". Since an EA can only have a single invoice, the enrollment effectively maps to a billing profile. In line with that conceptual model, the enrollment number will be available as both a BillingAccountId and BillingProfileId.

DepartmentName → InvoiceSectionName

MCA has a concept similar to EA departments, which allows you to group subscriptions within the invoice. These are called "invoice sections" and are nested under a billing profile. While the EA invoice isn't changing as part of this effort, EA departments will be shown as InvoiceSectionName within the cost data for consistency.

ProductOrderName (new)

New property to identify the larger product the charge applies to, like the Azure subscription offer.

PublisherName (new)

New property to indicate the publisher of the offering.

ServiceFamily (new)

New property to group related meter categories.

Organizations looking to renew their EA enrollment into a new MCA should strongly consider moving from the key-based EA APIs (such as consumption.azure.com) to the latest UsageDetails API (version 2019-04-01-preview) based on these new properties to minimize future migration work. The key-based APIs are not supported for MCA billing accounts.
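If you have tooling built against the EA-era column names, the rename is mechanical. A hypothetical normalization sketch based on the mapping described above (the function name and record shape are ours):

```python
# Legacy EA column names mapped to their MCA-aligned equivalents,
# per the terminology changes described above. Note EnrollmentNumber
# also surfaces as BillingAccountId.
EA_TO_MCA = {
    "EnrollmentNumber": "BillingProfileId",
    "DepartmentName": "InvoiceSectionName",
}

def normalize_record(record):
    """Return a copy of a usage record with EA-era keys renamed."""
    return {EA_TO_MCA.get(key, key): value for key, value in record.items()}

print(normalize_record({"EnrollmentNumber": "12345678",
                        "DepartmentName": "Engineering",
                        "MeterCategory": "Storage"}))
```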

To learn more about the new terminology, see our documentation "Understand the terms in your Azure usage and charges file."

 

Keeping an eye on costs across subscriptions with management group budgets

Every organization has a bottom line. Cost Management budgets help you make sure you don't hit yours. And now, you can create budgets that span both Azure and AWS resources using management groups.

Organize subscriptions into management groups, and use filters to perfectly tune the budget that's right for your teams.

To learn more, see our tutorial "Create and manage budgets."

 

Updating your dashboard tiles

You already know you can pin customized views of cost analysis to the dashboard.

You may have noticed these tiles were locked to the specific date range you selected when pinning it. For instance, if you chose to view this month's costs in January, the tile would always show January, even in February, March, and so on. This is no longer the case.

Cost analysis tiles now maintain the built-in range you selected in the date picker. If you pin "this month," you'll always get the current calendar month. If you pin "last 7 days," you'll get a rolling view of the last 7 days. If you select a custom date range, however, the tile will always show that specific date range.

To get the updated behavior, please update your pinned tiles. Simply click the chart on the tile to open cost analysis, select the desired date range, and pin it back to the dashboard. Your new tile will always keep the exact view you selected.

What else would help you build out your cost dashboard? Do you need other date ranges? Let us know.

 

Expanded availability of resource tags in cost reporting

Tagging is the best way to organize and categorize your resources outside of the built-in management group, subscription, and resource group hierarchy, allowing you to add your own metadata and build custom reports using cost analysis. While most Azure resources support tags, some resource types do not. Here are the latest resource types which now support tags:

App Service environments
Data Factory services
Event Hub namespaces
Load balancers
Service Bus namespaces

Remember that tags are part of every usage record and are only available in Cost Management reporting after the tag is applied. Historical costs are not tagged, so update your resources today for the best cost reporting.

 

The new Cost Management YouTube channel

Last month, we talked about eight new quickstart videos to get you up and running with Cost Management quickly. Subscribe to the new Azure Cost Management YouTube channel to stay in the loop with new videos as they're released. Here's the newest video in our cost optimization collection:

Five tips to help you save money and manage costs with Azure

Let us know what other topics you'd like to see covered.

 

What's next?

These are just a few of the big updates from the last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming! 

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.
Source: Azure

Azure.Source – Volume 89

Dear Azure fans, Azure.Source is going on hiatus. Thank you for reading each week and be sure to follow @Azure for updates and new ways to learn more.

Now available

Announcing the general availability of Azure premium files

We are excited to announce the general availability of Azure premium files for customers optimizing their cloud-based file shares on Azure. Premium files offers a higher level of performance built on solid-state drives (SSD) for fully managed file services in Azure.

Premium tier is optimized to deliver consistent performance for IO-intensive workloads that require high throughput and low latency. Premium file shares store data on the latest SSDs, making them suitable for a wide variety of workloads like databases, persistent volumes for containers, home directories, content and collaboration repositories, media and analytics, highly variable and batch workloads, and enterprise applications that are performance sensitive. Our existing standard tier continues to provide reliable performance at a low cost for workloads less sensitive to performance variability, and is well-suited for general purpose file storage, development/test, backups, and applications that do not require low latency.

Leveraging complex data to build advanced search applications with Azure Search

Data is rarely simple. Not every piece of data we have can fit nicely into a single Excel worksheet of rows and columns. Data has many diverse relationships, such as the multiple locations and phone numbers for a single customer, or the multiple authors and genres of a single book. Of course, relationships typically are even more complex than this, and as we start to leverage AI to understand our data, the additional learnings we get only add to the complexity of relationships. For that reason, expecting customers to flatten their data so it can be searched and explored is often unrealistic. We heard this often, and it quickly became our number one most requested Azure Search feature. Because of this, we were excited to announce the general availability of complex types support in Azure Search. In this post, we explain what complex types support adds to Azure Search and the kinds of things you can build using this capability.

Azure Blockchain Workbench 1.7.0 integration with Azure Blockchain Service

Announcing the release of Microsoft Azure Blockchain Workbench 1.7.0, which, along with our new Azure Blockchain Service, can further enhance your blockchain development and projects. You can deploy a new instance of Blockchain Workbench through the Azure portal or upgrade your existing deployments to 1.7.0 using the upgrade script. This update includes improvements such as integration with Azure Blockchain Service and enhanced compatibility with Quorum.

New PCI DSS Azure Blueprint makes compliance simpler

Announcing our second Azure Blueprint for an important compliance standard with the release of the PCI-DSS v3.2.1 blueprint. The new blueprint maps a core set of policies for Payment Card Industry (PCI) Data Security Standards (DSS) compliance to any Azure deployed architecture, allowing businesses such as retailers to quickly create new environments with compliance built in to the Azure infrastructure. Azure Blueprints is a free service that enables customers to define a repeatable set of Azure resources that implement and adhere to standards, patterns, and requirements. Azure Blueprints allow customers to set up governed Azure environments that can scale to support production implementations for large-scale migrations.

Now in preview

Event-driven analytics with Azure Data Lake Storage Gen2

Announcing that Azure Data Lake Storage Gen2 integration with Azure Event Grid is in preview. This means that Azure Data Lake Storage Gen2 can now generate events that can be consumed by Event Grid and routed to subscribers with webhooks, Azure Event Hubs, Azure Functions, and Logic Apps as endpoints. With this capability, individual changes to files and directories in Azure Data Lake Storage Gen2 can automatically be captured and made available to data engineers for creating rich big data analytics platforms that use event-driven architectures.

Technical content

How to deploy your machine learning models with Azure Machine Learning

Azure Machine Learning service is a cloud service that you use to train, deploy, automate, and manage machine learning models, all at the broad scale that the cloud provides. The service fully supports open-source technologies such as PyTorch, TensorFlow, and scikit-learn, and can be used for any kind of machine learning, from classical ML to deep learning, both supervised and unsupervised. In this article, you will learn how to deploy your machine learning models with Azure Machine Learning.

Azure Cloud Shell Tips for SysAdmins Part II – Using the Cloud Shell tools to Migrate

In the last blog post, Azure Cloud Shell Tips for SysAdmins (bash), the author discussed some of the tools that the Azure Cloud Shell for bash already has built into it. This time he goes deeper and shows you how to use a combination of the tools to create an UbuntuLTS Linux server. Once the server is provisioned, he demonstrates how to use Ansible to deploy Node.js from the NodeSource binary repository.

Step-By-Step: Migrating The Active Directory Certificate Service From Windows Server 2008 R2 to 2019

End of support for Windows Server 2008 R2 has been slated by Microsoft for January 14th 2020.  Said announcement increased interest in a previous post detailing steps on Active Directory Certificate Service migration from server versions older than 2008 R2.  Many subscribers of ITOpsTalk.com have reached out asking for an update of the steps to reflect Active Directory Certificate Service migration from 2008 R2 to 2016 / 2019 and of course our team is happy to oblige.

Home Grown IoT – Local Dev

Now that we’re starting to build our IoT application it’s time to start talking about the local development experience for the application. At the end of the day I use IoT Edge to do the deployment onto the device and manage the communication with IoT Hub and there is a very comprehensive development guide for Visual Studio Code and Visual Studio 2019. The workflow of this is to create a new IoT Edge project, setup IoT Edge on your machine and do deployments to it that way. This is the way I’d recommend going about it yourself as it gives you the best replication of production and local development.

Delivering static content via Azure CDN | Azure Friday

In one of the prior episodes, we learned how to serve a static website from Azure's blob storage. This is great for a low-volume website. As your site starts getting more hits, you want to deliver the content closer to the end user. In this episode, we will learn how to deliver static content via Azure Content Delivery Network (CDN). Azure CDN offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world.

Azure shows

Deploy your web app in Windows containers on Azure App Service | Azure Friday

Windows Container support is available in preview in Azure App Service. By deploying applications via Windows Containers in Azure App Service you can install your dependencies inside the container, call APIs currently blocked by the Azure App Service sandbox and use the power of containers to migrate applications for which you no longer have the source code. All of this and you still get to use the awesome feature set enabled by Azure App Service such as auto-scale, deployment slots and increased developer productivity.

Using open data to build family trees | The Open Source Show

Erica Joy joins Ashley McNamara to share her not-so-secret personal mission: making genealogy information open, queryable, and easily parsable. She shares a bit about why this is so critical, common challenges, and tips for rebuilding your own family tree, or using open data to uncover whatever information you need for your personal mission.

Supporting Windows forms and WPF in .NET Core 3 | On .NET

There is significant effort happening to add support for running desktop applications on .NET Core 3.0. In this episode, Jeremy interviews Mike Harsh about some of the work being done and decisions being made to enable Windows Forms and WPF applications to run well on .NET Core 3.0 and beyond.

Five things about RxJS and reactive programming | Five Things

Where do RxJS, reactive programming, and the Redux pattern fit into your developer workflow? Where can you learn from the community leaders? Does wearing a hoodie make you a better developer? Oh, and remember, go to RxJS Live and drinks are on Aaron!

How to use the Global Search in the Azure portal | Azure Portal Series

In this video of the Azure Portal “How To” Series, you will learn how to find Azure services, resources, documentation, and more using the Global Search in the Azure portal.

Episode 285 – The Azure Journey | The Azure Podcast

Sujit, Kendall, and Cynthia talk with the one and only Richard Campbell about how to tell the cloud story, the conversations to have with customers as they enter the cloud, and the implications of a globally distributed cloud that need to be considered. Probably one of our favorite shows.


Industries and partners

Solving the problem of duplicate records in healthcare

As the U.S. healthcare system continues to transition away from paper to a more digitized ecosystem, the ability to link an individual's medical data together correctly becomes increasingly challenging. Patients move, marry, divorce, change names, and visit multiple providers throughout their lifetime, with each visit creating new records, and the potential for inconsistent or duplicate information grows. Duplicate medical records often occur as a result of multiple name variations, data entry errors, and lack of interoperability, or communication, between systems. Poor patient identification and duplicate records in turn lead to diagnosis errors, redundant medical tests, skewed reporting and analytics, and billing inaccuracies. The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we will describe how one Microsoft partner, NextGate, uses Azure to solve a unique problem.

A solution to manage policy administration from end to end

Legacy systems can be a nightmare for any business to maintain. In the insurance industry, carriers struggle not only to maintain these systems but to modify and extend them to support new business initiatives. The insurance business is complex; every state and nation has its own unique set of rules, regulations, and demographics. Creating new products such as an automobile policy has traditionally required the coordination of many different processes, systems, and people. The monolithic systems traditionally used to create new products are inflexible, and creating a new product can be an expensive proposition. The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner, Sunlight Solutions, uses Azure to solve a unique problem.

Using natural language processing to manage healthcare records

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how SyTrue, a Microsoft partner focusing on healthcare uses Azure to empower healthcare organizations to improve efficiency, reduce costs, and improve patient outcomes.

Azure Cosmos DB: A competitive advantage for healthcare ISVs

CitiusTech is a specialist provider of healthcare technology services which helps its customers accelerate innovation in healthcare. CitiusTech used Azure Cosmos DB to simplify the real-time collection and movement of healthcare data from a variety of sources in a secure manner. With the proliferation of patient information from established and current sources, accompanied by stringent regulations, healthcare systems today are gradually shifting toward near real-time data integration.
Source: Azure

Introducing the Jenkins GKE Plugin—deploy software to your Kubernetes clusters

Jenkins is one of the most widely used tools for automating software build, test, and deployment. Kubernetes, meanwhile, is an increasingly popular deployment target for those workloads. While it's already possible to run Jenkins on Google Kubernetes Engine (GKE) clusters, it's harder to manage robust deployment strategies for your workloads that run on Kubernetes. Today, we are excited to announce the availability of the Jenkins Google Kubernetes Engine (GKE) Plugin, which provides a build step that streamlines deploying workloads to GKE clusters across GCP projects.

After providing credentials and configuration to the plugin, it will do the following during your Jenkins job:

Download ephemeral credentials for your target GKE cluster
Use kubectl to apply the Kubernetes resources in your workspace
Wait for the number of replicas you have defined in your Deployment specification to reach the healthy state

Getting started with the Jenkins GKE plugin is easy. First, provide a single set of credentials to the plugin to discover the GKE clusters across your GCP projects. Then, after choosing a project and cluster, configure the path to the manifests in the Jenkins workspace from which you'd like to deploy. You can also optionally define a namespace to deploy your manifests to.

While many deployment mechanisms fire off a kubectl command and hope that Kubernetes realizes their changes successfully, this can lead to many false positives as deployments fail to reach the healthy state. You can configure the Jenkins GKE Plugin to wait for your deployment to enter the desired state by checking the "Verify Deployments" option. For each Deployment manifest that is applied to the cluster, the plugin polls the deployment to ensure that the number of healthy pods matches the requested minimum number of healthy replicas.
In the future we hope to add more of this type of logic to verify other types of resources.

Getting started using the graphical interface like we do with the build step configuration above can speed up your initial exploration of the plugin, providing some guard rails and a more intuitive user experience. But in most cases you'll want to define your application deployment processes in code so that changes can be reviewed, audited, and approved. Thankfully, Jenkins provides the Pipeline syntax that lets you define your build, test, and release process in a file alongside your source code. Below is an example pipeline that defines a simple rollout process that deploys to a staging cluster, waits for a manual approval from someone in the "sre-approvers" group, and then finally deploys to production.

Now that you've seen some of the features of the Jenkins GKE plugin, go ahead and install it. Head over to the Jenkins Plugin Manager and search the available plugins for "Google Kubernetes Engine Plugin" to install the latest version. For more information on how to configure the plugin, check out the documentation.

We'd love your feedback and contributions:

Visit our GitHub repo to let us know how we can make this plugin even better
Chat with us on the GCP Community Slack in the #gcp-jenkins channel

More about Jenkins on GCP

We've released a number of Jenkins plugins that make running continuous integration and continuous delivery workloads on Google Cloud even easier:

Use the Google Cloud Storage Plugin to store your build artifacts
Use the Google Compute Engine Plugin to dynamically create Jenkins agents that match your utilization
Use the Google OAuth Plugin to store GCP service account credentials in the Jenkins credentials store

We also have the following tutorials to help you get up to speed with Jenkins on GCP:

Setting up Jenkins on Kubernetes Engine
Continuous Deployment with Jenkins on Kubernetes Engine
Distributed Builds with Jenkins on Google Compute Engine
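The staged rollout described above (staging deploy, manual approval from "sre-approvers", production deploy) might look like the following declarative Pipeline sketch. This is a hypothetical illustration, not the post's original example: the project ID, cluster names, location, manifest path, and credentials ID are all assumptions, and the exact build-step parameters should be checked against the plugin's documentation:

```groovy
// Hypothetical Jenkinsfile sketch; names and parameters are assumptions.
pipeline {
    agent any
    stages {
        stage('Deploy to staging') {
            steps {
                step([$class: 'KubernetesEngineBuilder',
                      projectId: 'my-gcp-project',
                      clusterName: 'staging-cluster',
                      location: 'us-central1-a',
                      manifestPattern: 'k8s/',
                      credentialsId: 'my-gcp-credentials',
                      verifyDeployments: true])  // wait for healthy replicas
            }
        }
        stage('Approval') {
            steps {
                // Pause until someone in the sre-approvers group signs off.
                input message: 'Promote to production?',
                      submitter: 'sre-approvers'
            }
        }
        stage('Deploy to production') {
            steps {
                step([$class: 'KubernetesEngineBuilder',
                      projectId: 'my-gcp-project',
                      clusterName: 'prod-cluster',
                      location: 'us-central1-a',
                      manifestPattern: 'k8s/',
                      credentialsId: 'my-gcp-credentials',
                      verifyDeployments: true])
            }
        }
    }
}
```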
Source: Google Cloud Platform