Build your own deep learning models on Azure Data Science Virtual Machines

As a modern developer, you may be eager to build your own deep learning models but aren’t quite sure where to start. If this is you, I recommend you take a look at the deep learning course from fast.ai, which helps software developers start building their own state-of-the-art deep learning models. Developers who complete the course will become proficient in deep learning techniques in multiple domains, including computer vision, natural language processing, recommender algorithms, and tabular data.

You’ll also want to learn about Microsoft’s Azure Data Science Virtual Machine (DSVM). Azure DSVM empowers developers like you with the tools you need to be productive with this fast.ai course today on Azure, with virtually no setup required. Using fast cloud-based GPU virtual machines (VMs), at the most competitive rates, Azure DSVM saves you time that would otherwise be spent in installation, configuration, and waiting for deep learning models to train.

Here is how you can effectively run the fast.ai course examples on Azure.

Running the fast.ai deep learning course on Azure DSVM

While there are several ways in which you can use Azure for your deep learning course, one of the easiest ways is to leverage Azure Data Science Virtual Machine (DSVM). Azure DSVM is a family of virtual machine (VM) images that are pre-configured with a rich curated set of tools and frameworks for data science, deep learning, and machine learning.

Using Azure DSVM, you get tools like Jupyter notebooks and the necessary drivers to run on powerful GPUs. As a result, you save time that would otherwise be spent installing, configuring, and troubleshooting compatibility issues on your system. Azure DSVM is offered in both Linux and Windows editions. Azure VMs provide a neat extension mechanism that the DSVM can leverage, allowing you to automatically configure your VM to your needs.

Microsoft provides an extension to the DSVM specifically for the fast.ai course, making the process so simple that you can answer a couple of questions and get your own instance of the DSVM provisioned in a few minutes. The fast.ai extension installs all the necessary libraries you need to run the course Jupyter notebooks and also pulls down the latest course notebooks from the fast.ai GitHub repository. So in a very short time, you’ll be ready to start running your course samples.

Getting started with Azure DSVM and fast.ai

Here’s how simple it is to get started:

1. Sign in or sign up for an Azure subscription

If you don’t have an Azure subscription, you can start off with a free trial subscription to explore any Azure service for 30 days, with access to a set of popular services free for 12 months. Please note that free trial subscriptions do not give access to GPU resources. For GPU access, you need to sign up for an Azure pay-as-you-go subscription or use the Azure credits from a Visual Studio subscription if you have one. Once you have created your subscription, you can log in to the Azure portal.

2. Create a DSVM instance with fast.ai extension

You can now create a DSVM with the fast.ai extension by selecting one of the links below. Choose one depending on whether you prefer a Windows or a Linux environment for your course.

Linux (Ubuntu) edition of DSVM with fast.ai
Windows Server 2016 edition of DSVM with fast.ai

After answering a few simple questions in the deployment form, your VM is created in about five to ten minutes, pre-configured with everything you need to run the fast.ai course. While creating the DSVM, you can choose between a GPU-based or a CPU-only instance. A GPU instance will drastically cut down execution times when training deep learning models, which is largely what the course notebooks cover, so I recommend a GPU instance. Azure also offers low-priority instances, including GPU instances, at a significant discount of as much as 80 percent on compute usage charges compared to standard instances. Keep in mind, though, that they can be preempted and deallocated from your subscription at any time depending on factors like the demand for these resources. If you want to take advantage of the deep discount, you can create a preemptible Linux DSVM instance with the fast.ai extension.
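To get a feel for what the low-priority discount means in practice, here is a quick back-of-the-envelope calculation. The hourly rate below is hypothetical; check the Azure pricing page for actual GPU VM rates in your region.

```python
# Back-of-the-envelope comparison of standard vs. low-priority pricing.
# The hourly rate is hypothetical; see the Azure pricing page for real
# GPU VM rates in your region.
standard_rate = 0.90   # assumed $/hour for a GPU instance
discount = 0.80        # low-priority discount on compute usage charges
hours = 40             # e.g., total time spent training course models

standard_cost = standard_rate * hours
low_priority_cost = standard_rate * (1 - discount) * hours

print(f"standard: ${standard_cost:.2f}")          # standard: $36.00
print(f"low-priority: ${low_priority_cost:.2f}")  # low-priority: $7.20
```

The trade-off, as noted above, is that a low-priority instance can be preempted at any time, so it suits interruptible workloads like coursework rather than long unattended training runs.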

3. Run your course notebooks

Once you have created your DSVM instance, you can immediately start using it to run all the code in the course examples by accessing Jupyter and the course notebooks that are preloaded in the DSVM.

You can find more information on how to get started with fast.ai for Azure on the course documentation page.

Next steps

You can continue your journey in machine learning and data science by taking a look at the Azure Machine Learning service, which enables you to track your experiments, use automated machine learning, build custom models, and deploy machine learning and deep learning models or pipelines to production at scale, with several sample notebooks pre-built into the DSVM. You can also find additional learning resources on Microsoft’s AI School and LearnAnalytics.

I look forward to your feedback and questions on the fast.ai forums or on Stack Overflow.
Source: Azure

Best practices to consider before deploying a network virtual appliance

A network virtual appliance (NVA) is a virtual appliance primarily focused on network functions virtualization. A typical network virtual appliance provides various layer 4 through layer 7 functions such as firewall, WAN optimization, application delivery control, routing, load balancing, IDS/IPS, proxying, SD-WAN edge, and more. While the public cloud may provide some of these functionalities natively, it is quite common to see customers deploying network virtual appliances from independent software vendors (ISVs). These capabilities in the public cloud enable hybrid solutions and are generally available through the Azure Marketplace.

What exactly is the network virtual appliance in the cloud?

A network virtual appliance is often a full Linux virtual machine (VM) image consisting of a Linux kernel together with user-level applications and services. When a VM is created, it first boots the Linux kernel to initialize the system and then starts up any application or management services needed to make the network virtual appliance functional. The cloud provider is responsible for the compute resources, while the ISV provides the image that represents the software stack of the virtual appliance.

Similar to a standard Linux distribution, the Linux kernel is integral to the NVA’s image and is provided by the ISV, often in customized form. The kernel includes the drivers needed for all network and disk devices available to the virtual machine. The version of, and customizations made to, the NVA’s kernel will often impact the performance and functionality of the virtual machine. For more information about Linux and accelerated networking, see our documentation, “Create a Linux virtual machine with Accelerated Networking.” As new networking enhancements are made to the Azure platform, such as performance improvements or even entirely new networking features, the ISV may need to update the software image to support those enhancements. Often, this entails updating their version of the Linux kernel from the upstream Linux project. For the latest updates, see the Linux Kernel Archives website.

All NVA images published in the Azure Marketplace go through rigorous testing and onboarding workflows. As part of Azure’s continuous integration and deployment life cycle, NVA images are deployed and tested in a pre-production environment for any regressions or issues. ISVs are responsible for publishing deployment guidelines and GitHub-published Azure Resource Manager (ARM) templates for their specific products. Technical and performance specifications of the appliance are owned by the ISVs, while Microsoft owns the technical and performance specifications of the host environment. Technical support for the customer’s virtual appliance, its features, recommended OS version, kernel version, and security updates is provided by the ISV.

Pricing for NVA solutions may vary based on product types and publisher specifications. Software license fees and Microsoft Azure usage costs are charged separately through the Azure subscription. Learn more by visiting our list of Marketplace FAQs related to virtual appliances and the Azure Marketplace.

Below is an example of a hybrid network that extends an on-premises network to Azure. A demilitarized zone (DMZ) represents a perimeter network between on-premises and Azure, which includes NVAs.

Another example below shows an NVA with Azure Virtual WAN. For more details on how to steer traffic from a Virtual WAN hub to a network virtual appliance, please visit our documentation, “Create a Virtual Hub route table to steer traffic to a Network Virtual Appliance.”

Common best practices

Microsoft continues to collaborate with multiple ISVs to improve the cloud experience for Microsoft customers.

Azure accelerated networking support: Consider a virtual appliance that is available on one of the VM types supporting Azure’s accelerated networking capability. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host in the datapath, reducing latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types. Accelerated networking is supported on most general-purpose and compute-optimized instance sizes with two or more vCPUs. For a list of supported operating systems and additional information, visit our documentation, “Create a Windows virtual machine with Accelerated Networking.”
Multi-NIC support: A network interface (NIC) is the interconnection between a VM and a virtual network (VNet). A VM must have at least one NIC but can have more, depending on the size of the VM you create. Learn how many NICs each VM size supports for Windows and Linux in our documentation, “Sizes for Windows virtual machines in Azure” or “Sizes for Linux virtual machines in Azure.” Many network virtual appliances require multiple NICs. With multiple NICs, you can better manage your network traffic by isolating various types of traffic across the different NICs. A good example is separating data plane traffic from management plane traffic, which requires the VM to support at least two NICs. A VM can only have as many network interfaces attached to it as its size supports. If you are considering adding a NIC after deploying the NVA, be sure to enable IP forwarding on the NIC. This setting disables Azure's check of the source and destination for a network interface. Learn more about how to enable IP forwarding for a network interface.
HA ports with Azure Load Balancer: Azure Standard Load Balancer helps you load-balance TCP and UDP flows on all ports simultaneously when you're using an internal load balancer. A high availability (HA) ports load-balancing rule is a variant of a load-balancing rule, configured on an internal Standard Load Balancer. To make your NVA reliable and highly available, simply add NVA instances to the back-end pool of your internal load balancer and configure an HA ports load-balancing rule. For more information, please visit our documentation, “High availability ports overview.”

Support for Virtual Machine Scale Sets (VMSS): Azure Virtual Machine Scale Sets let you create and manage a group of identical, load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide high availability to your applications and allow you to centrally manage, configure, and update a large number of VMs. Scale sets are built from virtual machines, with management and automation layers provided to run and scale your applications. For more information, visit our documentation, “What are virtual machine scale sets?”

As enterprises move ever more demanding mission-critical workloads to the cloud, it is important to consider comprehensive networking services that are easy to deploy, manage, scale, and monitor. We are fully committed to providing you the best network virtual appliance experience, one that delivers all the benefits of the cloud in conjunction with your network needs. Picking a virtual appliance is an important decision when you are designing your network, and we want to ensure you make it with ease of use and scale in mind.

Additional links

Support for Linux and open source technology in Azure
Deploy highly available network virtual appliances
Azure Reference Architectures

Source: Azure

Investing in our partners’ success

Today Gavriella Schuster, CVP of Microsoft’s Partner organization, spoke about our longstanding commitment to partners, and new investments to enable partners to accelerate customer success.

As we shared in our recent earnings, Azure is growing at 76 percent, driven by a combination of continued innovation, strong customer adoption across industries and a global ecosystem of talented partners. I’m inspired by partners such as Finastra, Cognata, ABB, and Egress who are working with Azure to enable digital transformation within their respective industries.

While Microsoft has long been a partner-oriented organization, some things are different with the cloud. Specifically, partners need Microsoft to be more than just a great technology provider; you need us to be a trusted business partner. This requires long-term commitment and the ability to continually adapt and innovate as the market shifts. This has been, and continues to be, our commitment. Our partnership philosophy is grounded in the foundation that we can only deliver on our mission if there is a strong and successful ecosystem around us.

In the spirit of being a trusted business partner, I wanted to highlight our key partner-oriented investments and some of the resources to help our partners successfully grow their businesses.  

Committed to growing our partners’ cloud businesses

Unlock new growth opportunities. Microsoft has sales organizations in 120 countries around the world. Our comprehensive partner co-selling program allows partners to tap into our global network to expose their solutions and services to new markets and new opportunities. Microsoft salespeople are paid to bring the best solutions to our customers, spanning both Microsoft and partner solutions.

The Azure Marketplace and AppSource digital storefronts enable customers to easily find, try, and buy the right solutions from our partners. In March, we will add new capabilities to our marketplaces that enable partners to publish to a single location and then merchandise to over 75 million Microsoft customers, thousands of Microsoft salespeople, and tens of thousands of Microsoft partners with the click of a button. This new capability further enables partners in our Cloud Solution Provider (CSP) program to create comprehensive, tailored solutions for their end customers. And this is just the beginning. More innovations are on the way, and you can view what’s coming through our Marketplaces roadmap.

“Azure Marketplace has transformed Chef’s business because it has opened up brand new channels and a new lead generation.” – Michele Todd, Chef Software

Technical resources and support whenever and wherever you need it. Whether you’re getting acquainted with Azure, or are further along in developing your solution – there are resources to help you find the answers:

Find Azure training whether online, in a classroom or at an event near you
We are committed to providing you with up-to-date documentation and transparency on the product roadmap
Technical support programs in various levels based on your need
Community forums supported by dedicated Microsoft technical experts

Cloud migration. I previously wrote about how we’re making it easy for customers to migrate their existing workloads to Azure. For our SI and managed services partners, the approaching SQL Server 2008 and Windows Server 2008 end of support also brings new opportunities to provide cloud migration, app modernization, and ongoing app management services to customers. This migration opportunity alone represents over $50B for our partners.

We’ve created the Cloud Migration and Modernization partner playbook and offer the Azure FastTrack program to help you connect with Microsoft engineers as you accelerate this practice. And available this week, new migration content will be launched on Digital Marketing Content OnDemand, a free benefit in MPN Go-to-Market Services.

An open, hybrid, and trusted platform to turn ideas into solutions faster

Build on a secure and trusted foundation. With GDPR and cybersecurity top of mind for customers, partners need a cloud partner that allows them to focus on building their solution, and not on performing security and privacy audits. Microsoft leads the industry in establishing clear security and privacy requirements and in consistently meeting these requirements. And to protect our partners’ cloud-based innovations and investments, we’ve created unique programs like the Microsoft Azure IP Advantage program which lets you leverage a portfolio of Microsoft’s patents to protect against IP infringement risks.

Flexibility to deliver hybrid cloud solutions. Azure has been developed for hybrid deployment from the ground up, providing partners the flexibility to build hybrid solutions for customers, using Windows and Linux.

Develop on any platform, with tools that you know and love. With Azure, partners can migrate existing apps to the cloud, implement Kubernetes-based architectures, or develop cloud-native apps using microservices and serverless technologies from Microsoft, our partners, and the open-source community.

New innovations to light up customer opportunities

Analytics and insights. Our customers’ hunger for better insights is creating great opportunities for partners. Azure enables customers to efficiently manage the end-to-end data analytics lifecycle. TimeXtender is helping customers speed up digital transformation by building platforms for operational data exchange (ODX) using Azure. Neal Analytics created an algorithm for retailers and consumer goods companies that makes inventory data actionable.

AI. Azure provides a comprehensive set of flexible AI services, and a thoughtful and trusted approach to AI, so partners can create AI solutions quickly and with confidence. Talview is a pioneer in using artificial intelligence (AI) and cognitive technologies to analyze video interviews in multiple formats. 

“The Talview platform was previously hosted on Amazon Web Services (AWS), but we shifted to Azure because its AI capabilities were deeper and richer for our needs.” – Sanjoe Jose, CEO, Talview

Internet of Things. Partners’ use of Azure IoT has become a key differentiator. Willow is enabling its customer thyssenkrupp Elevator to drive building insights and improvements using Azure Digital Twins, which creates virtual representations of the physical world, allowing partners to develop contextually aware solutions specific to their industries.

“Partnering with Microsoft gives us access to both the best technology platform for designing and developing innovative solutions for our clients, along with the best partner enablement organization in the industry.” – Matt Jackson, VP Services for Americas, Insight

We are thrilled to be on this journey together with you. And, if you’re new to Azure, I invite you to become an Azure partner today.
Source: Azure

Stackdriver Profiler adds more languages and new analysis features

Historically, cloud developers have had limited visibility into the impact of their code changes. Profiling non-production deployments doesn’t yield useful results, and profiling tools used in production are typically expensive, with a performance impact that means they can only be used briefly and on a small portion of the overall code base. Code that’s not performing well can add latency and slow down an application without anyone noticing. Stackdriver Profiler, part of Google Cloud’s Stackdriver monitoring and logging tool, lets you understand the performance impact of your code, down to each function call, without sacrificing speed. We’ve added new language and platform support for Profiler, along with weighted filtering and other new filters.

Profiler launched to public beta last spring, and it’s been critical when it comes to cost optimization for many developers using Google Cloud Platform (GCP). In particular, we’ve heard from customers that they like the continual insight into their code’s execution, and the cost and performance improvements that they achieve once Profiler is deployed. We’ve heard from multiple enterprise users that they’ve achieved double-digit compute savings with just over one hour of analysis with Profiler. Others discovered the sources of slow memory leaks that they’d previously been unable to identify.

Game developer Outfit7 has already achieved success with Stackdriver Profiler: “Using Stackdriver Profiler, the backend team at Outfit7 was able to analyze the memory usage pattern in our batch processing Java jobs running in App Engine Standard, identify the bottlenecks and fix them, reducing the number of OOMs from a few per day to almost zero,” says Anže Sodja, Senior Software Engineer at Outfit7.
“Stackdriver Profiler helped us to identify issues fast, as well as significantly reducing debugging time by enabling us to profile our application directly in the cloud without setting up a local testing environment.”

New features and support now available for Profiler

Since the beta release, we’ve made Stackdriver Profiler even better by adding support for more runtimes and platforms, and adding powerful new analysis features. These include:

Support for Node.js, Python (coming soon), and App Engine Standard Java
Analyzing worst-case performance
Identifying commonly called functions that have a high aggregate impact

Stackdriver Profiler launched with support for Java CPU profiling, and Go CPU and heap profiling. Since then, we’ve added instrumentation for Node.js, Python, and the App Engine Standard Java runtime.

It’s incredibly easy to get started with Profiler; at most, you’ll have to add a library to your application and redeploy. You can find setup guides for all languages and platforms in the Profiler documentation. If you have Java code deployed to App Engine Standard, getting started with Profiler is even easier. Simply enable it in your appengine-web.xml or app.yaml file, and then click on Stackdriver Profiler in the Cloud Console to see how your code is running in production.

Analyzing worst-case performance with Profiler

Many workloads can be characterized as having bursts or spikes of high resource consumption and poor performance. Stackdriver Profiler’s new weighted filtering functionality allows you to find out what’s causing these spikes so you can smooth them out and improve your customers’ experience with your applications. Understanding the average resource consumption of your code is incredibly useful during development or when you’re trying to reduce compute spend. However, this isn’t as useful when you’re trying to improve performance.
This is where Profiler’s weighted filtering feature comes in. By applying the Weight filter, you can instruct Stackdriver Profiler to only analyze telemetry captured when your application was consuming its peak amount of resources. For example, if you select “Top 10% weight” when inspecting CPU time, Profiler will only analyze data captured from periods when CPU consumption was at its top 10%. The remaining 90% of data, captured when CPU consumption was relatively lower, will be ignored.

Identifying high-aggregate impact functions

In addition, Stackdriver Profiler now includes a list of all of the functions captured within a profile, along with their total aggregate cost. The flame graph that’s currently presented by Profiler lets you quickly discover resource-hungry functions that are called from a single code path. However, it’s less helpful for identifying suboptimal functions that are called throughout your code and impact overall performance.

In one example, we used this new function list to identify a commonly called logging process that was consuming 50% of the service’s CPU consumption. The impact of this function wasn’t necessarily obvious from looking at the flame graph alone. To open this list, click on the magnifying glass button to the left of the filter bar. This will apply the Focus filter to the selected function.

Exploring filters in Profiler

Along with the Weight filter, there are other filters in Profiler that let you view details of your code to find issues.

Focus

Along with using the function list, you can access the Focus filter view by entering “Focus” into the filter bar, or from the tool tip displayed when mousing over a function in the flame graph. Focus reflows the flame graph to show all of the code paths that flow to and from the focused function, along with their relative resource consumption.
It’s great for visualizing the impact of a commonly called function or for understanding which ways a particular piece of code gets called.

Show Stacks

A stack refers to a vertical set of functions on the flame graph, which represents a call path through a code base. The Show Stacks filter presents a similar view to Focus, with a few key differences. While Focus combines all of the instances of the selected function to show the code paths that flow in and out of it, Show Stacks simply filters the view to remove any stacks that don’t contain the selected function. This is useful when you want to preserve separate instances of a specific function and don’t want to change the structure of the flame graph.

Hide Stacks

This filter is similar to Show Stacks, except it removes stacks that contain the specified function name. Hide Stacks is often useful for hiding information about uninteresting threads in Wall profiles of Java programs. For example, adding a “Hide stacks: Unsafe.park” filter is rather common.

Show From Frame

Like the Focus filter, this combines all instances of the selected function and shows the aggregated set of paths leaving that function. Unlike Focus, it does not show the code paths that lead into a function. Also unlike Focus, it can match several functions, and all of the matching functions will be shown as roots of the flame graph. This filter is useful for focusing on a subset of functions (for example, a specific library) to dive into its performance aspects. For example, adding a “Show from frame: com.example.common.stringutil” filter might be a useful way to limit the view to the string utility functions used across the code base.

Hide Frames

This filter removes functions that match the specified name.
It is commonly used to hide unimportant stack frames, such as using “Hide frames: java.util” to emphasize application code on the flame graph.

Highlight

The Highlight filter allows you to quickly identify the location of a given function in the flame graph without changing the graph itself. Think of it as Ctrl+F for function names within a profile.

Weight

As discussed earlier, this filter allows you to only analyze profiles captured when the selected resource (CPU, heap, etc.) was at peak consumption.

Let us know what you think of Profiler

We’re excited to make Stackdriver Profiler more useful for more developers, and we have some bigger announcements in the works. Until then, send your feedback via our ongoing user survey (you’ll see a notification for it at the bottom of your screen when you open Profiler) or other channels. And consider taking Profiler’s Quickstart or codelab to get started.
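For intuition only, here is a rough sketch of what a Weight-style filter and an aggregate function list compute from stack samples. The sample data and function names are invented for illustration; this is not Profiler's implementation or API.

```python
from collections import defaultdict

# Hypothetical profile samples: (CPU cost observed for the interval, call stack).
samples = [
    (5,  ["main", "handle_request", "render"]),
    (90, ["main", "handle_request", "log", "format"]),
    (8,  ["main", "poll"]),
    (95, ["main", "handle_request", "log", "format"]),
]

def top_weight(samples, fraction=0.10):
    """Weight-style filter: keep only the heaviest fraction of samples."""
    keep = max(1, int(len(samples) * fraction))
    return sorted(samples, key=lambda s: s[0], reverse=True)[:keep]

def function_totals(samples):
    """Function-list view: total aggregate cost per function across all stacks."""
    totals = defaultdict(int)
    for cost, stack in samples:
        for frame in set(stack):  # charge each function once per sample
            totals[frame] += cost
    return dict(totals)

print(top_weight(samples, 0.5))         # the two heaviest samples
print(function_totals(samples)["log"])  # 185
```

The filter isolates the spiky intervals, and the per-function totals surface functions whose aggregate cost is high even when no single flame-graph path makes them obvious.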
Source: Google Cloud Platform

Introducing six new cryptocurrencies in BigQuery Public Datasets—and how to analyze them

Since they emerged in 2009, cryptocurrencies have experienced their share of volatility—and are a continual source of fascination. In the past year, as part of the BigQuery Public Datasets program, Google Cloud released datasets consisting of the blockchain transaction history for Bitcoin and Ethereum, to help you better understand cryptocurrency. Today, we’re releasing an additional six cryptocurrency blockchains. We are also including a set of queries and views that map all blockchain datasets to a double-entry book data structure that enables multi-chain meta-analyses, as well as integration with conventional financial record processing systems.

Additional blockchain datasets

The six cryptocurrency blockchain datasets we’re releasing today are Bitcoin Cash, Dash, Dogecoin, Ethereum Classic, Litecoin, and Zcash. Five of these datasets, along with the previously published Bitcoin dataset, now follow a common schema that enables comparative analyses. We are releasing this group of Bitcoin-like datasets (Bitcoin, Bitcoin Cash, Dash, Dogecoin, Litecoin, and Zcash) together because they all have similar implementations, i.e., their source code is derived from Bitcoin’s. Similarly, we’re releasing the Ethereum Classic dataset alongside the previously published Ethereum dataset, and Ethereum Classic uses the same common schema.

A unified data ingest architecture

All datasets update every 24 hours via a common codebase, the Blockchain ETL ingestion framework (built with Cloud Composer, previously described here), to accommodate a variety of Bitcoin-like cryptocurrencies.
While this means higher latency for loading Bitcoin blocks into BigQuery, it also means that:

We are able to ingest additional BigQuery datasets with less effort, meaning additional datasets can be onboarded more quickly in the future.
We can implement a low-latency loading solution once that can be used to enable real-time streaming transactions for all blockchains.

Unified schema and views

Since we provided the original Bitcoin dataset last year, we’ve learned how users want to access data, and restructured the dataset accordingly. Some of these changes address performance and convenience concerns, yielding faster and lower-cost queries (commonly accessed nested data are denormalized; each table is partitioned by time). We’ve also included more data, such as script op-codes. Most Bitcoin transactions describe transfers of value not simply as a debit/credit pair, but rather as a series of functions that describe both simple transfers and more complex transactions.

Having these scripts available for Bitcoin-like datasets enables more advanced analyses, similar to the smart contract analyzer that Tomasz Kolinko recently built on top of the BigQuery Ethereum dataset. For example, we can now identify and report on patterns of activity involving multi-signature wallets. This is particularly important for analyzing privacy-oriented cryptocurrencies like Zcash.

For analytics interoperability, we designed a unified schema that allows all Bitcoin-like datasets to share queries. To further interoperate with Ethereum and ERC-20 token transactions, we also created some views that abstract the blockchain ledger to be presented as a double-entry accounting ledger.

Double-entry book view: example queries

To motivate an initial exploration of these new datasets, let’s start with a simple example, comparing the way to query both payments and receipts across multiple cryptocurrencies.
This comparison is the simplest way to verify that a cryptocurrency is operating as intended and, at least operationally, is a mathematically correct store of value.

1. Balance queries demonstrating preservation of value

Here are some equivalent balance queries for the Bitcoin and Dogecoin datasets. Note that the only difference between them is the name of the data location. You can swap in Bitcoin Cash, Dash, Litecoin, and Zcash in a similar fashion.

2. Understanding miner economics on Bitcoin

The BigQuery dataset makes it possible to analyze how miners are allocating space in the blocks they mine. This query shows that transaction fees on the Bitcoin network follow a Poisson distribution, confirming that there are zero-fee transactions being included in mined blocks. Given that miners are incentivized to profit from transaction fees, it begs the question: why are they including zero-fee transactions? Possible reasons include:

Miners are including their own transactions for zero fees.
Miners run transaction accelerators, i.e., off-chain services that allow transactors to pay mining fees out-of-band (typically with fiat currency) for the purpose of accelerating confirmation of transactions.

3. Understanding how often Bitcoin addresses are reused

Over 91% of addresses on the Bitcoin network have been used only once. Creating a new Bitcoin address for each inbound payment is a suggested best practice for users seeking to protect their privacy. This is because, using blockchain analytics, it is possible to identify which other addresses a given user’s wallet has transacted with and the size of the shared transactions. This query can be plotted to show the relationship between addresses and the number of transacting partners.

Multi-chain crypto-econometrics

Beyond quality control and auditing applications, presenting cryptocurrency in a traditional format enables integration with other financial data management systems.
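In the same spirit as the balance queries above, here is a minimal Python sketch of the double-entry idea: each transaction's spent inputs become debits and its new outputs become credits, so value is preserved across the ledger. The transaction shape and field names are invented for illustration and are not the dataset's actual schema.

```python
def to_double_entry(tx):
    """Map a UTXO-style transaction to double-entry rows:
    spent inputs become debits (negative), new outputs become credits."""
    rows = []
    for address, value in tx["inputs"]:
        rows.append((address, -value))   # debit
    for address, value in tx["outputs"]:
        rows.append((address, value))    # credit
    return rows

def balances(transactions):
    """Net balance per address across all transactions."""
    totals = {}
    for tx in transactions:
        for address, value in to_double_entry(tx):
            totals[address] = totals.get(address, 0) + value
    return totals

# Alice spends a 10-unit output: 9 go to Bob, 1 returns as change (no fee).
txs = [{"inputs": [("alice", 10)], "outputs": [("bob", 9), ("alice", 1)]}]
print(balances(txs))  # {'alice': -9, 'bob': 9}
```

When no fee is paid, the rows sum to zero, which is exactly the preservation-of-value property the balance queries check; in real chains, the residual shows up as fees claimed by miners and newly minted coinbase value.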
As an example, let’s consider a common economic measure, the Gini coefficient. In the field of macroeconomics, the Gini coefficient is a member of a family of econometric measures of wealth inequality. Values range between 0.0 and 1.0, with completely distributed wealth (all members have the same amount) mapping to a value of 0.0 and completely accumulated wealth (one member has everything) mapping to 1.0.

Typically, the Gini coefficient is estimated for a specific country’s economy based on data sampling or imputation. For crypto-economies, we have complete transparency of the data at the highest possible resolution.

In addition to data transparency, one of the purported benefits of cryptocurrencies is that they allow the implementation of money to more closely resemble the implementation of digital information. It follows that a fully digitized money network will come to resemble the internet, with reduced transactional friction and fewer barriers that impede capital flow. Frequently, implicit in this narrative is the idea that capital will distribute more equally. But we don’t always observe that particular outcome, and the crypto-assets presented here display a broad spectrum of distribution patterns over time. You can read more about using the Gini coefficient to reason about crypto-economic network performance in Quantifying Decentralization.

To set a baseline for interpreting our findings, consider how resources are distributed in traditional, non-crypto economies. According to a World Bank analysis in 2013, recent Gini coefficients for world economies (reported on a 0–100 scale) have a mean value of 39.6 (with a standard deviation of 9.6). We plot a histogram of the reported data below. Some recent Gini measures include:

South Africa (2010): 67
Sweden (2008): 26
United States (2011): 48
Venezuela (2011): 39

We use the double-entry book pattern to compare the equality of cryptocurrency distribution of the Bitcoin-like datasets being released today along with Ethereum and a few Ethereum-based ERC-20 tokens.
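As a concrete reference for the definition above, here is a minimal Gini implementation (our own sketch, not the code used for the published analysis):

```python
def gini(balances):
    """Gini coefficient of a list of non-negative balances.

    0.0 means perfectly equal distribution; values approach 1.0 as one
    holder accumulates everything. Uses the standard formula over the
    sorted values: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n.
    """
    xs = sorted(balances)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Perfectly equal economy of 4 holders -> 0.0
print(gini([5, 5, 5, 5]))    # 0.0
# One holder owns everything among 4 -> 0.75 (tends to 1.0 as n grows)
print(gini([0, 0, 0, 100]))  # 0.75
```

Applied to the top 10,000 address balances per dataset per day, this is the quantity plotted in the figure discussed below.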
Primary data were normalized using a few different views (BTC-family to DE-Book, Ethereum to DE-Book, and ERC-20 to DE-Book).

In the figure below, the Gini coefficient is rendered for the top 10,000 address balances within each dataset, tabulated daily and across the entire history. The Bitcoin-like cryptocurrencies are rendered in ochre tones while the Ethereum chains and the ERC-20 Maker token are rendered in blue tones. Note that Bitcoin Cash is rendered as a dotted line, diverging from Bitcoin in mid-2017. Similarly, Ethereum Classic diverges as a dotted line away from Ethereum.

It’s difficult to make conclusive statements about the crypto-economies from the Gini coefficient, for the following reasons:

Many of the crypto-assets are stored in exchanges and don’t correspond to individual holders. This biases the Gini coefficient toward accumulation.
Gini is known to be sensitive to the inclusion of small balances, so the analysis is usually done on large addresses only. Removing small balances, as we did here, biases the Gini coefficient toward distribution.
In our analysis all addresses are treated as individual holders. In reality, multiple addresses can belong to the same individual. This can bias the Gini either toward accumulation or distribution.

And when examining the chart to compare specific cryptocurrencies:

Zcash in particular is difficult to measure because it has many so-called shielded transactions that produce addresses for which the balance cannot be accurately tabulated. It’s not clear in which direction shielded transactions bias the Gini coefficient. However, we speculate that there is asymmetric interest in using shielded transactions: larger holders are more likely to want to keep their holdings private, so the Gini for Zcash is probably biased toward distribution.
Dash has a system property whereby interest payments may be earned from the network by address balances that hold a minimum of 1,000 DASH.
Large asset holders are incentivized to split holdings amongst multiple addresses, which biases Gini toward distribution. Even so, Dash is remarkably well distributed relative to all the other cryptocurrencies examined here.

Bitcoin Cash was purportedly created to increase transfer-of-value use cases through lower transaction fees, which should ultimately lead to a lower Gini coefficient of address balances. However, we see that the opposite is true—Bitcoin Cash holdings have actually accumulated since Bitcoin Cash forked from Bitcoin. Similarly, the Ethereum Classic currency was rapidly accumulated post-divergence and remains concentrated.

The ERC-20 Maker token has a distribution that is decoupled from its parent chain, Ethereum. Maker was issued as a distinct asset on the Ethereum chain, in contrast to Ethereum’s native currency, Ether.

In early December 2018, Bitcoin, Ethereum, and Litecoin had a major distribution event, while Bitcoin Cash had a major accumulation event. This was the largest redistribution of large Bitcoin balances since December 2011. The Bitcoin redistribution appears to be related to an announced Coinbase reorganization of funds storage. Given the synchronization of the movements, it is likely that the Ethereum redistribution was also Coinbase activity. Here’s the code to query the participating addresses.

Also find a visualization of the distribution event below, with addresses as circles and lines between circles as value transfers. The original holding address is at the center. Sizes are determined by the post-event distribution of value, with peripheral circle areas proportional to the final balance and edge weights proportional to the logarithm of the amount of Ether transferred.

Studies in the domains of ecology and network science tell us that biodiversity is positively correlated with ecological stability and increases ecosystem productivity by supporting more complex community structures. The downward trend of Gini (i.e.
higher levels of diversity) for crypto-asset holdings is likely a positive sign for the future health of crypto-economies. The Gini coefficient is but one of a number of econometric indicators of wealth inequality, and other indicators may give contradictory results. Rather than drawing conclusions from the analysis presented here, we emphasize that we’ve built useful infrastructure for performing analysis, and fully expect that motivated analysts will swap in their own methods.

Address classification

Blockchain transaction history can be aggregated by address and used to analyze user behavior. To motivate further exploration, we present a simple classifier that can detect Bitcoin mining pools. As a brief historical note, mining pools were created when the difficulty of mining Bitcoin reached such a level that rewards could be expected only once every few years. Miners began to pool their resources to earn a smaller share of rewards more consistently and in proportion to their contribution to the pool in which they were mining.

First, we constructed 26 feature vectors to characterize incoming and outgoing transaction flows to each address. Next, we trained the model using labels derived from transaction signatures. Many large mining pools identify themselves in the signature of blocks’ coinbase transactions. Parsing these signatures, we labelled 10,000 addresses as belonging to known mining pools. One million other addresses were included in the dataset as “non-miners.” The query used to generate our features and labels can be seen here, and the source code for this analysis can be found in a Kaggle notebook here.

Model selection

We used a random forest classification model for its strong out-of-the-box effectiveness at building a good classifier and its ability to model nonlinear effects. Because known mining pools are a very small percentage of our data, we are interested in correctly identifying as many of them as possible. In other words, we focused on maximizing recall.
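One standard way to push a classifier toward recall on a rare class is to weight classes in inverse proportion to their frequency, which is the approach taken here. A sketch of that bookkeeping in pure Python (the "balanced" formula below matches scikit-learn's `class_weight='balanced'` heuristic; the post does not specify its exact weights, so treat this as an assumption):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Class weights inversely proportional to class frequency:
    weight(c) = n_samples / (n_classes * count(c))."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# 1,000,000 "non-miner" addresses (0) vs 10,000 known pools (1), as in the post.
labels = [0] * 1_000_000 + [1] * 10_000
weights = inverse_frequency_weights(labels)
print(weights[1] / weights[0])  # the rare mining-pool class is weighted 100x heavier

def recall(tp, fn):
    """Recall = fraction of true mining pools the model actually catches."""
    return tp / (tp + fn)
```

A dict like `weights` can be passed directly to scikit-learn's `RandomForestClassifier(class_weight=...)`, so misclassifying a rare mining-pool address costs the model far more than misclassifying a common non-miner.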
To ensure the minority class is adequately represented, we weighted classes in inverse proportion to how frequently they appear in the data.

Interpreting the results

The confusion matrix below summarizes the performance of the classification model on a subset of addresses reserved for model testing. False positives (in the upper-right quadrant) merit closer inspection. These addresses may belong to “dark” mining pools, i.e., those which are not publicly known or do not identify themselves in coinbase transaction signatures.

Because our dataset is imbalanced, as you can see in the matrix above, it is useful to examine the relationship between precision and recall. The model threshold can be adjusted to increase recall (fewer false negatives), but at the expense of decreased precision (more false positives).

We can examine relative feature importance to determine which features are the strongest predictors in our model. Unsurprisingly, given that mining pools make many small payments to their cooperating members, the following features have the most predictive power for a mining pool address:

Number of output transactions
Total number of transaction outputs
Total number of transaction inputs

For a deeper understanding of query performance on the blockchain, check out a comparison of transaction throughputs for blockchains in BigQuery.

Next steps

To get started exploring the new datasets, here are links to them in BigQuery:

Bitcoin (new location): bigquery-public-data.crypto_bitcoin
Bitcoin Cash: bigquery-public-data.crypto_bitcoin_cash
Dash: bigquery-public-data.crypto_dash
Dogecoin: bigquery-public-data.crypto_dogecoin
Ethereum (new location): bigquery-public-data.crypto_ethereum
Ethereum Classic: bigquery-public-data.crypto_ethereum_classic
Litecoin: bigquery-public-data.crypto_litecoin
Zcash: bigquery-public-data.crypto_zcash

There’s also a Kaggle notebook that illustrates how to import data into a notebook for applying machine learning algorithms to the data.
We hope these new public datasets encourage you to try out BigQuery and BigQuery ML for yourself. Or, if you run your own enterprise-focused blockchain, these datasets and sample queries can guide you as you form your own blockchain analytics.

Until then, if you have questions about this blog post, feel free to reach out to the authors on Twitter: Allen Day, Evgeny Medvedev, Nirmal AK, and Will Price. And here’s a shout-out to the outside contributors who helped develop and review this blog post: Gitcoin, for supporting Blockchain ETL; Samuel Omidiora and Yaz Khoury, for contributing to Blockchain ETL; and Aleksey Studnev of Bloxy for valuable discussions of the analyses.
Source: Google Cloud Platform

Microsoft Azure portal February 2019 update

This month we’re bringing you updates to several compute (IaaS) resources, the ability to export contents of lists of resources and resource groups as CSV files, an improvement to the layout of essential properties on overview pages, enhancements to the experience on recovery services pages, and expansions of setting options in Microsoft Intune.

Sign in to the Azure portal now and see for yourself everything that’s new. You can also download the Azure mobile app.

Here is a list of February updates to the Azure portal:

Compute (IaaS)

Add a new virtual machine (VM) directly to an application gateway or load balancer
Migrate classic virtual machines (VMs) to Azure Resource Manager
Virtual machine scale sets (VMSS) password reset

Shell

Export as CSV in All resources and Resource groups
Layout change for essential properties on overview pages

Site Recovery

Azure Site Recovery UI updates

Other

Updates to Microsoft Intune

Let’s look at each of these updates in detail.

Compute (IaaS)

Add a new VM directly to an application gateway or load balancer

We learned from you that a common scenario involves adding a new VM to a load-balanced set, such as setting up a SharePoint farm or putting together a three-tier web application. You can now add a new VM to an existing load-balancing solution during the VM creation process. When you specify networking parameters for your virtual machine, you can now choose to add it to the backend pool of an application gateway for HTTP and HTTPS traffic, or to a Standard SKU load balancer for all TCP and UDP traffic.

Migrate classic VMs to Azure Resource Manager

The Azure Resource Manager (ARM) deployment model was released nearly three years ago, and many features have been added since then that are exclusive to ARM. The Azure platform supports migrating classic Azure Service Manager (ASM) resources to ARM, and you can now use the Azure portal to migrate existing infrastructure virtual machines, virtual networks, and storage accounts to the modern ARM deployment model.

Navigate to a classic virtual machine, and select Migrate to ARM from the Resource menu under Settings.

VMSS password reset

You can now use the portal to reset the password of virtual machine scale set instances.

Navigate to a virtual machine scale set in the Azure portal, and select Reset password.

Shell

Export as CSV in All resources and Resource groups

We have recently added the ability to export the contents of lists of resources and resource groups to a CSV (comma-separated values) file.

This capability is available in the All resources screen:

It is also available in the Resource groups screen:

We have also added this capability to the screen for an individual resource group, so you can download all the resources within a single resource group to a CSV file:

Layout change for essential properties on overview pages

We’ve changed the way that essential properties are laid out on overview pages, so less vertical scrolling is required. On standard wide-screen resolutions, the essential properties (key/value) are laid out horizontally rather than vertically to save vertical space. However, you will still get the vertical layout if the essential properties do not have enough horizontal space to avoid truncation or ellipsis of the important information.

Select Virtual Machines within the menu on the left.
Select any virtual machine.

Site Recovery

Azure Site Recovery UI updates

The new enhanced IaaS VM disaster recovery experience, with multiple tabs, lets you configure replication with a single click. It’s as simple as selecting the Target region.

Select any virtual machine.
Select Disaster recovery within the menu located on the left.
Select Target region.
Select Review + Start replication.

We also now have a new immersive experience for Site Recovery infrastructure with the addition of an overview tab.

Select any Recovery service vault.
Select Site Recovery infrastructure under the subheading Manage.

Other

Updates to Microsoft Intune

The Microsoft Intune team has been hard at work on updates. You can find a complete list on the What’s new in Microsoft Intune page, including changes that affect your experience using Intune.

Did you know?

You can always test features by visiting the preview version of Azure portal.

Next steps

Thank you for all your terrific feedback. The Azure portal is built by a large team of engineers who are always interested in hearing from you.

We recently launched the Azure portal “how to” series where you can learn about a specific feature of the portal in order to become more productive using it. To learn more please watch the videos “How to manage multiple accounts, directories, and subscriptions in Azure” and “How to create a virtual machine in Azure.” Keep checking in on the Azure YouTube channel for new videos each week.

If you’re interested in learning how we streamlined resource creation in Microsoft Azure to improve usability, consistency, and accessibility, read the new Medium article, “Creation at Cloud Scale.” If you’re curious to learn more about how the Azure portal is built, be sure to watch the Microsoft Ignite 2018 session, “Building a scalable solution to millions of users.”

Don’t forget to sign in on the Azure portal and download the Azure mobile app today to see everything that’s new. Let us know your feedback in the comments section or on Twitter. See you next month.
Source: Azure

Account failover now in public preview for Azure Storage

Today we are excited to share the preview of account failover for storage accounts with geo-redundant storage (GRS) enabled. Customers using GRS or RA-GRS accounts can take advantage of this functionality to control when to fail over from the primary region to the secondary region for their storage accounts.

Customers have told us that they want to control storage account failover themselves, so they can decide when write access to the account must be restored and confirm the replication state of the secondary beforehand.

If the primary region for your geo-redundant storage account becomes unavailable for an extended period of time, you can force an account failover. When you perform a failover, all data in the storage account is failed over to the secondary region, and the secondary region becomes the new primary region. The DNS records for all storage service endpoints – blob, Azure Data Lake Storage Gen2, file, queue, and table – are updated to point to the new primary region. Once the failover is complete, clients can automatically begin writing data to the storage account using the service endpoints in the new primary region, without any code changes.
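The fixed service endpoint naming is what makes the failover transparent to clients. A small illustrative helper (the function and account name are our own; the `dfs` endpoint serves Azure Data Lake Storage Gen2):

```python
def storage_endpoints(account: str) -> dict:
    """Public-cloud service endpoints for an Azure storage account.

    After an account failover completes, the DNS records behind these
    same hostnames resolve to the new primary region, so clients keep
    using the same URLs with no code changes.
    """
    services = ["blob", "dfs", "file", "queue", "table"]
    return {s: f"https://{account}.{s}.core.windows.net" for s in services}

print(storage_endpoints("mystorageacct")["blob"])
# https://mystorageacct.blob.core.windows.net
```

The same mapping applies before and after failover; only the region behind the DNS records changes.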

The diagram below shows how account failover works. Under normal circumstances, a client writes data to a geo-redundant storage account (GRS or RA-GRS) in the primary region, and that data is replicated asynchronously to the secondary region. If write operations to the primary region fail consistently, you can trigger a failover.

After the failover is complete, write operations can resume against the new primary service endpoints.

Post failover, the storage account is configured to be locally redundant (LRS). To resume replication to the new secondary region, configure the account to use geo-redundant storage again (either RA-GRS or GRS). Keep in mind that converting a locally redundant (LRS) account to RA-GRS or GRS incurs a cost.

Account failover is supported in preview for new and existing Azure Resource Manager storage accounts that are configured for RA-GRS and GRS. Storage accounts may be general-purpose v1 (GPv1), general-purpose v2 (GPv2), or Blob Storage accounts. Account failover is currently supported in the West US 2 and West Central US regions.

You can initiate account failover using the Azure portal, Azure PowerShell, Azure CLI, or the Azure Storage Resource Provider API. The process is simple and easy to perform. The image below shows how to trigger account failover in the Azure portal in one step.

As is the case with most previews, account failover should not be used with production workloads. There is no production SLA until the feature becomes generally available.

It's important to note that account failover often results in some data loss, because geo-replication always involves latency: the secondary endpoint typically lags behind the primary. When you initiate a failover, any data that has not yet been replicated to the secondary region is lost.

We recommend that you always check the Last Sync Time property before initiating a failover to evaluate how far the secondary is behind the primary. To understand the implications of account failover and learn more about the feature, please read the documentation, “What to do if an Azure Storage outage occurs.”
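To make that check concrete, here is a hypothetical sketch (the helper name is ours; the real Last Sync Time value comes from the account's geo-replication statistics, available via the portal, PowerShell, or the REST API):

```python
from datetime import datetime, timedelta, timezone

def at_risk_window(last_sync_time: datetime, now: datetime) -> timedelta:
    """Writes accepted after Last Sync Time have not been geo-replicated
    yet; failing over now would lose them. Returns that window's size."""
    return now - last_sync_time

# Example: the secondary is 4 minutes behind the primary, so roughly the
# last 4 minutes of writes would be lost by failing over immediately.
now = datetime(2019, 2, 1, 12, 0, tzinfo=timezone.utc)
last_sync = now - timedelta(minutes=4)
print(at_risk_window(last_sync, now))  # 0:04:00
```

Comparing this window against your tolerance for data loss is the core of the pre-failover decision.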

For questions about participation in the preview or about account failover, contact xstoredr@microsoft.com. We welcome your feedback on the account failover feature and documentation!
Source: Azure

Exoplanets, astrobiological research, and Google Cloud: What we learned from NASA FDL’s Reddit AMA

Are we alone in the universe? Does intelligent life exist on other planets? If you’ve ever wondered about these things, you’re not the only one. Last summer, we partnered with NASA’s Frontier Development Lab (FDL) to help find answers to these questions—you can read about some of this work in this blog post. And as part of this work we partnered with FDL researchers to host an AMA (“ask me anything”) to answer all those burning questions from Redditlings far and wide. Here are some of the highlights:

Question: What can AI do to detect intelligent life on other planets?

Massimo Mascaro, Google Cloud Director of Applied AI: AI can help extract the maximum information from the very faint and noisy signals we can get from our best instruments. AI is really good at detecting anomalies and at digging through large amounts of data, and that’s pretty much what we do when we search for life in space.

Question: About how much data is expected to be generated during this mission? Are we looking at terabytes, 10s of terabytes, or 100s of terabytes of data?

Megan Ansdell, Planetary Scientist with a specialty in exoplanets: The TESS mission will download ~6 TB of data every month as it observes a new sector of sky containing 16,000 target stars at 2-minute cadence. The mission lifetime is at least 2 years, which means TESS will produce on the order of 150 TB of data. You can learn more about the open source deep learning models that have been developed to sort through the data here.

Question: What does it mean to simulate atmospheres?

Giada Arney, Astronomy and astrobiology (mentor): Simulating atmospheres for me involves running computer models where I provide inputs to the computer on gases in the atmosphere, “boundary conditions,” temperature, and more.
These atmospheres can then be used to simulate telescopic observations of similar exoplanets so that we can predict what atmospheric features might be observable with future observatories for different types of atmospheres.

Question: How useful is a simulated exoplanet database?

Massimo Mascaro: It’s important to have a way to simulate the variability of the data you could observe, before observing it, to understand your ability to distinguish patterns, to plan how to build and operate instruments, and even to plan how to analyze the data eventually.

Giada Arney: Having a database of different types of simulated worlds will allow us to predict what types of properties we’ll be able to observe on a diverse suite of planets. Knowing these properties will then help us to think about the technological requirements of future exoplanet-observing telescopes, allowing us to anticipate the unexpected!

Question: Which off-the-shelf Google Cloud AI/ML APIs are you using?

Massimo Mascaro, Google Cloud Director of Applied AI: We’ve leveraged a lot of Google Cloud’s infrastructure, in particular Compute Engine and GKE, both to experiment with data and to run computation at large scale (using up to 2,500 machines simultaneously), as well as TensorFlow and PyTorch running on Google Cloud to train deep learning models for the exoplanets and astrobiology experiments.

Question: What advancements in science can become useful in the future other than AI?

Massimo Mascaro: AI is just one of the techniques science can benefit from in our times. I would definitely put in that league the wide access to computation.
This is not only helping science in data analysis and AI, but in simulation, instrument design, communication, etc.

Question: What do you think are the key things that will inspire the next generation of astrophysicists, astrobiologists, and data scientists?

Sara Jennings, Deputy Director, NASA FDL: For future data scientists, I think it will be the cool problems like the ones we tackle at NASA FDL, which they will be able to solve using new and ever-increasing data and techniques. With new instruments and data analysis techniques getting so much better, we’re now at a moment where asking questions such as whether there’s life outside our planet is no longer preposterous, but real scientific work.

Daniel Angerhausen, Astrophysicist with expertise spanning astrobiology to exoplanets (mentor): I think one really important point is that we see more and more women in science. This will be such a great inspiration for girls to pursue careers in STEM. For most of the history of science we were using just 50 percent of our potential, and this will hopefully be changed by our generation.

You can read the full AMA transcript here.
Source: Google Cloud Platform

The service mesh era: Advanced application deployments and traffic management with Istio on GKE

Welcome back to our series about the Istio service mesh. In our last post, we explored the benefits of using a service mesh, and placed Istio in context with other developments in the cloud-native ecosystem. Today, we’ll dive into the “what” and “how” of installing and using Istio with a real application. Our goal is to demonstrate how Istio can help your organization decrease complexity, increase automation, and ease the burden of application management on your operations and development teams.

Install with ease; update automatically

When done right, a service mesh should feel like magic: a platform layer that “just works,” freeing up your organization to use its features to secure, connect, and observe traffic between your services. So if Istio is a platform layer, why doesn’t it come preinstalled with Kubernetes? If Istio is middleware, why are we asking developers to install it?

At Google, we are working on simplifying adoption by providing a one-click method of installing Istio on Kubernetes. Istio on GKE (https://cloud.google.com/istio/docs/istio-on-gke/overview), the first managed offering of its kind, is an add-on for Google Kubernetes Engine (GKE) that installs and upgrades Istio’s components for you—no YAML required. With Istio on GKE, you can create a cluster with Istio pre-installed, or add Istio to an existing cluster.

Installing Istio on GKE is easy, and can be done either through the Cloud Console or the command line. The add-on supports mutual TLS, meaning that with a single check-box, you can enforce end-to-end encryption for your service mesh. Once enabled, Istio on GKE provisions the Istio control plane for you, and enables Stackdriver integrations. You get to choose into which namespaces, if any, the Istio sidecar proxy is injected.

Now that we have Istio installed on a GKE cluster, let’s explore how to use it with a real application.
For this example, we’ll use the Hipster Shop demo, a microservices-based web application. While this sample app has multiple components, in this post we’ll focus on Product Catalog, which serves the list of products above. You can follow along with the step-by-step tutorial here.

Zero effort Stackdriver: Monitoring, logging, and tracing

When you use Istio on GKE, the Stackdriver Monitoring API is provisioned automatically, along with an Istio adapter that forwards service mesh metrics to Stackdriver. This means that you have access to Istio metrics right away, alongside hundreds of existing GCP and GKE metrics.

Stackdriver includes a feature called the Metrics Explorer, which allows you to use filters and aggregations together with Stackdriver’s built-in metrics to gain new insights into the behavior of your services. The example below shows an Istio metric (requests per second) grouped across each microservice in our sample application. You can add any Metrics Explorer chart to a new or existing Stackdriver Dashboard. Using Dashboards, you can also combine Istio metrics with your application metrics, giving you a more complete view into the status of your application.

You can also use Stackdriver Monitoring to set SLOs using Istio metrics—for example, latency, or non-200 response codes. Then, you can set Stackdriver alerting policies against those SLOs to notify you when a policy reaches a failing threshold. In this way, Istio on GKE sets up your organization with SRE best practices, out of the box.

Istio on GKE also makes tracing easy. With tracing, you can better understand how quickly your application is handling incoming requests, and identify performance bottlenecks. When Stackdriver Trace is enabled and you’ve instrumented tracing in your application, Istio automatically collects end-to-end latency data and displays it in real time in the GCP Console.

On the logging front, Stackdriver also creates a number of logs-based metrics.
With logs-based metrics, you can extract latency information from log entries, or record the number of log entries that contain a particular message. You can also develop custom metrics to keep track of logs that are particularly important to your organization. Then, using the Logs Viewer, you can export the logs to Google Cloud data solutions, including Cloud Storage and BigQuery, for storage and further analysis.

Traffic management and visualization

In addition to providing visibility into your service mesh, Istio supports fine-grained, rule-based traffic management. These features give you control over how traffic and API calls flow between your services. As the first post in this series explains, adopting a service mesh lets you decouple your applications from the network. And unlike Kubernetes services, where load balancing is tethered to the number of running pods, Istio allows you to decouple traffic flow from infrastructure scaling through granular percentage-based routing.

Let’s run through a traffic routing example, using a canary deployment. A canary deployment routes a small percentage of traffic to a new version of a microservice, then allows you to gradually roll it out to the whole user base, while phasing out and retiring the old version. If something goes wrong during this process, traffic can be switched back to the old version.

In this example, we create a new version of the ProductCatalog microservice. The new version (“v2”) is deployed to Kubernetes alongside the working (“v1”) deployment. Then, we create an Istio VirtualService (traffic rule) that sends 25% of ProductCatalog traffic to v2. We can deploy this rule to the Kubernetes cluster, alongside our application.
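The post doesn’t reproduce the rule’s YAML, but a 75/25 split of this kind typically looks like the following sketch. The host, subset, and label names here are assumptions based on the demo’s naming; a DestinationRule defines the version subsets that the VirtualService’s weights refer to:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productcatalogservice
spec:
  host: productcatalogservice
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productcatalogservice
spec:
  hosts:
  - productcatalogservice
  http:
  - route:
    - destination:
        host: productcatalogservice
        subset: v1
      weight: 75
    - destination:
        host: productcatalogservice
        subset: v2
      weight: 25
```

Rolling back amounts to setting the v1 weight back to 100 and removing the v2 route entry.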
With this policy, no matter how much production traffic goes to ProductCatalog—and how many pods scale up as a result—Istio ensures that the right percentage of traffic goes to the specified version of ProductCatalog. We’ll also use another feature of Istio and Envoy: for demo purposes, we inject a three-second latency into all ProductCatalog v2 requests.

Once the canary version is deployed to GKE, we can open Metrics Explorer to see how ProductCatalog v2 is performing. Notice that we are looking at the Istio server response latency metric, and we have grouped by destination workload name—this tells us the time it takes for each service to respond to requests. Here, we can see ProductCatalog v2’s injected three-second latency.

From here, it’s easy to roll back from v2 to v1. We can do this by updating the Istio VirtualService to return 100% of traffic to v1, then deleting the v2 Kubernetes deployment. Although this example demonstrates a manual canary deployment, often you’ll want to automate the process of promoting a canary: increasing traffic percentages, and scaling down the old version. Open-source tools like Flagger can help automate percentage-based traffic shifting for Istio.

Istio supports many other traffic management rules beyond traffic splitting, including content-based routing, timeouts and retries, circuit breaking, and traffic mirroring for testing in production. As in this canary example, these rules can be defined with the same declarative Istio building blocks. We hope this example gives you a taste of how, together, Istio and Stackdriver help simplify complex traffic management operations.

What’s next?

To get some more hands-on experience with Istio on GKE, check out the companion demo.
You can find the instructions for getting started on GitHub. To read more about Istio, Stackdriver, and traffic management, see:

Drilling down into Stackdriver Service Monitoring (GCP blog)
Incremental Istio Part 1, Traffic Management (Istio blog)

Stay tuned for the next post, which will be all about security with Istio.
Source: Google Cloud Platform