Bing Maps v8 available in Azure IoT Suite Remote Monitoring preconfigured solution

Azure IoT Suite is designed to help customers and partners quickly connect and manage their devices in a simple and reliable manner, realizing business value from an IoT solution. Bing Maps (v7) has been integrated into the Azure IoT Suite Remote Monitoring preconfigured solution to visualize device locations and status on the preconfigured solution dashboard. We are pleased to announce that the more advanced Bing Maps v8 control has been available for the Remote Monitoring preconfigured solution since July 1, 2017. It provides customers and partners:

High performance: data renders up to 10x faster.
Extended culture support for 95 new languages: the Bing Maps REST services can perform geocode and route requests in up to 117 languages.
New features based on developer feedback.

To leverage the new advanced map functionality, you can do one of the following:

Deploy a new Remote Monitoring solution: all solutions deployed from http://azureiotsuite.com use the new v8 control.
Update to the latest code: get all new features from the master branch, such as Device Management and Bing Maps v8.
Update only the map control: pick up just the map-control changes by referencing the single commit.

In all scenarios, the existing Bing Maps API key under your account remains valid; no action is required from customers as part of this migration.

Learn how to re-deploy over an existing Remote Monitoring preconfigured solution.

Source: Azure

Pokémon Go: The legends are coming

Shortly before the Pokémon Go Fest in Chicago, Niantic is rolling out a new version of Pokémon Go: version 0.69.0 noticeably improves the gameplay – just in time for the long-awaited legendary Pokémon.

Source: Heise Tech News

Introducing the Azure Analysis Services web designer

Today we are releasing a preview of the Azure Analysis Services web designer. This new browser-based experience will allow developers to start creating and managing Azure Analysis Services (AAS) semantic models quickly and easily. While SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) are still the primary tools for development, this new experience is intended to make simple changes fast and easy. It is great for getting started on a new model or to do things such as adding a new measure to a development or production AAS model.

This initial release includes three major capabilities: model creation, model editing, and querying. A model can be created directly from Azure SQL Database and SQL Data Warehouse, or imported from Power BI Desktop PBIX files. When creating from a database, you can choose which tables to include and the tool will create a Direct Query model against your data source. You can then view the model metadata, edit the Tabular Model Scripting Language (TMSL) JSON, and add measures. There are shortcuts to open a model in Power BI Desktop or Excel, or even to open a Visual Studio project created from the model on the fly. You can also run simple queries against the model to see the data or test out a new measure.

Navigate to your Azure Analysis Services server in the Azure Portal and click the “Open” link from the Overview blade.

Once the web designer opens, it will take you directly to your server where you can examine existing models or create new ones.

Now that you have an AAS server, you can add a model directly from a database or you can import a Power BI Desktop PBIX file. Note that for PBIX import, only Azure SQL Database, Azure SQL Data Warehouse, Oracle, and Teradata are supported at this time. Also, Direct Query models are not yet supported for import. We will be adding new connection types for import every month, so let us know if your desired data source is not yet supported.

Now that you have created a new model from a database or PBIX file, you can see it on the list of models in the AAS server and can edit the model or browse the model. To edit the model, click the pencil icon next to the model name.

When editing, you have access to the full TMSL and can make changes directly to the metadata of the model. Keep in mind that if you save changes, it is updating the model in the cloud. It is a best practice to make changes to a development server and propagate changes to production with a tool such as BISM Normalizer. For prototyping and development, you can make changes quickly and easily this way.
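The kind of metadata change the editor makes can be sketched with plain JSON manipulation. This is an illustrative fragment only: the model, table, and measure names below are hypothetical, and a real TMSL model contains many more properties than shown here.

```python
import json

# Hypothetical, heavily trimmed TMSL-style model metadata.
tmsl = json.loads("""
{
  "name": "SalesModel",
  "tables": [
    { "name": "Sales", "measures": [] }
  ]
}
""")

# Adding a simple DAX measure is the same kind of edit the web designer
# saves back to the model in the cloud.
new_measure = {
    "name": "Total Revenue",
    "expression": "SUM(Sales[Revenue])"
}
tmsl["tables"][0].setdefault("measures", []).append(new_measure)

print(json.dumps(tmsl, indent=2))
```

Because a save updates the live model, this is exactly the sort of change you would first try against a development server.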

If you click Query, you can view data through a simple view, or you can use the “Open in” button to open the model in Power BI Desktop or Excel. We think of the simple web view as a quick way to check your model or the new measures you have created without having to use SSMS.

Finally you will notice that you can press the blue “Measures” link on the field list to open the simple measure editor. This is a great place to see all of your measures in one place, and also to quickly add measures to test in the UX.

That is a brief introduction to the new Azure Analysis Services web designer. We hope you find the tool useful and we welcome your feedback in the Azure Analysis Services feedback forum. We think this new experience will help you get started quickly and give you new options to make changes right in the web experience. We will be adding additional functionality each month, so let us know what you would like to see next!

Learn more about Azure Analysis Services.
Source: Azure

TCP BBR congestion control comes to GCP – your Internet just got faster

By Neal Cardwell, Senior Staff Software Engineer; Yuchung Cheng, Software Engineer; C. Stephen Gunn, Packet Mechanic; Soheil Hassas Yeganeh, Software Engineer; Van Jacobson, Research Scientist and Amin Vahdat, Google Fellow

We’re excited to announce that Google Cloud Platform (GCP) now features a cutting-edge new congestion control algorithm, TCP BBR, which achieves higher bandwidths and lower latencies for internet traffic. This is the same BBR that powers TCP traffic from google.com and that improved YouTube network throughput by 4 percent on average globally — and by more than 14 percent in some countries.

“BBR allows the 500,000 WordPress sites on our digital experience platform to load at lightning speed. According to Google’s tests, BBR’s throughput can reach as much as 2,700x higher than today’s best loss-based congestion control; queueing delays can be 25x lower. Network innovations like BBR are just one of the many reasons we partner with GCP.” — Jason Cohen, Founder and CTO, WP Engine

GCP customers, like WP Engine, automatically benefit from BBR in two ways:

From GCP services to cloud users: First, when GCP customers talk to GCP services like Cloud Bigtable, Cloud Spanner or Cloud Storage, the traffic from the GCP service to the application is sent using BBR. This means speedier access to your data.

From Google Cloud to internet users: When a GCP customer uses Google Cloud Load Balancing or Google Cloud CDN to serve and load balance traffic for their website, the content is sent to users’ browsers using BBR. This means faster webpage downloads for users of your site.

At Google, our long-term goal is to make the internet faster. Over the years, we’ve made changes to make TCP faster, and developed the Chrome web browser and the QUIC protocol. BBR is the next step. Here’s the paper describing the BBR algorithm at a high level, the Internet Drafts describing BBR in detail and the BBR code for Linux TCP and QUIC.

What is BBR?

BBR (“Bottleneck Bandwidth and Round-trip propagation time”) is a new congestion control algorithm developed at Google. Congestion control algorithms, which run inside every computer, phone or tablet connected to a network, decide how fast to send data.

How does a congestion control algorithm make this decision? The internet has largely used loss-based congestion control since the late 1980s, relying only on indications of lost packets as the signal to slow down. This worked well for many years, because internet switches’ and routers’ small buffers were well-matched to the low bandwidth of internet links. As a result, buffers tended to fill up and drop excess packets right at the moment when senders had really begun sending data too fast.

But loss-based congestion control is problematic in today’s diverse networks:

In shallow buffers, packet loss happens before congestion. With today’s high-speed, long-haul links that use commodity switches with shallow buffers, loss-based congestion control can result in abysmal throughput because it overreacts, halving the sending rate upon packet loss, even if the packet loss comes from transient traffic bursts (this kind of packet loss can be quite frequent even when the link is mostly idle).

In deep buffers, congestion happens before packet loss. At the edge of today’s internet, loss-based congestion control causes the infamous “bufferbloat” problem, by repeatedly filling the deep buffers in many last-mile links and causing seconds of needless queuing delay.
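The loss-based behavior described above can be sketched in a few lines. This is a toy AIMD-style illustration in the spirit of classic Reno, not the CUBIC algorithm itself (CUBIC grows the window along a cubic curve, but it likewise cuts the window sharply on loss):

```python
# Toy sketch of loss-based congestion control: grow the congestion
# window (cwnd, in packets) gradually, cut it sharply on any loss.
def on_ack(cwnd):
    return cwnd + 1            # additive increase per round trip

def on_loss(cwnd):
    return max(cwnd // 2, 1)   # multiplicative decrease on packet loss

cwnd = 10
cwnd = on_ack(cwnd)    # slow growth while all is well -> 11
cwnd = on_loss(cwnd)   # a single transient burst loss halves the rate -> 5
print(cwnd)
```

The key point is that the sender reacts identically whether the loss signals real congestion or just a transient burst on a shallow-buffered switch.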

We need an algorithm that responds to actual congestion, rather than packet loss. BBR tackles this with a ground-up rewrite of congestion control. We started from scratch, using a completely new paradigm: to decide how fast to send data over the network, BBR considers how fast the network is delivering data. For a given network connection, it uses recent measurements of the network’s delivery rate and round-trip time to build an explicit model that includes both the maximum recent bandwidth available to that connection, and its minimum recent round-trip delay. BBR then uses this model to control both how fast it sends data and the maximum amount of data it’s willing to allow in the network at any time.
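The model described above can be sketched in a few lines. This is a toy illustration of the idea, not Google's BBR implementation, and the sample delivery rates and RTTs are made up:

```python
# BBR's core model: windowed max of the measured delivery rate estimates
# the bottleneck bandwidth (BtlBw); windowed min of the measured RTT
# estimates the round-trip propagation delay (RTprop).
def bbr_model(delivery_rate_samples_bps, rtt_samples_s):
    btl_bw = max(delivery_rate_samples_bps)   # bottleneck bandwidth estimate
    rt_prop = min(rtt_samples_s)              # propagation delay estimate
    bdp_bytes = btl_bw * rt_prop / 8          # bandwidth-delay product
    return btl_bw, rt_prop, bdp_bytes

# Example: samples from a 10 Mbps path with a 40 ms base RTT.
btl_bw, rt_prop, bdp = bbr_model([8.2e6, 9.7e6, 10.0e6],
                                 [0.047, 0.040, 0.058])
print(btl_bw, rt_prop, bdp)  # roughly 10 Mbps, 40 ms, ~50 kB in flight
```

The pacing rate is derived from BtlBw, and the cap on data in flight from the bandwidth-delay product, so the sender fills the pipe without filling the bottleneck queue.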

Benefits for Google Cloud customers

Deploying BBR has resulted in higher throughput, lower latency and better quality of experience across Google services, relative to the previous congestion control algorithm, CUBIC. Take, for example, YouTube’s experience with BBR. Here, BBR yielded 4 percent higher network throughput, because it more effectively discovers and utilizes the bandwidth offered by the network. BBR also keeps network queues shorter, reducing round-trip time by 33 percent; this means faster responses and lower delays for latency-sensitive applications like web browsing, chat and gaming. Moreover, by not overreacting to packet loss, BBR provides 11 percent higher mean-time-between-rebuffers. These represent substantial improvements for all large user populations around the world, across both desktop and mobile users.

These results are particularly impressive because YouTube is already highly optimized; improving the experience for users watching video has long been an obsession here at Google. Ongoing experiments provide evidence that even better results are possible with continued iteration and tuning.

The benefits of BBR translate beyond Google and YouTube, because they’re fundamental. A few synthetic microbenchmarks illustrate the nature (though not necessarily the typical magnitude) of the advantages:

Higher throughput: BBR enables big throughput improvements on high-speed, long-haul links. Consider a typical server-class computer with a 10 Gigabit Ethernet link, sending over a path with a 100 ms round-trip time (say, Chicago to Berlin) and a packet loss rate of 1%. In such a case, BBR’s throughput is 2,700x higher than today’s best loss-based congestion control, CUBIC (CUBIC gets about 3.3 Mbps, while BBR gets over 9,100 Mbps). Because of this loss resiliency, a single BBR connection can fully utilize a path with packet loss. This makes it a great match for HTTP/2, which uses a single connection, and means users no longer need to resort to workarounds like opening several TCP connections to reach full utilization. The end result is faster traffic on today’s high-speed backbones, and significantly increased bandwidth and reduced download times for webpages, videos or other data.

Lower latency: BBR enables significant reductions in latency in last-mile networks that connect users to the internet. Consider a typical last-mile link with 10 megabits per second of bandwidth, a 40 ms round-trip time, and a typical 1000-packet bottleneck buffer. In a scenario like this, BBR keeps queuing delay 25x lower than CUBIC (CUBIC has a median round-trip time of 1090 ms, versus just 43 ms for BBR). By keeping queues, and thus queuing delays, short on last-mile links, BBR makes web surfing faster and video conferencing and gaming more responsive, even while videos or software downloads share the link. Because of this ability to curb bufferbloat, one might say that BBR could also stand for BufferBloat Resilience, in addition to Bottleneck Bandwidth and Round-trip propagation time.
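The figures quoted in these two scenarios are easy to sanity-check with back-of-the-envelope arithmetic. The 1500-byte packet size below is our assumption; the text does not state it.

```python
# 1) Throughput ratio on the lossy long-haul path:
bbr_mbps = 9100.0
cubic_mbps = 3.3
print(round(bbr_mbps / cubic_mbps))  # 2758, i.e. the "2,700x" figure

# 2) Bufferbloat on the last-mile link: time to drain a full
# 1000-packet buffer at 10 Mbps, assuming 1500-byte packets.
packets = 1000
packet_bytes = 1500
link_bps = 10e6
queue_delay_s = packets * packet_bytes * 8 / link_bps
print(queue_delay_s)  # 1.2 s of queuing on top of the 40 ms base RTT
```

A standing queue of roughly a second of delay is consistent with CUBIC's reported 1090 ms median round-trip time on that link, since loss-based control keeps the deep buffer nearly full.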

GCP is continually evolving, leveraging Google technologies like Espresso, Jupiter, Andromeda, gRPC, Maglev, Cloud Bigtable and Spanner. Open source TCP BBR is just the latest example of how Google innovations provide industry-leading performance.

If you’re interested in participating in discussions, keeping up with the latest BBR-related news, watching videos of talks about BBR or pitching in on open source BBR development or testing, join the public bbr-dev e-mail group.
Source: Google Cloud Platform