Azure Stream Analytics Tools for Visual Studio

Have you had a chance to try out the public preview of the Azure Stream Analytics Tools for Visual Studio yet? If not, read through this blog post to get a sense of the Stream Analytics development experience in Visual Studio. These tools provide an integrated Azure Stream Analytics development workflow in Visual Studio, helping you quickly author query logic and easily test, debug, and diagnose your Stream Analytics jobs.

Using these tools, you get not only a best-in-class query authoring experience, but also the power of IntelliSense (code completion), syntax highlighting, and error markers. You can now test queries on your local development machine with representative sample data to speed up development iterations. Seamless integration with the Azure Stream Analytics service lets you submit jobs, monitor live job metrics, and export existing jobs to projects with just a few clicks. You can also naturally leverage Visual Studio's source control integration to manage your job configurations and queries.

Use Visual Studio Project to manage a Stream Analytics job

To create a brand-new job, just create a new project from the built-in Stream Analytics template. The job input and output definitions are stored in JSON files, and the job query is saved in the Script.asaql file. Double-click the input and output JSON files inside the project and you will find that their settings UI is very similar to the portal UI.

Using Visual Studio tools to author Stream Analytics queries

Double-click the Script.asaql file to open the query editor; its IntelliSense (code completion), syntax highlighting, and error markers make authoring very efficient. In addition, if a local input file is specified and defined correctly, or you have previously sampled data from the live input sources, the query editor will suggest the column names of the input source when you enter the input name.

Testing locally with sample data

Once you finish authoring the query, you can quickly test it on your local machine by specifying local sample data as input. This speeds up your development iterations.
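While the tools run your actual ASA query against the sample file, the windowed aggregations such queries express can be approximated in plain Python to build intuition for what a local test verifies. A minimal sketch, assuming hypothetical events with epoch-second timestamps (this is not the real ASA query language or test harness):

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds):
    """Group events into fixed, non-overlapping time windows and count
    events per window -- roughly the shape of an ASA query that groups
    by a tumbling window."""
    counts = Counter()
    for event in events:
        # Each event is a dict with a 'timestamp' in epoch seconds
        # (hypothetical local sample data, not a real ASA input file).
        window_start = (event["timestamp"] // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

# Local "sample data": three events in the first 10-second window,
# one event in the next.
sample = [
    {"timestamp": 1},
    {"timestamp": 4},
    {"timestamp": 9},
    {"timestamp": 12},
]
print(tumbling_window_counts(sample, 10))  # {0: 3, 10: 1}
```

Testing this kind of logic against a small, known input is exactly what the local-run feature lets you do for the real query, without deploying anything to Azure.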

View live job metrics

After you validate the query test result, click “Submit to Azure” to create a streaming job under your Azure subscription. Once the job is created, you can start and monitor your job inside the job view.

View and update jobs in Server Explorer

You can browse all the Stream Analytics jobs under your subscriptions in Server Explorer and expand a job node to view its status and metrics. If needed, you can directly update job queries and job configurations, or export an existing job to a project.

How do I get started?

First install Visual Studio (2015, 2013 Update 4, or 2012) and the Microsoft Azure SDK for .NET version 2.7.1 or above using the Web Platform Installer. Then get the latest ASA tools from the Download Center.

More information can be found at Tutorial: Use Azure Stream Analytics Tools for Visual Studio.
Source: Azure

Join us at Next, right now

By Alex Barrett, Editor, Google Cloud Platform blog

If you’re reading this blog post, stop right now and head over to the livestream of the Google Cloud Next ’17 keynote, featuring Diane Greene, Senior Vice President of Google Cloud; Sundar Pichai, CEO of Google; Eric Schmidt, Chairman of Alphabet; and Fei-Fei Li, Chief Scientist for Google Cloud Machine Learning. We promise you’ll be glad you did.

After this morning’s keynote ends, we’ll kick off over 200 breakout sessions, where Googlers, customers and partners will discuss new, efficient and exciting ways of using Google Cloud technologies, including Google Cloud Platform (GCP), G Suite, Maps APIs and mobile.

If you’re at the show, be sure to check out the show floor and Sandbox, an interactive show-floor experience that showcases the amazing things you can build with Google Cloud technology.

And stay tuned to this channel for breaking news and announcements. Until then, enjoy the show!
Source: Google Cloud Platform

SONiC: The networking switch software that powers the Microsoft Global Cloud

Running one of the largest clouds in the world, Microsoft has gained a lot of insight into building and managing a global, high performance, highly available, and secure network. Experience has taught us that with hundreds of datacenters and tens of thousands of switches, we needed to:

Use best-of-breed switching hardware for the various tiers of the network.
Deploy new features without impacting end users.
Roll out updates securely and reliably across the fleet in hours instead of weeks.
Utilize cloud-scale deep telemetry and fully automated failure mitigation.
Enable our Software-Defined Networking software to easily control all hardware elements in the network using a unified structure to eliminate duplication and reduce failures.

To address these requirements, Microsoft pioneered Software for Open Networking in the Cloud (SONiC), a breakthrough for network switch operations and management. Microsoft open-sourced this innovation to the community, making it available on our SONiC GitHub Repository. SONiC is a uniquely extensible platform, with a large and growing ecosystem of hardware and software partners, that offers multiple switching platforms and various software components.

Switch Abstraction Interface (SAI) accelerates hardware innovation

SONiC is built on the Switch Abstraction Interface (SAI), which defines a standardized API. Network hardware vendors can use it to develop innovative hardware platforms that achieve great speeds while keeping the programming interface to the ASIC (application-specific integrated circuit) consistent. Microsoft open-sourced SAI in 2015. This approach enables operators to take advantage of rapid innovation in silicon, CPU, power, port density, optics, and speed, while preserving their investment in one unified software solution across multiple platforms.
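SAI itself is a C API, but the decoupling it provides can be sketched in a few lines of Python. The vendor classes below are hypothetical, and the single `create_route` call stands in for the much larger real SAI surface; the point is that one network stack drives any compliant ASIC through the same interface:

```python
from abc import ABC, abstractmethod

class SwitchASIC(ABC):
    """Stand-in for a standardized switch API: the network stack
    programs every vendor's ASIC through the same calls."""
    @abstractmethod
    def create_route(self, prefix: str, next_hop: str) -> bool:
        ...

class VendorAChip(SwitchASIC):
    """Hypothetical vendor implementation; a real one would call
    the vendor's ASIC SDK here."""
    def __init__(self):
        self.routes = {}
    def create_route(self, prefix, next_hop):
        self.routes[prefix] = next_hop
        return True

class VendorBChip(SwitchASIC):
    """A second hypothetical vendor, programmed identically."""
    def __init__(self):
        self.routes = {}
    def create_route(self, prefix, next_hop):
        self.routes[prefix] = next_hop
        return True

def install_default_route(asic: SwitchASIC) -> bool:
    # One software stack, any compliant ASIC.
    return asic.create_route("0.0.0.0/0", "10.0.0.1")

for chip in (VendorAChip(), VendorBChip()):
    assert install_default_route(chip)
```

Swapping in a new ASIC then means writing a new implementation of the interface, not rewriting the network stack above it.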

Figure 1. SONiC: one investment to unblock hardware innovation

Modular design with containers accelerates software evolution

SONiC is the first solution to break monolithic switch software into multiple containerized components. SONiC enables fine-grained failure recovery and in-service upgrades with zero downtime. It does this in conjunction with Switch State Service (SWSS), a service that takes advantage of open source key-value pair stores to manage all switch state requirements and drives the switch toward its goal state. Instead of replacing the entire switch image for a bug fix, you can now upgrade the flawed container with the new code, including protocols such as Border Gateway Protocol (BGP), without data plane downtime. This capability is a key element in the serviceability and scalability of the SONiC platform.
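The goal-state pattern SWSS follows can be illustrated with a short Python sketch. SWSS actually stores state in a key-value store (not plain dicts), and the key names below are made up for illustration; the idea is that components publish a goal state, and a reconciler computes the operations that drive the switch toward it:

```python
def reconcile(current: dict, goal: dict):
    """Compute the operations needed to drive the switch from its
    current state to the goal state held in a key-value store.
    (Illustrative sketch; dicts stand in for the real store.)"""
    ops = []
    for key, value in goal.items():
        if current.get(key) != value:
            ops.append(("set", key, value))
    for key in current.keys() - goal.keys():
        ops.append(("delete", key))
    return ops

# Hypothetical switch state: one port stays up, one port must be
# brought up, and a stale VLAN entry must be removed.
current = {"PORT|Ethernet0": "up", "VLAN|100": "active"}
goal = {"PORT|Ethernet0": "up", "PORT|Ethernet4": "up"}
print(reconcile(current, goal))
# [('set', 'PORT|Ethernet4', 'up'), ('delete', 'VLAN|100')]
```

Because each component converges on the goal state independently, restarting or upgrading one container simply re-runs this reconciliation rather than requiring the whole switch image to restart.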

Containerization also enables SONiC to be extremely extensible. At its core, SONiC is aimed at cloud networking scenarios, where simplicity and managing at scale are the highest priority. Operators can plug in new components, third-party, proprietary, or open sourced software, with minimum effort, and tailor SONiC to their specific scenarios.

Figure 2. SONiC: plug and play extensibility

Monitoring and diagnostic capabilities are also key for large-scale network management. Microsoft continuously innovates in areas such as early detection of failure, fault correlation, and automated recovery mechanisms without human intervention. These innovations, such as Netbouncer and Everflow, are all available in SONiC, and they represent the culmination of years of operations experience.

Rapidly growing ecosystem

SONiC and SAI have gained wide industry support over the last year. Most major network chip vendors are supporting SAI on their flagship ASICs:

Barefoot Networks: Tofino

Broadcom Limited: Trident and Tomahawk

Cavium: XPliant

Centec Networks: Goldengate

Mellanox Technologies: Spectrum

Marvell Technology Group Ltd.: Prestera

Nephos Inc: Taurus

The community is actively adding new extensions and advanced capabilities to SAI releases:

Broadcom, Marvell, Barefoot, and Microsoft are driving advanced monitoring and telemetry in SAI to enable deep visibility into the ASIC and powerful analytic capabilities.

Mellanox, Cavium, Dell, and Centec are contributing protocol enhancements to SAI for richer protocol support and large-scale network scenarios; for example, MPLS, an enhanced ACL model, a bridge model, L2/L3 multicast, segment routing, and 802.1BR.

Dell and Metaswitch are bringing failure resiliency and performance to SAI by adding L3 fast reroute and BFD proposals.

The pipeline model, driven by Mellanox and Broadcom, and multi-NPU support, driven by Dell, broaden the range of hardware that SAI and the network stack built on top of it can apply to.

At the Open Compute Project U.S. Summit 2017, we will demonstrate 100-gigabit switches from multiple switch hardware companies, with SONiC enabled on their latest and fastest SKUs. The platforms that support SONiC are:

Arista Networks: 7050 and 7060 series

Centec Networks: E580 and E582 series

Dell Inc: S6000-ON, S6100-ON and Z9100-ON series

Edge-core Networks: AS7512 series, Wedge-100b

Facebook: Wedge-100

Ingrasys Technology Inc.: S9100 series

Marvell Technology Group Ltd.: RD-BC3-4825G6CG-A4 and RD-ARM-48XG6CG-A4 series

Mellanox Technologies: SN2700 series

With SONiC, the cloud community has choices: they can cherry-pick best-of-breed solutions. Partners are joining the ecosystem to make it richer:

Arista is offering containerized EOS components, such as EOS BGP, to run on top of SONiC. The SONiC community now has easy access to Arista’s rich EOS software suite.

Canonical has enabled SONiC as a snap for Ubuntu. This allows MAAS to deploy SONiC to switches, as well as using SONiC switches to deploy servers. Unified network and server deployment will significantly improve the agility of operators.

Docker has enabled the use of Swarm to manage SONiC containers. With its simple, declarative service model, Swarm can manage and update SONiC at scale.

Mellanox is using SONiC to unleash the hardware-based packet generation capabilities in the Spectrum ASIC. This is a highly sought-after capability that will help with diagnosis and troubleshooting.

By working with the community and our partner ecosystem, we’re looking to revolutionize networking for today and into the future.

SONiC is fully open sourced on GitHub and is available to industrial collaborators, researchers, students, and innovators alike. With the SONiC containerized approach and software simulation tools, developers can experience the switch software used in Microsoft Azure, one of the world’s largest cloud platforms, and contribute components that will benefit millions of customers. SONiC will benefit the entire cloud community, and we’re very excited for the increasingly strong partner momentum behind the platform.

Source: Azure

Enabling cloud workloads through innovations in Silicon

Today, I’ll be talking to a global community of attendees at the 2017 Open Compute Project (OCP) U.S. Summit about the exciting disruptions we see in the processor ecosystem, and how Microsoft is embracing the innovation created by these disruptions for the future of our cloud infrastructure.

The demand for cloud services continues to grow at a dramatic pace, and as a result we are always innovating and looking out for technology disruptions that can help us scale more rapidly. We see one such disruption taking shape in the silicon manufacturing industry. The economic slowdown of Moore’s Law and the tremendous scale of the high-end smartphone market have expanded the number of processor suppliers, leading to a “Cambrian” explosion of server options.

We’re announcing that we are driving innovation with ARM server processors for use in our datacenters. We have been working closely with multiple ARM server suppliers, including Qualcomm and Cavium, to optimize their silicon for our use. We have been running evaluations side by side with our production workloads, and what we see is quite compelling. The high Instructions Per Cycle (IPC) counts, high core and thread counts, connectivity options, and integration that we see across the ARM ecosystem are very exciting and continue to improve.

Also, due to the scale required for certain cloud services, i.e. the number of machines allocated to them, it becomes more economically feasible to optimize the hardware to the workload instead of the other way around, even if that means changing the Instruction Set Architecture (ISA).

When we looked at the variety of server options, ARM servers stood out for us for a number of reasons:

There is a healthy ecosystem with multiple ARM server vendors which ensures active development around technical capabilities such as cores and thread counts, caches, instructions, connectivity options, and accelerators.
There is an established developer and software ecosystem for ARM. We have seen ARM servers benefit from the high-end cell phone software stacks, and this established developer ecosystem has significantly helped Microsoft in porting its cloud software to ARM servers.
We feel that ARM is well positioned for future ISA enhancements because its opcode sets are orthogonal. For example, with out-of-order execution running out of steam and with research looking at novel data-flow architectures, we feel that ARM designs are much more amenable to handle those new technologies without disrupting their installed software base.

We have been working closely with multiple ARM suppliers, including Qualcomm and Cavium, on optimizing their hardware for our datacenter needs. One of the biggest hurdles to enable ARM servers is the software. Rather than trying to port every one of our many software components, we looked at where ARM servers are applicable and where they provide value to us. We found that they provide the most value for our cloud services, specifically our internal cloud applications such as search and indexing, storage, databases, big data and machine learning. These workloads all benefit from high-throughput computing. 

To enable these cloud services, we’ve ported a version of Windows Server, for our internal use only, to run on ARM architecture. We have ported language runtime systems and middleware components, and we have ported and evaluated applications, often running these workloads side-by-side with production workloads.

During the OCP US Summit, Qualcomm, Cavium, and Microsoft will be demonstrating the version of Windows Server ported for our internal use running on ARM-based servers.

The Qualcomm demonstration will run on the Qualcomm Centriq 2400 ARM server processor, their recently announced 10nm, 48-core server processor with Qualcomm’s most advanced interfaces for memory, network, and peripherals.

The demonstration with Cavium runs on their flagship 2nd generation 64-bit ThunderX2 ARMv8-A server processor SoCs for datacenter, cloud and high performance computing applications.

Qualcomm and Cavium, the latter in collaboration with leading server supplier Inventec, have each developed an Open Compute-based motherboard compatible with Microsoft’s Project Olympus that allows us to seamlessly deploy these new servers in our datacenters.

We feel ARM servers represent a real opportunity and some Microsoft cloud services already have future deployment plans on ARM servers.  We are working with ARM Limited on design specifications and server standard requirements and we are committed to collaborate with the community on open standards to advance ARM64 servers for cloud services applications.

You can read about our other announcements during the 2017 Open Compute Project Summit at this blog.
Source: Azure

Ecosystem momentum positions Microsoft’s Project Olympus as de facto open compute standard

Last November we introduced Microsoft’s Project Olympus – our next generation cloud hardware design and a new model for open source hardware development. Today, I’m excited to address the 2017 Open Compute Project (OCP) U.S. Summit to share how this first-of-its-kind open hardware development model has created a vibrant industry ecosystem for datacenter deployments across the globe in both cloud and enterprise.

Since opening our first datacenter in 1989, Microsoft has developed one of the world’s largest cloud infrastructures, with servers hosted in over 100 datacenters worldwide. When we joined OCP in 2014, we shared the same server and datacenter designs that power our own Azure hyper-scale cloud, so organizations of all sizes could take advantage of innovations to improve the performance, efficiency, power consumption, and costs of datacenters across the industry. As of today, 90% of the servers we procure are based on designs that we have contributed to OCP.

Over the past year, we collaborated with the OCP to introduce a new hardware development model under Project Olympus for community based open collaboration. By contributing cutting edge server hardware designs much earlier in the development cycle, Project Olympus has allowed the community to contribute to the ecosystem by downloading, modifying, and forking the hardware design just like open source software. This has enabled bootstrapping a diverse and broad ecosystem for Project Olympus, making it the de facto open source cloud hardware design for the next generation of scale computing.

Today, we’re pleased to report that Project Olympus has attracted the latest in silicon innovation to address the exploding growth of cloud services and computing power needed for advanced and emerging cloud workloads such as big data analytics, machine learning, and Artificial Intelligence (AI). This is the first OCP server design to offer a broad choice of microprocessor options fully compliant with the Universal Motherboard specification to address virtually any type of workload.

We have collaborated closely with Intel to enable their support of Project Olympus with the next-generation Intel Xeon processors, code-named Skylake, and subsequent updates could include accelerators via Intel FPGA or Intel Nervana solutions.

AMD is bringing hardware innovation back into the server market and will be collaborating with Microsoft on Project Olympus support for their next generation “Naples” processor, enabling application demands of high performance datacenter workloads.

We have also been working on a long-term project with Qualcomm, Cavium, and others to advance the ARM64 cloud servers compatible with Project Olympus. Learn more about Enabling Cloud Workloads through innovations in Silicon.

In addition to multiple choices of microprocessors for the core computation aspects, there has also been tremendous momentum to develop the core building blocks in the Project Olympus ecosystem for supporting a wide variety of datacenter workloads.

Today, Microsoft is announcing with NVIDIA and Ingrasys a new industry standard design to accelerate Artificial Intelligence in the next generation cloud. The Project Olympus hyperscale GPU accelerator chassis for AI, also referred to as HGX-1, is designed to support eight of the latest “Pascal” generation NVIDIA GPUs and NVIDIA’s NVLink high-speed multi-GPU interconnect technology, and provides high-bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 chassis together. The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast-growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world.

Our work with NVIDIA and Ingrasys is just one of numerous stand-out examples of how the open source strategy of Project Olympus has been embraced by the OCP community. We are pleased by the broad support across industry partners that are now part of the Project Olympus ecosystem.

This is a significant moment as we usher in a new era of open source hardware development with the OCP community.  We intend for Project Olympus to provide a blueprint for future hardware development and collaboration at cloud speed. You can learn more and view the specification for Microsoft’s Project Olympus at our OCP GitHub branch.
Source: Azure