Microsoft at PostgresOpen 2017

Earlier this year at Microsoft Build, we announced the public preview of Azure Database for PostgreSQL. Since then, we have been engaging deeply with the PostgreSQL community and are proud to be involved with PostgresOpen 2017 as a sponsor.

During my keynote at PostgresOpen 2017, I’ll share more about how Microsoft is committed to meeting customers where they are, enabling them to achieve more with the technologies and tools of their choice. Azure Database for PostgreSQL, built on the community edition of PostgreSQL, offers built-in high availability, security, and on-the-fly scaling with minimal downtime. Developers can seamlessly migrate their existing apps without any changes and continue using their existing tools, while the simple pricing model lets them focus on developing apps.

Since introducing the preview a few months ago, we have been working closely with customers to understand their requirements and we continue to add features and updates as we move towards general availability. In addition to ensuring customer requirements are reflected in the product development, we continue to work closely with the PostgreSQL community, engage on pgsql-hackers mailing list, and work with the community on patches. PostgreSQL is a great product, with industry leading innovations in extensibility, and we hope to work with the community to make PostgreSQL even better for our customers.

“Spinning up the PostgreSQL database through the Azure portal was very easy. Then we just exported the database from the existing system and imported it into Azure Database for PostgreSQL. It only took two or three hours, and we really didn't run into any problems.”

– Eric Spear, Chief Executive Officer, Higher Ed Profiles

In addition to Azure Database for PostgreSQL, we also introduced Azure Database for MySQL. Developers using PostgreSQL and MySQL to build and deploy web, mobile, content management system (CMS), customer relationship management (CRM), business, or analytical applications can now choose their favorite database engine delivered as a managed service on Azure. Both services integrate seamlessly with the most common open source programming languages, such as PHP, Python, and Node.js, and with popular applications and frameworks such as WordPress, Magento, Drupal, Django, and Ruby on Rails. Whether you want to build a website backed by MySQL, or quickly build and deploy a geospatial web or mobile app with PostgreSQL, you can get set up in minutes using the managed service capabilities offered by Azure. In addition, app developers can continue to use familiar community tools to manage their MySQL or PostgreSQL databases. Azure Database for MySQL and Azure Database for PostgreSQL improve developer productivity by bringing the following differentiated benefits of relational database platform services to all applications:

The ability to provision a database server in minutes with built-in high availability that does not require any configuration, VMs, or setup.
Predictable performance with provisioned resources and governance.
The option to scale Compute Units up/down in response to actual or anticipated workload changes without application downtime.
Built-in security to protect sensitive data by encrypting user data and backups, as well as data in motion using SSL encryption.
Automatic backups with storage for recovery to any point up to 35 days.
Consistent management experience with Azure Portal, Command Line Interface (CLI), or REST APIs.
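For example, applications reach the service with standard PostgreSQL drivers and connection strings; the only service-specific details are the user@servername login format and SSL enforcement. A minimal sketch, assuming a hypothetical server name:

```python
# Sketch: building a connection string for Azure Database for PostgreSQL.
# The server name and login below are hypothetical placeholders.
server = "mydemoserver.postgres.database.azure.com"  # hypothetical server
user = "mylogin@mydemoserver"                        # user@servername format
dsn = (
    f"host={server} port=5432 dbname=postgres "
    f"user={user} password=<your-password> sslmode=require"
)
print(dsn)
# A standard driver such as psycopg2 would accept this DSN unchanged:
#   psycopg2.connect(dsn)
```

Because the service speaks the ordinary PostgreSQL wire protocol, existing tools (psql, pgAdmin) use the same connection details.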

Get started

Explore more with 5-minute quickstarts and step-by-step tutorials for Azure Database for PostgreSQL and Azure Database for MySQL. If you are ready to go, you can start with the $200 free credit you receive when you create a free Azure account.

Feedback

We would love to receive your feedback. Feel free to leave comments below. You can also engage with us directly through User Voice (PostgreSQL and MySQL) if you have suggestions on how we can further improve the service.
Source: Azure

Azure Management Libraries for .NET – v1.2

We have released version 1.2 of the Azure Management Libraries for .NET. This release adds support for additional security and deployment features, and for more Azure services:

Managed service identity
Create users in Azure Active Directory, update service principals and assign permissions to apps
Storage service encryption
Deploy Web apps and functions using MS Deploy
Network watcher service
Search service

Getting Started

You can download the 1.2 libraries from NuGet, or browse the source and samples on GitHub: https://github.com/azure/azure-sdk-for-net/tree/Fluent

Create a Virtual Machine with Managed Service Identity (MSI)

You can create a virtual machine with MSI enabled using a define() … create() method chain:

IVirtualMachine virtualMachine = azure.VirtualMachines.Define("myLinuxVM")
    .WithRegion(Region.USEast)
    .WithNewResourceGroup(rgName)
    .WithNewPrimaryNetwork("10.0.0.0/28")
    .WithPrimaryPrivateIPAddressDynamic()
    .WithNewPrimaryPublicIPAddress(pipName)
    .WithPopularLinuxImage(KnownLinuxVirtualMachineImage.UbuntuServer16_04_Lts)
    .WithRootUsername("tirekicker")
    .WithRootPassword(password)
    .WithSize(VirtualMachineSizeTypes.StandardDS2V2)
    .WithOSDiskCaching(CachingTypes.ReadWrite)
    .WithManagedServiceIdentity()
    .WithRoleBasedAccessToCurrentResourceGroup(BuiltInRole.Contributor)
    .Create();

You can manage any MSI-enabled Azure resources from a virtual machine with MSI and add an MSI service principal to an Azure Active Directory security group.

Add New User to Azure Active Directory

You can add a new user to Azure Active Directory using a define() … create() method chain:

IActiveDirectoryUser user = authenticated.ActiveDirectoryUsers
    .Define("tirekicker")
    .WithEmailAlias("tirekicker")
    .WithPassword("StrongPass!12")
    .Create();

Similarly, you can create and update users and groups in Azure Active Directory.

Enable Storage Service Encryption for a Storage Account

You can enable storage service encryption at a storage account level when you create a storage account using a define() … create() method chain:

IStorageAccount storageAccount = azure.StorageAccounts
    .Define(storageAccountName)
    .WithRegion(Region.USEast)
    .WithNewResourceGroup(rgName)
    .WithEncryption()
    .Create();

Deploy Web apps and Functions using MS Deploy

You can use MS Deploy to deploy Web apps and functions by using the Deploy() … Execute() method chain:

// Create a Web app
IWebApp webApp = azure.WebApps.Define(webAppName)
    .WithExistingWindowsPlan(plan)
    .WithExistingResourceGroup(rgName)
    .WithJavaVersion(JavaVersion.V8Newest)
    .WithWebContainer(WebContainer.Tomcat8_0Newest)
    .Create();

// Deploy a Web app using MS Deploy
webApp.Deploy()
    .WithPackageUri("link-to-bin-artifacts-in-storage-or-somewhere-else")
    .WithExistingDeploymentsDeleted(true)
    .Execute();

And…

// Create a function app
IFunctionApp functionApp = azure.AppServices.FunctionApps
    .Define(functionAppName)
    .WithExistingAppServicePlan(plan)
    .WithExistingResourceGroup(rgName)
    .WithExistingStorageAccount(storageAccount)
    .Create();

// Deploy a function using MS Deploy
functionApp.Deploy()
    .WithPackageUri("link-to-bin-artifacts-in-storage-or-somewhere-else")
    .WithExistingDeploymentsDeleted(true)
    .Execute();

Create Network Watcher and start Packet Capture

You can visualize network traffic patterns to and from virtual machines by creating and starting a packet capture with a define() … create() method chain, then downloading the capture and analyzing the traffic with open source tools:

// Create a Network Watcher
INetworkWatcher networkWatcher = azure.NetworkWatchers.Define(nwName)
    .WithRegion(Region.USEast)
    .WithNewResourceGroup(rgName)
    .Create();

// Start a Packet Capture
IPacketCapture packetCapture = networkWatcher.PacketCaptures
    .Define(packetCaptureName)
    .WithTarget(virtualMachine.Id)
    .WithStorageAccountId(storageAccount.Id)
    .WithTimeLimitInSeconds(1500)
    .DefinePacketCaptureFilter()
        .WithProtocol(PcProtocol.TCP)
        .Attach()
    .Create();

Similarly, you can programmatically:

Verify if traffic is allowed to and from a virtual machine.
Get the next hop type and IP address for a virtual machine.
Retrieve network topology for a resource group.
Analyze virtual machine security by examining effective network security rules applied to a virtual machine.
Configure network security group flow logs.

Create a Managed Cloud Search Service

You can create a managed cloud search service (Azure Search) with replicas and partitions using a define() … create() method chain:

ISearchService searchService = azure.SearchServices.Define(searchServiceName)
    .WithRegion(Region.USEast)
    .WithNewResourceGroup(rgName)
    .WithStandardSku()
    .WithPartitionCount(1)
    .WithReplicaCount(1)
    .Create();

Similarly, you can programmatically: 

Manage query keys.
Update search service with replicas and partitions.
Regenerate primary and secondary admin keys.

Try it

You can get more samples from our GitHub repo. Give it a try and let us know what you think (via email or comments below).
 
You can find plenty of additional info about .NET on Azure at https://docs.microsoft.com/en-us/dotnet/azure/.

Seamless cost reporting and analysis for Enterprise customers: now in preview

I’m thrilled to announce the preview release of Enterprise Cost Management within the Azure portal. With today’s release, Azure Enterprise Agreement (EA) users can view and analyze their subscription costs across different pivots, within the Azure portal. This addresses a top ask from our EA customers to empower subscription admins with visibility into their costs. We’ve also heard you on the need for more granular cost reporting by resource group and tag.

This is an initial step in our commitment to providing best-in-class cost management experiences within the Azure portal. Over the next few months, you will continue to see us roll out new capabilities to help you better manage your Azure costs and services.

In the remainder of this post, I walk you through some of the new features we’ve made available.

Try it today

All you have to do is sign in to the Azure portal at https://portal.azure.com. Go to Browse -> Subscriptions and click your subscription. You’ll see the charts below in the Overview blade.

 

The pie chart on the left gives you a quick idea of the top resources contributing to your costs. The burn rate chart on the right tells you your month-to-date costs and provides a forecast, enabling you to take corrective action early.
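The forecast in the burn rate chart can be thought of as extrapolating month-to-date spend to the end of the month. A minimal sketch of that idea (the portal’s actual forecasting model is not documented here, so this is only illustrative):

```python
import calendar
from datetime import date

# Sketch: a simple linear month-end forecast of the kind a burn rate
# chart provides. The real service may use a more sophisticated model.
def forecast_month_end(month_to_date_cost: float, today: date) -> float:
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    daily_rate = month_to_date_cost / today.day
    return daily_rate * days_in_month

# $150 spent by September 10 extrapolates to $450 for the 30-day month.
print(forecast_month_end(150.0, date(2017, 9, 10)))
```

Comparing this projection against your budget early in the month is what makes corrective action possible before costs overrun.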

Click into the pie chart or on Cost analysis in the Resource menu. This launches the Costs by resource report that gives you a breakdown of your costs by resource. Here you can filter costs by Resource Group, Tag, or Resource Type. If you spot excessive spend, you can right-click to navigate to a resource and view its activity logs or take corrective action. You can also download the results into a CSV file for offline analysis.
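Once downloaded, the CSV lends itself to quick offline aggregation. A small sketch (the column names below are illustrative, not the exact export schema):

```python
import csv, io
from collections import defaultdict

# Sketch: grouping exported costs by resource group offline.
# The CSV columns here are illustrative sample data.
sample = """resource,resourceGroup,cost
vm1,rg-web,12.50
db1,rg-data,30.00
vm2,rg-web,7.25
"""

totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample)):
    totals[row["resourceGroup"]] += float(row["cost"])

print(dict(totals))  # month-to-date cost per resource group
```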

Further, clicking into any resource launches the Cost history report. This enables you to spot trends and spikes in costs. You can get cost history for a resource group or resource, and group the results so you can easily identify what’s contributing to costs.

Note: in order to access these subscription-level costs in the Azure portal, the user needs to be a Billing Reader, Reader, Contributor, or Owner at the Subscription level.

To learn more about these capabilities, visit our help documentation here.

As mentioned above, this preview is just an early step in our efforts to build rich cost management experiences in the Azure portal, for Enterprise customers. Over the next few months, you will continue to see us roll out new capabilities to help you better manage your Azure costs and services. As always, we’d love to hear from you. Send us your feedback here.

Azure IoT Hub Device Provisioning Service preview automates device connection and configuration!

Today we're announcing the public preview of the Azure IoT Hub Device Provisioning Service! The Device Provisioning Service is a new service that works with Azure IoT Hub to enable customers to configure zero-touch device provisioning to their IoT hub. With the Device Provisioning Service, you can provision millions of devices in a secure and scalable manner, automating a process that has historically been time- and resource-intensive for manufacturers and companies managing large fleets of connected devices. The Device Provisioning Service is the only cloud service to provide complete automated provisioning, covering both registering the device to the cloud and configuring the device. The Device Provisioning Service is available in East US, West Europe, and Southeast Asia starting today, and will eventually be available globally.

Without a provisioning service, connecting devices to Azure IoT Hub requires manual work. Each device needs a unique identity to enable per-device access revocation in case the device is compromised. Doing this manually works for very few devices, but at IoT scale, you have to individually place connection credentials on each of millions of devices.
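To make the scale problem concrete: each device typically authenticates with a credential derived from its own key, such as the shared access signature format IoT Hub uses. A minimal sketch (the hub name and device key below are hypothetical placeholders):

```python
import base64, hashlib, hmac, time
from urllib.parse import quote

# Sketch: the per-device SAS token a device presents to IoT Hub.
# Generating and distributing one unique key per device is exactly
# the manual step that becomes unmanageable at millions of devices.
def generate_sas_token(resource_uri: str, device_key_b64: str,
                       ttl_seconds: int = 3600) -> str:
    expiry = int(time.time()) + ttl_seconds
    to_sign = f"{quote(resource_uri, safe='')}\n{expiry}".encode()
    key = base64.b64decode(device_key_b64)
    sig = base64.b64encode(hmac.new(key, to_sign, hashlib.sha256).digest()).decode()
    return (f"SharedAccessSignature sr={quote(resource_uri, safe='')}"
            f"&sig={quote(sig, safe='')}&se={expiry}")

token = generate_sas_token("myhub.azure-devices.net/devices/device-001",
                           base64.b64encode(b"not-a-real-key").decode())
print(token)
```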

A naïve way to go about solving the connection problem is to hardcode IoT Hub connection information in the device at manufacture time, but this only works in some scenarios. In many cases, complete provisioning requires information that was not available when the device was manufactured, such as who purchased the device or what the device is to be used for.

Even once the device is connected it still needs to be configured with its desired twin state and software and/or firmware updates. This is yet more work to account for when planning device deployments.

This is where the Device Provisioning Service saves customers a lot of time, helping get devices configured automatically during registration to IoT Hub. Device Provisioning Service contains all the information needed to provision a device, and the information can easily be updated later in the supply chain without having to unbox and re-flash the device.

Here are some of the provisioning scenarios the Device Provisioning Service enables:

Zero-touch provisioning to a single IoT solution without requiring hardcoded IoT Hub connection information in the factory (initial setup).
Automatically configuring devices based on solution-specific needs.
Load balancing devices across multiple hubs.
Connecting devices to their owner’s IoT solution based on sales transaction data (multitenancy).
Connecting devices to a particular IoT solution depending on use-case (solution isolation).
Connecting a device to the IoT hub with the nearest geo-location.
Re-provisioning based on a change in the device such as a change in ownership or location.

All these scenarios are achievable today through the Device Provisioning Service using the same basic flow:

 

Device Provisioning Service works best with devices using Hardware Security Modules (HSMs) to securely store their keys. HSMs provide the maximum amount of security for key storage, and the updated device SDK makes it incredibly easy to establish a root of trust between the device and the cloud using an HSM. Microsoft has partnerships with several HSM manufacturers, and you can read about the partnerships and HSM options we have on the Azure blog. Even if your device is incapable of using an HSM, it can still connect to Device Provisioning Service. You can learn about how to use a simulated TPM or a software-based x509 certificate here.

Learn more about all the technical concepts involved in device provisioning.

Azure IoT is committed to offering you services which take the pain out of deploying and managing an IoT solution in a secure, reliable way. You can create your own Device Provisioning Service on the Azure portal, and you can check out the device SDK on GitHub. Learn all about the Device Provisioning Service and how to use it in the documentation center. We would love to get your feedback on secure device registration (that's the point of the preview!), so please continue to submit your suggestions through the Azure IoT User Voice forum.

General availability of App Service on Linux and Web App for Containers

Applications are changing the pace of business today – from delivering amazing customer experiences, to transforming internal operations. To keep pace, developers need solutions that help them quickly build, deploy and scale applications without having to maintain the underlying web servers or operating systems. Azure App Service delivers this experience and currently hosts more than 1 million cloud applications. Using its powerful capabilities such as integrated CI/CD, deployment slots and auto scaling, developers can get applications to the end users much faster; and today we’re making it even better. 

I am pleased to announce that Azure App Service is now generally available on Linux, including its Web App for Containers capability. With this, we now offer built-in image support for ASP.NET Core, Node.js, PHP and Ruby on Linux, as well as provide developers an option to bring their own Docker formatted container images supporting Java, Python, Go and more.

In Azure, we continue to invest in providing more choices that help you maximize your existing investments. Supporting Azure App Service on Linux is an important step in that direction.

High productivity development

To accelerate cloud applications development, you can take advantage of the built-in images for ASP.NET Core, Node.js, PHP and Ruby, all running on Linux, letting you focus on your applications instead of infrastructure. Just select the stack your web app needs, we will set up the application environment and handle the maintenance for you. If you want more control of your environment, simply SSH into your application and get full remote access to administrative commands.

Pre-built packages including WordPress, Joomla and Drupal solutions are also available in Azure Marketplace and can be deployed with just a few clicks to App Service.

Ease of deployment

With the new App Service capability, Web App for Containers, you can get your containerized applications to production in seconds. Simply push your container image to Docker Hub, Azure Container Registry, or your private registry, and Web App for Containers will deploy your containerized application and provision required infrastructure. Furthermore, whenever required, it will automatically perform Linux OS patching and load balancing for you.

Apart from the portal, you also have the option to deploy to App Service using CLI or Azure Resource Management templates.

Built-in CI/CD, scale on demand

Azure App Service on Linux offers built-in CI/CD capabilities and an intuitive scaling experience. With a few simple clicks, you can integrate with GitHub, Docker Hub or Azure Container Registry, and realize continuous deployment through Jenkins, VSTS or Maven.

Deployment Slots let you easily deploy to target environments, swap staging to production, schedule performance and quality tests, and roll-back to previous versions with zero downtime.

After you promote the updates to production, scaling is as simple as dragging a slider, calling a REST API, or configuring automatic scaling rules. You can scale your applications up or down on demand or automatically, and get high availability within and across different geographical regions.
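The scaling rules configured in the portal are stored as an autoscale settings resource that can also be managed through the REST API. A rough sketch of such a resource (property names follow the Microsoft.Insights autoscaleSettings schema; the resource ID, metric, and thresholds are illustrative placeholders, so verify against the current API reference):

```json
{
  "location": "East US",
  "properties": {
    "profiles": [
      {
        "name": "default",
        "capacity": { "minimum": "1", "maximum": "10", "default": "1" },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "CpuPercentage",
              "metricResourceUri": "<app-service-plan-resource-id>",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 70
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT10M"
            }
          }
        ]
      }
    ]
  }
}
```

Dragging the portal slider and calling the REST API both end up updating the same underlying settings.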

To get started with Azure App Service on Linux, check out the use cases and try App Service for free. Want to learn more? Sign up for our upcoming webinar focused on containerized applications. You can also join us and thousands of other developers at Open Source Summit North America. For more information and updates, follow @OpenAtMicrosoft. 

Overview on Web App for Containers and Azure App Service on Linux

Today, we announced the general availability of Azure App Service on Linux and Web App for Containers. Providing Linux support in Azure App Service is an important step in our commitment to support a variety of OSS stacks on the platform. We started on this journey intending to support OSS workloads running natively on Linux VMs inside Azure App Service. Along the way, it became obvious that a substantial set of customers are interested in bringing their Docker formatted containers to the Azure App Service platform. This provided us with a unique opportunity to address that scenario with the Web App for Containers offering.

Let’s look into these offerings to understand which one to use, and when.

Azure App Service on Linux (Web App with built-in images)

The built-in image option running on Linux is an extension of the current Azure App Service offering, catering to developers who want to use FTP or Git to deploy .NET Core, Node.js, PHP, or Ruby applications to Azure App Service running on Linux. This is a vanilla App Service scenario powered by a Linux OS. Check out our Quick Start article to deploy .NET Core applications running on App Service on Linux. Below is an example of one of the many architecture patterns used by customers of this offering.

The underlying architecture is based on running Docker containers, but these are abstracted away by the platform in the form of built-in runtime images that Microsoft manages and maintains on the customer’s behalf. All of the built-in Docker images are open source on GitHub and available on Docker Hub. We plan to extend the list by adding built-in images for other OSS frameworks such as Java and Python in future releases.

Web App for Containers

Web App for Containers caters more to developers who want control over not just the code, but also the packages, runtime framework, tooling, etc. installed in their containers. This offering essentially brings years’ worth of Azure App Service PaaS innovations to the community by allowing developers to focus on composing their containers, without worrying about managing and maintaining an underlying container orchestrator. Customers of this offering prefer to package their code and dependencies into containers using CI/CD systems like Jenkins, Maven, Travis CI, or VSTS, alongside setting up continuous deployment webhooks with App Service.

Below is an example of one of the many architecture patterns used by customers of this offering.

 

Let's take a lap around the Web App for Containers offering using the Azure CLI.


Here is a quick demo of how to create a Web App for Containers:

Now, let’s see how easy it is to create a staging slot and set up CI/CD with Docker Hub:

Finally, scale out anywhere from 1 to 10 instances:

Be sure to check out our FAQs. To keep up with investments and improvements to the service, check out the release notes and submit your feature ideas in UserVoice, just make sure to add [Linux] at the beginning of the title.

Proactively monitoring cloud operations with Microsoft Azure Stack

Introduction

As an Azure customer, you enjoy its capability to deploy and manage workloads across many different services and regions. You are responsible for managing those Azure resources, including monitoring for problems. However, the underlying hardware and software supporting Azure (e.g. Azure’s physical hosts and network) are managed and monitored by Microsoft engineers.

Azure Stack runs as an integrated system in your own datacenter so the model is different. When you adopt Azure Stack, you enjoy the same capability to provision and consume workloads, but because Azure Stack services and hardware reside in your datacenter, you are responsible for managing and monitoring the Azure Stack environment to ensure system health and reliability. These tasks are taken care of by a new role in your organization – the Azure Stack operator.

The role of the Azure Stack operator

The Azure Stack operator is responsible for the integration, service provisioning, and life cycle of Azure Stack. Because Azure Stack is deployed as a hyper-converged, integrated system, it behaves like an appliance. As an appliance, many of the complexities and deep subject matter expertise of previous cloud technology solutions are minimized.

Once the solution is deployed in your datacenter, most regular maintenance tasks of the Azure Stack operator are typically isolated or infrequent configuration tasks such as managing plans and quotas and provisioning and managing Azure services and content. A big part of the Azure Stack operator’s role is responding to changes or issues within the datacenter.

In Azure Stack, we expect the Azure Stack operator’s work to be driven largely by alerts related to changes or issues. We have designed the monitoring and health system of Azure Stack so that Azure Stack operators get effective and relevant alerts along with specific remediation actions. The alerts inform the Azure Stack operator about the status of the Azure Stack infrastructure and provide directions for what actions are required.

Monitoring Azure Stack software infrastructure

When discussing monitoring and the Azure Stack software infrastructure, we typically talk about the following layers:

Azure Resource Manager (ARM) layer. The public and administrative portals, the ARM APIs, and the ARM components which implement those APIs.
Resource Provider (RP) layer. The foundational RPs that offer Azure Resources (e.g. IaaS services) to tenants as well as the Azure Stack specific RPs to provide infrastructure management capabilities such as the Health RP (HRP), which provides health state and alerting information.
Infrastructure control layer. The infrastructure roles that handle the requests from the RPs and turn them into actions within the system. The infrastructure roles don’t need regular management but may require a restart in certain circumstances. The infrastructure roles are supported by infrastructure role instances, which likewise may require a restart in certain circumstances.
Hardware layer. The network switches and servers that make up the computing and storage devices in Azure Stack.

Figure 1 provides an overview of the various layers and components of Azure Stack.

Figure 1 – Overview of Azure Stack infrastructure

To monitor the Azure Stack software infrastructure (the top three layers in the picture above), we use the Azure Monitoring agent. It collects events specific to each component and forwards them to local Azure Stack storage accounts in the default provider subscription. The Health RP and its services then raise alerts for those events, which are visible in the Azure Stack portal and available through both a REST API and PowerShell queries.

When issues arise, it is the Azure Stack operator’s responsibility to resolve them quickly to avoid or minimize impact to tenants. To help Azure Stack operators get to the root of an issue faster, we designed and reviewed the component alerts for content, severity, and consistency, and to ensure that each alert offers a clear understanding of its impact and the steps necessary to resolve the issue.

As an example of an alert scenario with clear actions for the Azure Stack operator, see Figure 2 below. This alert informs the Azure Stack operator that one of the infrastructure role instances is unavailable and provides the ability to click on “AZS-CA01” to navigate directly to the blade which will allow them to take the start or restart actions.

Figure 2 – Azure Stack Alert Example

Other examples of alerting scenarios for the Azure Stack infrastructure software include:

RP and infrastructure role availability
Capacity of compute memory, storage, and available public IPs
Node availability

Monitoring Azure Stack hardware components

The Azure Stack operator is also responsible for maintaining the health of the Azure Stack hardware components. The monitoring solution exposed by the Health RP handles the health state and alerting for the Azure Stack software infrastructure. For the lower levels of that stack, it has a subsystem to provide alerts for failed physical disks, network cards, and nodes, shown in Figure 3.

For hardware monitoring of the nodes (physical servers), an external solution is available from Azure Stack solution providers. These solutions monitor the nodes using agentless communication with the baseboard management controllers to raise alerts for failed power supplies, fans, temperature sensors, and other standard node hardware.

Similarly, the network switches also require external monitoring using datacenter monitoring tools either in your environment or acquired through the Azure Stack solution provider.

Monitoring hardware using an external solution is a best practice to ensure that alerts occur in cases where hardware failure(s) result in a software failure that delays or prevents alert generation.

Figure 3 – Hardware monitoring overview

Integrate with datacenter monitoring systems

Azure Stack is a fully integrated system. It does not allow the installation of any agent on its physical or infrastructure components (tenants are able to add any agent they want to their tenant VMs).  When integrated into customers’ existing management systems, Azure Stack is surfaced as a single (integrated) system, the underlying components are not exposed. Said another way, the internals of Azure Stack are internal.

For customers to get a single view of all alerts from their Azure Stack deployments and devices, as well as to integrate alerts into existing IT service management workflows for ticketing, Azure Stack supports integration with external datacenter monitoring solutions using either the Health RP Rest API or PowerShell access.
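As a sketch of such an integration, the JSON returned by a Health RP query can be mapped into whatever ticket shape your service management tool expects (the alert fields below are illustrative, not the exact REST schema):

```python
# Sketch: mapping a Health RP alert (field names are illustrative,
# not the exact REST schema) into a ticket record for an ITSM system.
def alert_to_ticket(alert: dict) -> dict:
    severity_map = {"Critical": 1, "Warning": 2, "Informational": 3}
    return {
        "title": f"[Azure Stack] {alert['title']}",
        "priority": severity_map.get(alert["severity"], 3),
        "description": alert.get("remediation", ""),
        "source": alert.get("impactedResource", "unknown"),
    }

sample_alert = {
    "title": "Infrastructure role instance unavailable",
    "severity": "Critical",
    "remediation": "Restart the instance from the portal blade.",
    "impactedResource": "AZS-CA01",
}
print(alert_to_ticket(sample_alert))
```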

An example of integration support for monitoring Azure Stack deployments using Nagios is provided via an open source connector available for download from the Nagios plug-in directory. Additionally, support for monitoring Azure Stack deployments with System Center Operations Manager is available via the Azure Stack management pack.

Figure 4 below demonstrates the integration point of the Health RP alerts, network switches, and hardware monitoring.

Figure 4 – Azure Stack integration with datacenter monitoring systems.

Next steps

Download and install the Azure Stack Development Kit today and get familiar with viewing alerts and health. To learn more about how to find alerts in Azure Stack, see Monitoring health and alerts in Azure Stack.

More Information

At Ignite this year in Orlando, we will have a series of sessions that will educate you on all aspects of Azure Stack. See our list of sessions and register here.

For more information on operating and monitoring Azure Stack check out this session, BRK3127 Operating principles of Azure Stack.

We are always looking for feedback and if you want to talk to us directly please sign up here.

Microsoft joins IC3 in advancing blockchain enterprise readiness

Microsoft has just become a member of IC3, The Initiative for CryptoCurrencies & Contracts. We are excited to collaborate with this team of world class experts in cryptography, game theory, distributed systems, programming languages, and system security. Their work is aligned to five areas, which they call Grand Challenges, that form the basis of blockchain enterprise readiness: Scaling & Performance, Correctness by Design, Confidentiality, Authenticated Data Feeds, and Safety & Compliance.

When I first set foot in the IC3 office, it was in a temporary office space near Chelsea Market in New York. Cavernous, unorganized, and teeming with students, it was hard to find the focus of the space. I was there for a meetup on Town Crier, presented by Ari Juels. Having been invited by Jim Ballingall, Executive Director of IC3, I was there to meet some folks, see what they are about, and do what you generally do at meetups. This was after having tried unsuccessfully to recruit Andrew Miller to join our crypto team over in Microsoft Research earlier in the year. Andrew and I had a great Skype call while I was attending Microsoft’s annual //BUILD event in San Francisco, where we would first launch the ConsenSys team’s work to integrate the Ethereum smart contract language, Solidity, with Visual Studio. Note: there are now three Solidity extensions available for Visual Studio and Visual Studio Code. Ultimately Andrew decided to stay in academia and landed at the University of Illinois at Urbana-Champaign, coincidentally where I was born, but that’s another story.

Watching Ari’s Town Crier presentation, it immediately struck me as approaching a problem we were also trying to solve at Microsoft. Our approaches were different, but ultimately this work aligned with how we thought about some common problems in the blockchain space, trusted data feeds (often called oracles in blockchain lingo) and confidential queries via smart contracts. It also leveraged a new capability called secure enclaves to achieve some of this. Interestingly, we had similar work going on with two projects we’ve announced that handle these two concerns in different ways. The first project is Enterprise Smart Contracts, originally announced as Cryptlets in June 2016, which addresses off chain smart contract execution or the integration with legacy data or oracles. Learn more by exploring a deep dive of Project Bletchley – The Cryptlet Fabric, inclusive of a pictorial comparison to oracles.

The second project is the Coco Framework, which addresses the needs of enterprise consortium networks for scalability, flexible confidentiality, and improved governance. Ledgers integrated with Coco allow competing companies to participate on the same blockchain and ensure the confidentiality of their transactions, shared only among the parties they choose to include. Other projects, such as the Enterprise Ethereum Alliance’s fork of Quorum, do this in different ways. Coco does this using secure enclaves, or trusted execution environments, delivered either at the chip level using Intel SGX, or in software using Windows Virtual Secure Mode. We announced this project on August 10, 2017, along with partners Intel, Mojix, and JP Morgan’s Amber Baldet, representing Quorum.

"Microsoft's Coco Framework represents a breakthrough in achieving highly scalable, confidential, permissioned Ethereum or other blockchain networks that will be an important construct in the emerging world of variously interconnected blockchain systems." – Joseph Lubin, Founder of ConsenSys

Some of our team members were lucky enough to attend the July 2017 IC3-Ethereum Crypto Boot Camp at Gates Hall, Cornell University along with other industry collaborators.

Wind the clock forward, and we have joined the IC3 team just as they move into their new offices on Roosevelt Island for the fall semester at Cornell Tech. We continue to be excited about the work the IC3 team is doing and look forward to collaborating more closely, including at their upcoming events such as the October 2017 retreat and Blockchain Workshop.

August 2017 Leaderboard of Database Systems contributors on MSDN

Congratulations to our August top-10 contributors! Alberto Morillo maintains his first position in the Cloud ranking, while Erland Sommarskog climbs to the top of the All Databases ranking.

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):


Run Hortonworks clusters and easily access Azure Data Lake

Enterprise customers love Hortonworks for running Apache Hive, Apache Spark, and other Apache Hadoop workloads. They also love the value that Azure Data Lake Store (ADLS) provides: high-throughput access to cloud data of any size, easy and secure sharing via its true hierarchical file system, POSIX ACLs along with role-based access control (RBAC), and encryption at rest.

Azure HDInsight managed workloads – which offer built-in integration with and access to ADLS – vastly simplify the management of enterprise clusters. Customers have a choice, though, and some Hortonworks customers choose to customize and manage their own clusters deployed directly on Azure cloud infrastructure; those deployments need direct access to ADLS.

With the recent announcement of Hortonworks Data Platform (HDP®) 2.6.1 with Azure Data Lake Store support, now customers can do just that. Customers can deploy HDP clusters and easily access and interoperate HDP with ADLS data. With HDP 2.6.1 and its access to ADLS, we bring another way for our customers to realize the business value of their data. Here’s how some customers are enriching key scenarios:

One or more Hortonworks clusters can access data in the same Azure Data Lake.
On-premises clusters can directly access data in ADLS facilitating access to data in the cloud using standard Hadoop utilities, like DistCp.
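As a sketch of the second scenario, an on-premises cluster whose Hadoop configuration already holds ADLS service-principal credentials can copy data into the Data Lake with standard DistCp. The account name, host names, and paths below are hypothetical placeholders:

```shell
# Copy a directory from the on-premises HDFS cluster into Azure Data Lake Store.
# Assumes core-site.xml on the cluster contains the ADLS OAuth2 credentials
# (client id, credential, and token refresh URL for an Azure service principal).
hadoop distcp \
  hdfs://onprem-namenode:8020/data/events \
  adl://myadls.azuredatalakestore.net/clusters/shared/events

# Verify the copy with a standard Hadoop filesystem listing.
hdfs dfs -ls adl://myadls.azuredatalakestore.net/clusters/shared/events
```

Because ADLS is exposed through the `adl://` file system scheme, the same utilities that work against HDFS paths work against Data Lake paths.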

In addition to using HDP directly, Hortonworks is also making Cloudbreak for Hortonworks Data Platform available via the Azure Marketplace. Cloudbreak for Hortonworks Data Platform simplifies the provisioning, management, and monitoring of HDP clusters in cloud environments. This is a great way to get started trying HDP and HDP + ADLS.

Ready to get started with HDP and ADLS?

You can start to deploy HDP clusters with ADLS today, directly from the Azure Marketplace, using Cloudbreak for Hortonworks Data Platform. After you set up your ADLS account, follow the instructions to launch Cloudbreak on Azure and create a cluster, adding your ADLS as a file system. Once the cluster is deployed, you can give it access to all, or part of, your Azure Data Lake. Then try it out using some simple commands.
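For example, once the cluster can reach your Data Lake, the familiar Hadoop file system commands work against `adl://` paths. The account name and paths here are placeholders for illustration:

```shell
# List, write, and read files in the attached Data Lake Store
hdfs dfs -ls adl://myadls.azuredatalakestore.net/
hdfs dfs -mkdir adl://myadls.azuredatalakestore.net/demo
hdfs dfs -put localfile.csv adl://myadls.azuredatalakestore.net/demo/
hdfs dfs -cat adl://myadls.azuredatalakestore.net/demo/localfile.csv
```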

Alternatively, visit the documentation for information on how to custom deploy your Hortonworks cluster with ADLS. See also the recent Azure blog post, "Hortonworks extends IaaS offering on Azure with Cloudbreak," to find out more about Cloudbreak.
