Azure Container Registry preview

In today’s cloud-first world, applications are the vital ingredients that fuel innovation and productivity. Whether you are lifting and shifting apps to the cloud or building new cloud-native apps, containers are an attractive option for efficiently packaging software applications and deploying them as quickly as the business demands. Containers let engineers focus on innovation and new features rather than worrying about how their code will be deployed, and they let IT pros adjust to seasonal demand fluctuations by easily scaling up and down. In a nutshell, containers, and the ecosystem developing around them, will empower organizations to create the next generation of application experiences and ship those innovations at the speed of business. A critical component of this container ecosystem is the container registry, which lets users store, manage, and retrieve container images.

In April 2016, Microsoft made Azure Container Service (ACS) generally available, and in June 2016 announced container support for Azure Batch and Azure Service Fabric. Today, we are announcing major upgrades to Azure's container offerings, making Azure the cloud with the broadest support for container deployments. This includes a preview of Azure Container Registry (ACR), a private registry for hosting container images. Using the Azure Container Registry, customers can store Docker-formatted images for all types of container deployments. Azure Container Registry integrates well with orchestrators hosted in Azure Container Service, including Docker Swarm, DC/OS, and Kubernetes, and users can benefit from familiar tooling capable of working with the open source Docker Registry v2.

Use cases for the Azure Container Registry include:

Store and manage images for all types of container deployments

Docker is becoming the new binary format for deployments. Development and operations teams can manage the configuration of their app in isolation from the configuration of the hosting environment. Containers aren't just deployed to highly scalable orchestration systems like Mesosphere DC/OS, Docker Swarm, and Kubernetes, but to all types of deployments: Azure App Service, Azure Batch, Service Fabric, and other services are coming online that support containers as their deployment model. Regardless of where you deploy containers, you'll need a place to store and manage the images. Using the Azure Container Registry, you can store your images for all types of container deployments.

Automated Container Builds, Testing and Security Scanning

Using Visual Studio Team Services, developers can automate the process of compiling their code in containers, building Docker images, and deploying them to the Azure Container Registry. With partners like TwistLock, you can rest assured that your image-building process produces secure images as they are deployed to the Azure Container Registry, and that your deployment environments like ACS are protected by securing each node in the cluster.

Store your container image in local, network-close storage on Azure

The Azure Container Registry provides local, network-close storage of your container images. By creating a registry in the same datacenter as your deployments, you reduce network latency and avoid ingress/egress charges.

Use Common Command Line Interface (CLI) to interact with the registry

Benefit from familiar, open source CLI tools such as docker login, docker push, and docker pull. You don't need to learn new APIs or commands to work with the registry; any tooling capable of working with the open source Docker Registry will work.
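As a concrete sketch of that workflow (the registry name `myregistry` and repository path here are hypothetical), an ACR registry is addressed through its login server: the registry name under the `azurecr.io` domain. The helper below builds the fully qualified image name and the usual push sequence:

```python
def acr_image(registry: str, repository: str, tag: str = "latest") -> str:
    """Build the fully qualified image name for an Azure Container Registry.

    ACR registries are addressed through their login server, which is the
    registry name followed by the azurecr.io domain.
    """
    return f"{registry}.azurecr.io/{repository}:{tag}"


def push_commands(registry: str, repository: str, tag: str = "latest") -> list:
    """Return the standard Docker CLI steps to push a local image to ACR."""
    remote = acr_image(registry, repository, tag)
    return [
        f"docker login {registry}.azurecr.io",      # authenticate against the registry
        f"docker tag {repository}:{tag} {remote}",  # retag the local image for ACR
        f"docker push {remote}",                    # upload the image layers
    ]


if __name__ == "__main__":
    for cmd in push_commands("myregistry", "web/frontend", "v1"):
        print(cmd)
```

Pulling on a deployment host is then just `docker pull myregistry.azurecr.io/web/frontend:v1`.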

Use Azure Active Directory to manage access, including Service Principals for headless connections like automated CI/CD and vulnerability scanning

Rest assured your credentials are safely managed using Azure Active Directory. The Public Preview will support Azure Active Directory Service Principal-backed authentication for basic auth flows, including role-based access for read-only, write and owner permissions.

Manage Windows and Linux container images in a single registry

Azure Container Registry can manage both Windows and Linux images, giving you the flexibility to choose the platform and workloads to run within your containers.

These innovations demonstrate our continued investment in the container ecosystem and highlight our unique strategy: Azure is the only public cloud whose container orchestration service offers a choice of open source orchestration technologies (DC/OS, Docker Swarm, and Kubernetes). Azure Container Registry amplifies our strategy to make it easier for organizations to adopt containers in the cloud.

Customers will be able to access the preview of Azure Container Registry on Nov. 16 — watch for more details at Microsoft Connect();!
Source: Azure

Azure DocumentDB updates: quick start experience, backup and restore, and firewall support

Over the past couple of weeks we released a number of improvements to the developer experience and capabilities of DocumentDB. We added a new quick start experience that helps you get up and running with a working app on DocumentDB in seconds. We launched a preview of backup/restore and inbound firewall capabilities, and released numerous runtime improvements, including expanded support for geospatial types.

Quick start

One important characteristic of any service is the time it takes to get a working app up and running. We released a new quick start experience that gives you a personalized, ready-to-run sample app connected to your newly created DocumentDB account in seconds. Create a new DocumentDB account, or click the Quick start menu item in an existing account, and give it a try.

For accounts with MongoDB API support, we have added a handy code snippet for all major platforms, with all the necessary configuration to get you started, including which connection string to use.

Backup and restore

DocumentDB is built with high availability and global distribution at its core: it allows you to scale throughput across multiple Azure regions, with policy-driven failover and transparent multi-homing APIs. As a database system offering a 99.99% availability SLA, DocumentDB durably commits all writes with a quorum of replicas within the local datacenter and replicates them across all the regions associated with your DocumentDB account.

DocumentDB also automatically takes backups of all your data at regular intervals, without affecting the performance or availability of your database operations. All backups are stored separately in another storage service and further geo-replicated for resiliency against regional disasters. Customers can now request a restore of their databases and collections from a backup by contacting Azure support. Below is an illustration of the periodic backups to GRS Azure Storage that DocumentDB performs for all entities.

Learn more about backup and restore for DocumentDB.

Firewall support

In response to popular customer demand, we recently added support for IP filtering and firewall rules in DocumentDB, for both the DocumentDB and MongoDB APIs. Customers can configure their DocumentDB account to allow traffic only from a specified list of individual IP addresses and IP address ranges. Once this configuration is applied, all requests originating from machines outside the allowed list are blocked by DocumentDB. The connection processing flow for IP-based access control is described in the following diagram.
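In code terms, the check reduces to testing the caller's address against the configured rules. A minimal sketch (the addresses and ranges below are illustrative, not real account settings):

```python
import ipaddress


def is_allowed(client_ip: str, allow_list: list) -> bool:
    """Return True if client_ip matches any entry in the allow list.

    Entries may be individual addresses ("13.91.0.7") or CIDR ranges
    ("10.0.0.0/24"), mirroring the two kinds of rules described above.
    """
    ip = ipaddress.ip_address(client_ip)
    return any(
        ip in ipaddress.ip_network(entry, strict=False)
        for entry in allow_list
    )


# Illustrative allow list: one individual address, one range.
rules = ["13.91.0.7", "10.0.0.0/24"]
```

A request from `10.0.0.200` would pass this check; one from `192.168.1.5` would be blocked.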

Learn more about DocumentDB firewall support.

Expanded geospatial support

With a recent service update, Azure DocumentDB now supports geospatial indexing and querying of Polygon and LineString objects, in addition to the Point object. DocumentDB can automatically detect GeoJSON fragments that contain Polygon and LineString objects within your documents, and index them for efficient spatial proximity queries.

Spatial querying of Polygon and LineString objects is commonly used to detect "geo-fences" in IoT, telematics, gaming, and mobile applications. You can enable or disable spatial indexing by changing the indexing policy on a per-collection basis. Read more about working with geospatial data in DocumentDB.
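As a sketch of the geo-fence predicate that a query such as `SELECT * FROM c WHERE ST_WITHIN(c.location, @fence)` evaluates, here is a planar ray-casting point-in-polygon test. (This is a simplified stand-in: DocumentDB actually evaluates GeoJSON coordinates on the Earth's surface, not a flat plane.)

```python
def point_in_polygon(point, ring):
    """Ray-casting test: is a GeoJSON-style [lng, lat] point inside a polygon ring?

    `ring` is a closed list of [lng, lat] vertices (first == last), like the
    outer ring of a GeoJSON Polygon. Casting a horizontal ray from the point:
    an odd number of edge crossings means the point is inside the fence.
    """
    x, y = point
    inside = False
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            # x-coordinate where the fence edge crosses the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


# A square geo-fence around the origin (illustrative coordinates).
fence = [[0, 0], [4, 0], [4, 4], [0, 4], [0, 0]]
```

With this fence, `point_in_polygon([2, 2], fence)` reports the device inside the geo-fence, while `[5, 5]` falls outside.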

 

We hope you find this new functionality useful as you build your solutions on top of DocumentDB. As always, let us know how we are doing and what improvements you'd like to see going forward through UserVoice, Stack Overflow (azure-documentdb tag), or Twitter @DocumentDB.
Source: Azure

Azure Blueprint takes on DoD Level 4

I am pleased to announce the release of the Azure Blueprint for the Department of Defense (DoD). Azure Blueprint recently released documentation to streamline the path for Azure Government customers working with the Federal Risk and Authorization Management Program (FedRAMP) Moderate Baseline to attain Authorizations to Operate (ATO).

Azure Blueprint has expanded to support our DoD customers working in Azure Government to document their customer security responsibilities. The Azure Blueprint Customer Responsibilities Matrix (CRM) and System Security Plan (SSP) template can now be used by DoD mission owners and third-party providers building systems on behalf of DoD customers.

The DoD migration to the cloud has been guided by the Department of Defense Security Requirements Guide (SRG) Version 1 Release 2. All cloud systems must meet the security standards outlined in the SRG for use by DoD customers. The Cloud Computing SRG breaks down requirements into impact levels, covering specific data classifications that are adequately protected at each level.

As announced on June 23, 2016, Azure Government has been granted a provisional authorization (PA) at the DoD Impact Level 4 for processing of controlled unclassified information (CUI) and mission critical data. This includes export controlled data, protected health information, privacy information, and others (e.g. FOUO, SBU, etc.). Since that announcement, we have been working with DoD customers to help them understand the Azure Government security protections and work through their security responsibilities. Azure Blueprint now provides these DoD customers with a simplified way to understand the scope of their security responsibilities when architecting solutions in Azure.

We look forward to providing DoD L5 Azure Blueprints once we attain our Impact Level 5 PA for the Microsoft Azure Government DoD Regions and expanding our footprint as the most trusted cloud.

For any questions, and to request access to these documents, please e-mail AzureBlueprint@microsoft.com.

We welcome your comments and suggestions to help us continually improve your Azure Government experience. To stay up to date on all things Azure Government, be sure to subscribe to our RSS feed and to receive emails, click “Subscribe by Email!” on the Azure Government Blog. To experience the power of Azure Government for your organization, sign up for an Azure Government Trial.
Source: Azure


Project Bletchley – Blockchain comes to Azure Marketplace

Only a couple of weeks after our most recent update, I am pleased to be back to announce more great additions to our blockchain offering on Azure, along with new partner solutions. We continue to expand on our blockchain infrastructure work to improve the services, tools, and best practices needed to design, build out, and manage complex consortium networks for developing new business applications.

This week we are excited to bring Bletchley v1 to the Azure Marketplace. As you may recall, with the first phase of blockchain support on Azure, you can quickly and easily deploy a multi-node consortium blockchain network. This release offers all the same functionality as the original release in the Azure Quickstart templates, but with a more robust user experience integrated directly into the Azure portal.

Since we try to never release without new functionality, we have also added support for:

A dozen consortium members: You can now deploy a blockchain network with a dozen consortium members.
Premium storage: To support low latency and high throughput applications, you can configure the nodes within the consortium network to leverage premium storage backed virtual machines.
Password or SSH key: To secure the nodes within your network, you can now specify an SSH key instead of a password.

For more information about the solution, you can visit our detailed walkthrough.

In addition to our solutions, we continue to grow our blockchain ecosystem on Azure.  We are excited to welcome many new exciting partner blockchain solutions in the Azure Marketplace, including:

Chain: As announced last week, you can now deploy Chain's distributed ledger technology, Chain Core, on Azure.
Ethereum Studio: You can quickly set up ether.camp's Ethereum stack, a full stack developer sandbox to develop and test Ethereum solutions, on Azure.

Try out all the latest blockchain releases and let us know if you have any questions, feedback, or additional requests. We are excited to continue on this journey with you.
Source: Azure

General availability: Azure cool blob storage in additional regions

Azure Blob storage accounts with hot and cool storage tiers are generally available in six new regions: US East, US West, Germany Central, Germany Northeast, Australia Southeast and Brazil South. You can find the updated list of available regions on the Azure services by region page.

Blob storage accounts are specialized storage accounts for storing your unstructured data as blobs (objects) in Azure Storage. With Blob storage accounts, you can choose between hot and cool storage tiers to store your less frequently accessed (cool) data at a lower storage cost, and store more frequently accessed (hot) data at a lower access cost.

Customers in the new regions can take advantage of the cost benefits of the cool storage tier for storing backup data, media content, scientific data, active archival data—and in general, any data that is less frequently accessed. For details on how to start using this feature, please see our getting-started documentation.
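The hot/cool trade-off can be sketched as a simple break-even calculation. The per-GB prices below are made-up placeholders, not published Azure rates; see the pricing page for real numbers:

```python
def monthly_cost(gb_stored: float, gb_read: float,
                 storage_price: float, read_price: float) -> float:
    """Illustrative monthly bill: per-GB storage charge plus per-GB read charge."""
    return gb_stored * storage_price + gb_read * read_price


def cheaper_tier(gb_stored: float, gb_read: float,
                 hot=(0.024, 0.0),        # (storage $/GB, read $/GB) -- placeholder prices
                 cool=(0.010, 0.010)) -> str:  # cool: cheaper at rest, charged on access
    """Pick the cheaper tier for a given monthly access pattern."""
    hot_cost = monthly_cost(gb_stored, gb_read, *hot)
    cool_cost = monthly_cost(gb_stored, gb_read, *cool)
    return "hot" if hot_cost <= cool_cost else "cool"
```

Under these placeholder rates, a terabyte of rarely-read archival data lands in the cool tier, while a smaller dataset read many times per month stays hot.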

For details on regional pricing, see the Azure Storage pricing page.
Source: Azure

Azure Backup supports encrypted Azure virtual machines using Portal and PowerShell

Azure Backup already supports backup and restore of Classic and Resource Manager virtual machines, as well as premium storage VMs. Today, we are announcing support for backup and restore of encrypted Azure virtual machines through the portal as well as PowerShell, available for VMs encrypted using Azure Disk Encryption.

The Azure Disk Encryption solution helps protect customer data to meet security and compliance commitments through a range of advanced technologies to encrypt data, control and manage encryption keys, and audit data access. Additionally, security requirements such as key rollover and re-encryption of VMs make it more complex to maintain the keys and secrets of these VMs. Azure Backup supports backup of encrypted VMs seamlessly across all of these scenarios and maintains the security, privacy, and sovereignty of enterprise data throughout the backup lifecycle.

Value Proposition

This feature provides:

Enhanced security: Because the keys and secrets of encrypted VMs are backed up in encrypted form, unauthorized users cannot read or use them. Only users with the right level of permissions can back up and restore encrypted VMs, as well as their keys and secrets.
Improved restores: Besides the encrypted VMs themselves, the latest keys and secrets associated with each VM are backed up. So even if the VM is restored years later and the keys have been lost, the backed-up versions can be used to retrieve the VM. Learn more about how to restore keys and secrets using Azure Backup.
Simplified experience: With this capability, you can seamlessly back up and restore your encrypted VMs through a familiar and consistent experience.

Features

With this release, Azure Backup provides:

Backup of encrypted VMs using a Key Encryption Key: The current capability supports backup of VMs encrypted using both a BitLocker Encryption Key (BEK) and a Key Encryption Key (KEK). The backed-up BEK and KEK are stored in encrypted form, so they can be read and used only when restored back to the key vault by the right user.
Restore of lost keys and secrets: Since the KEK and BEK are backed up as well, users with the right set of permissions can restore lost keys and secrets back to the key vault and bring up the encrypted VM.
PowerShell: Customers can leverage Azure PowerShell to automate and perform backup and restore operations at scale.

Getting Started

To get started with backup of encrypted Azure VMs:

Create a Recovery Services vault if one doesn't exist, then open it.
Click +Backup to start backing up encrypted VMs. Refer to the backup and restore of encrypted VMs documentation for more details.

To restore encrypted Azure VMs:

To restore encrypted VMs, follow the steps in the restore virtual machines in the Azure portal documentation.
To restore the keys and secrets of encrypted VMs, follow the steps in how to restore keys and secrets using Azure Backup.

Related Links and Additional Content

Learn more about how to backup and restore encrypted Azure VMs
Getting started with Recovery Services vault
Learn more about Azure Backup
Want more details? Check out Azure Backup Documentation
Need help? Reach out to Azure Backup forum for support
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones
Follow us on Twitter @AzureBackup for the latest news and updates

Source: Azure

Azure Relay – cross-platform, open-protocol connections

The Azure Relay service was among the initial core set of services available on Azure, and it remains a fundamental hybrid cloud integration tool for many solutions running on Azure today. The Relay allows secure and seamless communication bridging between cloud and on-premises assets, or even between different sites using the cloud as a matchmaker. The Relay operates at the application level, allowing traversal of network address translation (NAT) boundaries and firewalls, and endpoint discovery, without requiring any intrusive changes to the networking environment.

The first great news we have for you today is that the Relay service, which is developed and maintained by the same team that brings you Event Hubs and Service Bus Messaging, is now (finally!) available for management and monitoring in the modern Azure portal, and as a standalone service, Relay.

But we have even greater news.

Until today, the Azure Relay’s capabilities have required using a particular runtime and platform: the full .NET Framework running on Windows. Building relayed services required using the WCF communication framework. Over the years, we’ve done a lot of (often invisible) work to make these capabilities robust on the client and in the service, and we will continue to offer them under the WCF Relay feature name. If you’re using the Relay features today with WCF, nothing changes.

If you are building apps on platforms other than Windows and/or using your own choice of languages, runtimes, or frameworks, everything changes.

Today, we’re announcing and making available the public preview of the next-generation, cross-platform, open-protocol Azure Relay capability, named Hybrid Connections.

Hybrid Connections

The Hybrid Connections evolution of the Relay is based entirely on HTTPS and WebSockets, allowing you to securely connect resources and services residing behind a firewall in your on-premises setup with services in the cloud or other assets anywhere.

Because it is based on the WebSocket protocol, Hybrid Connections provides a secure, bi-directional, binary communications channel unencumbered by the particularities of specific frameworks, which allows easy integration with many existing and modern RPC frameworks such as Apache Thrift, Apache Avro, and Microsoft Bond. It is also a great foundation for stream communication bridges that relay database, remote desktop, or shell connections, to name just a few examples.

Hybrid Connections leverages the robust security model of the Relay service, and provides the proven load balancing and failover capabilities that Relay solutions rely on today.

The initial set of samples and the client code is available for C# and JavaScript (Node) today on GitHub and we will provide more language bindings as we work towards general commercial availability. The C# samples already demonstrate integration with Thrift and Avro, and we’re inviting contributions and further RPC framework bindings with wide open arms. The full protocol documentation to make that possible is now available on Azure Documentation Center.
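Whatever language binding you use, opening the WebSocket to the Relay involves presenting a Shared Access Signature (SAS) token, the token scheme used across Service Bus. A minimal sketch of generating one (the namespace URI and key below are placeholders):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def make_sas_token(resource_uri: str, key_name: str, key: str,
                   ttl_seconds: int = 3600) -> str:
    """Build a Service Bus shared access signature token.

    A Relay listener or sender presents this token when attaching, proving
    it holds one of the namespace's shared access keys.
    """
    expiry = str(int(time.time() + ttl_seconds))
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # The signature covers the encoded resource URI and the expiry time.
    string_to_sign = (encoded_uri + "\n" + expiry).encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
    )
    return "SharedAccessSignature sr={}&sig={}&se={}&skn={}".format(
        encoded_uri, urllib.parse.quote_plus(signature), expiry, key_name
    )


token = make_sas_token("sb://contoso.servicebus.windows.net/hyco",
                       "RootManageSharedAccessKey", "placeholder-key")
```

The resulting token is sent with the connection request; the service recomputes the HMAC with its copy of the named key to validate it.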

Hybrid Connections and BizTalk Services

It is also important to note that this Hybrid Connections feature is newer than, and separate from, the Hybrid Connections feature brought to you by BizTalk Services. That feature will continue to run as it does today, with no change in billing or in how you use it.
Source: Azure

Azure SQL Database: Now supporting up to 10 years of Backup Retention (Public Preview)

Does your application have compliance requirements to retain data for a long period of time? Or do you need to extend the built-in backup retention for "oops" recovery past 35 days? Now, with just a few clicks, you can easily enable long-term retention for your databases. Azure SQL Database now supports backups stored in your own Azure Backup Service Vault, letting you extend the built-in retention period from 35 days to up to 10 years.

Supporting your data retention requirements is now much simpler. Azure SQL Database automatically creates a full backup of each of your databases every week. Once you add an LTR (long-term retention) policy to a database using the Azure portal or API, these weekly backups are automatically copied to your own Azure Backup Service Vault. If your databases are encrypted with TDE, that's no problem: the backups are automatically encrypted at rest. The vault automatically deletes expired backups based on their timestamps, so there's no need to manage the backup schedule or worry about cleaning up old files. The following diagram shows how to add an LTR policy in the portal.

Get started with the Azure SQL Database long-term backup retention preview by simply selecting Azure Backup Service Vault for your SQL server in the Azure Portal and creating a retention policy for your database. The database backups will show up in the vault within seven days.
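The pruning the vault performs reduces to simple date arithmetic: weekly backups older than the policy's retention window are deleted, everything newer is kept. A sketch (dates are illustrative):

```python
from datetime import date, timedelta


def retained_backups(weekly_backups, retention_weeks, today):
    """Return the weekly backup dates still inside the retention window.

    Mirrors the cleanup described above: any backup whose timestamp falls
    before today minus the configured retention period is deleted.
    """
    cutoff = today - timedelta(weeks=retention_weeks)
    return [d for d in weekly_backups if d >= cutoff]


# Ten weekly backups ending today; a 4-week policy keeps only the newest five.
today = date(2016, 11, 15)
backups = [today - timedelta(weeks=i) for i in range(10)]
```

A 10-year policy would simply use `retention_weeks=520`, keeping every weekly backup taken in the last decade.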

Learn more about Azure SQL Database backups
Source: Azure