Azure Site Recovery now supports large disk sizes in Azure

Following the recent general availability of large disk sizes in Azure, we are excited to announce that Azure Site Recovery (ASR) now supports the disaster recovery and migration of on-premises virtual machines and physical servers with disk sizes of up to 4095 GB to Azure.

Many on-premises virtual machines that are part of the database tier, as well as file servers, use disks larger than 1 TB. Support for protecting virtual machines with large disks has consistently been a top ask from both our customers and partners. With this enhancement, ASR now gives you the ability to recover or migrate these workloads to Azure.

These large disk sizes are available on both standard and premium storage. In standard storage, two new disk sizes, S40 (2 TB) and S50 (4 TB), are available for managed and unmanaged disks. For workloads that consistently require high IOPS and throughput, two new disk sizes, P40 (2 TB) and P50 (4 TB), are available in premium storage, again for both managed and unmanaged disks. Depending on your application requirements, you can choose to replicate your virtual machines to standard or premium storage with ASR. More details on the configuration, region availability, and pricing of large disks are available in this storage documentation.
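For example, provisioning a standard managed disk at the new 4 TB (S50) size with the Azure CLI looks roughly like this (a sketch; the resource group and disk names are placeholders):

az disk create --resource-group myResourceGroup --name myLargeDataDisk --size-gb 4095 --sku Standard_LRS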

To show how Azure Site Recovery supports large disk sizes, I protected the database tier VM of a SharePoint farm; this VM has data disks larger than 1 TB.

Prerequisite step for existing ASR users:

Before you start protecting virtual machines or physical servers with disks larger than 1 TB, you need to install the latest update on your existing on-premises ASR infrastructure. This step is mandatory for existing ASR users.

For VMware environments/physical servers, install the latest update on the Configuration server, additional process servers, additional master target servers and agents.

For Hyper-V environments managed by System Center VMM, install the latest Microsoft Azure Site Recovery Provider update on the on-premises VMM server.

For Hyper-V environments not managed by System Center VMM, install the latest Microsoft Azure Site Recovery Provider on each Hyper-V server node registered with Azure Site Recovery.

I would like to call out that support for disaster recovery of IaaS virtual machines in Azure with large disk sizes is not currently available. This support will be made available soon.

Start using Azure Site Recovery today. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers. You can also use the ASR User Voice to let us know what features you want us to enable next.
Source: Azure

Enterprise Cloud Strategy, 2nd Edition

A roadmap to becoming a cloud-centric company

I’m delighted to announce the second edition of our free e-book, Enterprise Cloud Strategy, written by Barry Briggs and me. Get your copy of the e-book.

Much has changed in the two years since we published the first edition. Cloud computing has evolved from a technology that delivers efficiencies and cost savings, to a technology that also transforms the scope of IT and business operations with new opportunities.

The questions around the cloud have gone from “if” to “when” and “how.”

Learn how to shape an efficient enterprise cloud transformation

In the second edition, you’ll find best practices and guidance on how to get started, and which applications to consider first in your cloud migration. After the technical exercise of migrating applications, the journey starts for the rest of the organization. The cloud can and should begin transforming your business with greater scale, integration, and richer capabilities.

The book is based on real-world experiences from enterprise IT and seeks to answer a new question, “How can I use cloud computing to become a true partner to the business?” You’ll come away with an understanding of the three stages of cloud migration – experimentation, migration, and transformation – and how to plan and build strategies that involve all departments of the business.

With your organization’s data in the cloud, how do you integrate your migrated applications to take maximum advantage of new cloud services like big data analytics, machine learning, and Internet of Things? What new skills and new roles are needed? How do you appropriately involve various business units in the decision-making process?

“The move to the cloud has opened many opportunities. With it comes the need for best practices and guidance of how to adopt cloud platforms with enterprise-grade rigor and governance. This book fills this much-needed gap in a clear, concise, and practical way. It is an easy read, too.”

– Gavriella Schuster, Corporate Vice President, Channels & Programs, One Commercial Partner, Microsoft

Enterprise Cloud Strategy, 2nd Edition is organized for cloud experts and novices alike, with chapters dedicated to understanding the different types of cloud, application models, and cloud journeys, all the way through planning and implementing a cloud transformation.

About the authors

Barry Briggs, an independent consultant, has a long history in software and enterprise computing. He served in several roles during his twelve-year career at Microsoft, most recently as chief enterprise architect on the Microsoft DX (Developer Experience) team. Previously, Barry served as chief architect and CTO for Microsoft’s IT organization, where he created and led Microsoft IT’s cloud strategy team.

Eduardo Kassner is the Chief Technology and Innovation Officer for the Worldwide Channels & Programs Group at Microsoft Corporation. His team is responsible for defining the strategy and developing the programs that drive technical capacity, practice development, and profitability for the hundreds of thousands of Microsoft partners worldwide. He recently co-wrote the first edition of Enterprise Cloud Strategy, published by Microsoft Press, which has been downloaded more than 250,000 times from the Azure.com website.

Download the e-book today.
Source: Azure

Microsoft announces Project Olympus support for new Intel Xeon Scalable Processors

In March at the Open Compute Project (OCP) annual summit, we announced that Project Olympus, our next generation hyperscale cloud hardware design, attracted the latest in silicon innovation to address the exploding growth of cloud services. Project Olympus is based on a new hardware development model for community-based open collaboration that we developed with OCP. Today, Microsoft is proud to announce support for the newest generation of Intel Xeon Scalable Processors within the Project Olympus ecosystem.

Intel has been a premier platform partner for Project Olympus and the Intel Xeon Scalable Processor will be a cornerstone for this new platform. Microsoft has also worked closely with Intel to engineer Arria-10 FPGAs, which are deployed on every single Project Olympus server, to create a “Configurable Cloud” that can be flexibly provisioned and optimized to support a diverse set of applications and functions.

We designed Project Olympus with the ability to accommodate a variety of workloads, from email to databases, online productivity, HPC, and even AI. Some of these workloads have extremely demanding requirements for compute, storage, and networking, and need a base platform that can scale with the demands of current and emerging workloads. Intel Xeon Scalable Processors enable such platform capabilities by providing the ability to scale resources as needed. Whether it’s high core counts and memory bandwidth for extreme multithreaded performance, IO scaling capabilities, or the new Intel AVX-512 instructions for HPC and AI workloads, Intel Xeon Scalable Processors and Intel FPGAs provide a significant degree of flexibility and performance that allows us to meet the emerging demands of the cloud.

Project Olympus is Microsoft’s blueprint for future hardware development and collaboration.  We look forward to the continued collaboration with Intel in designing and building the highest performing, most flexible, and secure clouds possible.
Source: Azure

Securing the AtSea App with Docker Secrets

Passing application configuration information as environment variables was once considered a best practice in 12-factor applications. However, this practice can expose information in logs, makes it difficult to track how and when information is exposed, and allows third-party applications to access that information. Instead of environment variables, Docker implements secrets to manage configuration and confidential information.
Secrets are a way to keep information such as passwords and credentials secure in Docker CE or EE with swarm mode. Docker manages secrets and securely transmits them to only those nodes in the swarm that need access. Secrets are encrypted in transit and at rest in a Docker swarm. A secret is only accessible to those services that have been granted explicit access to it, and only while those service tasks are running.
The AtSea Shop is an example storefront application that can be deployed on different operating systems and can be customized to both your enterprise development and operational environments. The previous post showed how to use multi-stage builds to create small and efficient images. In this post, I’ll demonstrate how secrets are implemented in the application.
Creating Secrets
Secrets can be created using the command line or with a Compose file. The AtSea application uses nginx as a reverse proxy secured with HTTPS. To accomplish this, I created a self-signed x509 certificate.
mkdir certs
openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt
I then created secrets using the domain key and certificate for nginx.
docker secret create revprox_cert certs/domain.crt
docker secret create revprox_key certs/domain.key
I also used secrets to hold the PostgreSQL database password and a token for the payment gateway by making files that contained the password and token. For example, the postgres_password file contains the password ‘gordonpass’. In the compose file, I added the secrets:
secrets:
  postgres_password:
    file: ./devsecrets/postgres_password
  payment_token:
    file: ./devsecrets/payment_token
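The secret files themselves are just plain files created ahead of time. A minimal sketch of producing them (the gordonpass value comes from the example above, the payment token value is a placeholder, and printf avoids a trailing newline):

mkdir -p devsecrets
printf 'gordonpass' > devsecrets/postgres_password
printf 'example-payment-token' > devsecrets/payment_token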
I then set the database password secret,
database:
  build:
    context: ./database
  image: atsea_db
  environment:
    POSTGRES_USER: gordonuser
    POSTGRES_DB_PASSWORD_FILE: /run/secrets/postgres_password
    POSTGRES_DB: atsea
  ports:
    - "5432:5432"
  networks:
    - back-tier
  secrets:
    - postgres_password
and made the postgres_password secret available to the application server:
appserver:
  build:
    context: .
    dockerfile: app/Dockerfile
  image: atsea_app
  ports:
    - "8080:8080"
    - "5005:5005"
  networks:
    - front-tier
    - back-tier
  secrets:
    - postgres_password
As you can see, you can set secrets at the command line and declaratively in a Compose file.
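Beyond docker secret create, the full command-line lifecycle looks roughly like the sketch below. The service name atsea_appserver is illustrative, and because secrets are immutable, rotating one means creating a new secret and swapping it on the service:

# create a secret from stdin and inspect its metadata (the value itself is never shown)
printf 'gordonpass' | docker secret create postgres_password -
docker secret ls
docker secret inspect postgres_password
# rotate: create a new version, point the service at it, then remove the old one
printf 'newpass' | docker secret create postgres_password_v2 -
docker service update --secret-rm postgres_password --secret-add source=postgres_password_v2,target=postgres_password atsea_appserver
docker secret rm postgres_password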
Docker Enterprise Edition (formerly known as Docker Datacenter) fully incorporates secrets management through the creation, update, and removal of secrets. In addition, Docker EE supports authorization, rotation, and auditing of secrets. Creating a secret in Docker Enterprise Edition is accomplished by clicking on the Resources tab and then the Secrets menu item.

Create the secret by entering the name and the value, then clicking Create. In this example, I’m using the secret for the PostgreSQL password in the AtSea application.

Using Secrets
In order to use the secret containing the certificate for nginx, I configured the nginx.conf file in the nginx container to point at the secret.
server {
    listen 443;
    ssl on;
    ssl_certificate /run/secrets/revprox_cert;
    ssl_certificate_key /run/secrets/revprox_key;
    server_name atseashop.com;
    access_log /dev/stdout;
    error_log /dev/stderr;

    location / {
        proxy_pass http://appserver:8080;
    }
}

The AtSea application uses the postgres_password secret to connect to the database. This is done by reading the secret from the container’s filesystem and setting it on Spring Boot’s DataSourceProperties class in the JpaConfiguration.java file.
// Set password to connect to postgres using Docker secrets.
try (BufferedReader br = new BufferedReader(new FileReader("/run/secrets/postgres_password"))) {
    StringBuilder sb = new StringBuilder();
    String line = br.readLine();

    while (line != null) {
        sb.append(line);
        sb.append(System.lineSeparator());
        line = br.readLine();
    }
    dataSourceProperties.setDataPassword(sb.toString());
} catch (IOException e) {
    System.err.println("Could not successfully load DB password file");
}
return dataSourceProperties;
}
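To tie everything together, a deployment sketch (assuming swarm mode is already initialized, the Compose file uses a secrets-capable version such as 3.1, and the stack is named atsea; substitute a real container ID in the exec step):

docker stack deploy -c docker-compose.yml atsea
docker service ls
docker ps --filter name=atsea_appserver --format '{{.ID}}'
docker exec <container-id> ls /run/secrets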

Learn more about Docker secrets:

Documentation
Command line
Docker Enterprise Edition Secrets
Play with Docker Secrets hands-on lab
Docker Captain Alex Ellis’ Docker Secrets in Action
Why you shouldn’t use ENV variables for secret data

Source: https://blog.docker.com/feed/

Xeon Skylake-SP: What Intel’s 28-core CPUs can do with AVX-512

Thanks to their many cores and memory channels, the Xeon Scalable Processors, also known as Skylake-SP, are very fast. The server CPUs use several new interconnects and, for the first time, AVX-512 instructions. The comparison with AMD’s Epyc was amusing, as that chip supposedly consists of nothing more than desktop chips glued together. By Marc Sauter (Skylake, Processor)
Source: Golem

Azure Site Recovery support for Storage Spaces and Windows Server 2016

We recently announced the public preview of disaster recovery for Azure IaaS machines, which allows you to replicate applications between Azure regions as well as create networks, storage accounts, and availability sets. This capability reduces the complexity typically involved in setting up disaster recovery and helps you stay compliant by having a business continuity plan in place to keep applications available during a disaster.

Today we are announcing Azure Site Recovery support for Windows Server 2016 and Storage Spaces when replicating between Azure regions.

Windows Server 2016 has seen tremendous adoption, both on private clouds and on Azure, in the few months since it became generally available. Azure Site Recovery for Azure virtual machines now supports workloads running on the Windows Server 2016 Datacenter and Windows Server 2016 Datacenter Server Core editions.

Storage Spaces is a technology in Windows Server that virtualizes storage by grouping disks into storage pools for performance, flexibility, and storage scaling. Storage Spaces is a commonly used configuration on Azure virtual machines, both to improve input/output performance by striping disks and to create logical disks larger than 4 TB. For example, this is a very common configuration in SQL workloads, where the need for higher performance and capacity is clear. Popular Azure gallery templates like SQL Server Always On deploy machines using Storage Spaces, and to meet this need, the latest release of Azure Site Recovery adds support for Storage Spaces, so you can have better availability and compliance for your workloads.
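As a rough illustration (not from the original post), the data disks that feed such a pool can be attached with the Azure CLI; the resource names below are placeholders, and the striped pool itself is then created inside the guest with Windows Storage Spaces (for example, with the New-StoragePool and New-VirtualDisk PowerShell cmdlets):

az vm disk attach --resource-group myResourceGroup --vm-name mySqlVm --name datadisk1 --new --size-gb 1023
az vm disk attach --resource-group myResourceGroup --vm-name mySqlVm --name datadisk2 --new --size-gb 1023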

Check out our product information to start replicating your IaaS workloads between Azure regions today.

Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers or use the ASR User Voice to let us know what features you want.
Source: Azure