Build and run your first Docker Windows Server container

Today, Microsoft announced the general availability of Windows Server 2016, and with it, Docker Engine running containers natively on Windows. This blog post describes how to get set up to run Docker Windows containers on Windows 10 or using a Windows Server 2016 VM. Check out the companion blog posts on the technical improvements that have made Docker containers on Windows possible and the post announcing the Docker Inc. and Microsoft partnership.
Before getting started, it’s important to understand that Windows containers run Windows executables compiled for the Windows Server kernel and userland (either windowsservercore or nanoserver). To build and run Windows containers, you need a Windows system with container support.
Windows 10 with Anniversary Update
For developers, Windows 10 is a great place to run Docker Windows containers: containerization support was added to the Windows 10 kernel with the Anniversary Update (note that container images can only be based on Windows Server Core and Nano Server, not Windows 10). All that’s missing is the Windows-native Docker Engine and some base image layers.
The simplest way to get a Windows Docker Engine is by installing the Docker for Windows public beta (direct download link). Docker for Windows used to only set up a Linux-based Docker development environment (slightly confusing, we know), but the public beta version now sets up both Linux and Windows Docker development environments, and we’re working on improving Windows container support and Linux/Windows container interoperability.
With the public beta installed, the Docker for Windows tray icon has an option to switch between Linux and Windows container development. For details on this new feature, check out Stefan Scherer’s blog post.
Switch to Windows containers and skip the next section.

Windows Server 2016
Windows Server 2016 is where Docker Windows containers should be deployed for production. For developers planning to do lots of Docker Windows container development, it may be worth setting up a Windows Server 2016 dev system (in a VM, for example), at least until Windows 10 and Docker for Windows support for Windows containers matures.
For Microsoft Ignite 2016 conference attendees, USB flash drives with Windows Server 2016 preloaded are available at the expo. Not at Ignite? Download a free evaluation version and install it on bare metal or in a VM running on Hyper-V, VirtualBox or similar. Running a VM with Windows Server 2016 is also a great way to do Docker Windows container development on macOS and older Windows versions.
Once Windows Server 2016 is running, log in and install the Windows-native Docker Engine directly (that is, not using “Docker for Windows”). Run the following in an administrative PowerShell prompt:
# Add the containers feature and restart
Install-WindowsFeature containers
Restart-Computer -Force

# Download, install and configure Docker Engine
Invoke-WebRequest "https://download.docker.com/components/engine/windows-server/cs-1.12/docker.zip" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing

Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles

# For quick use, does not require shell to be restarted.
$env:path += ";c:\program files\docker"

# For persistent use, will apply even after a reboot.
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

# You have to start a new PowerShell prompt at this point
dockerd --register-service
Start-Service docker
Docker Engine is now running as a Windows service, listening on the default Docker named pipe. For development VMs running (for example) in a Hyper-V VM on Windows 10, it might be advantageous to make the Docker Engine running in the Windows Server 2016 VM available to the Windows 10 host:
# Open firewall port 2375
netsh advfirewall firewall add rule name="docker engine" dir=in action=allow protocol=TCP localport=2375

# Configure Docker daemon to listen on both pipe and TCP (replaces the dockerd --register-service invocation above)
dockerd.exe -H npipe:////./pipe/docker_engine -H 0.0.0.0:2375 --register-service
The Windows Server 2016 Docker Engine can now be used from the VM host by setting DOCKER_HOST:
$env:DOCKER_HOST = "<ip-address-of-vm>:2375"
See the Microsoft documentation for more comprehensive instructions.
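If the host is macOS or Linux (the post notes that a Windows Server 2016 VM is a good way to develop from macOS), the same remote-engine setup can be sketched in a POSIX shell. This is only a sketch; the IP address below is a documentation-only placeholder, not a value from this post:

```shell
# Point the local Docker client at the engine inside the Windows Server 2016 VM.
# 203.0.113.10 is a placeholder address; substitute your VM's actual IP.
export DOCKER_HOST="tcp://203.0.113.10:2375"

# Every docker command in this shell session now targets the VM's engine, e.g.:
#   docker version
#   docker info
echo "Docker client configured for: $DOCKER_HOST"
```

Unset DOCKER_HOST again to go back to talking to a local engine.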
Running Windows containers
First, make sure the Docker installation is working:
> docker version
Client:
Version:      1.12.1
API version:  1.24
Go version:   go1.6.3
Git commit:   23cf638
Built:        Thu Aug 18 17:32:24 2016
OS/Arch:      windows/amd64
Experimental: true

Server:
Version:      1.12.2-cs2-ws-beta-rc1
API version:  1.25
Go version:   go1.7.1
Git commit:   62d9ff9
Built:        Fri Sep 23 20:50:29 2016
OS/Arch:      windows/amd64
Next, pull a base image that’s compatible with the evaluation build, re-tag it, and do a test run:
docker pull microsoft/windowsservercore:10.0.14393.206
docker tag microsoft/windowsservercore:10.0.14393.206 microsoft/windowsservercore
docker run microsoft/windowsservercore hostname
69c7de26ea48
Building and pushing Windows container images
Pushing images to Docker Cloud requires a free Docker ID. Storing images on Docker Cloud is a great way to save build artifacts for later use, to share base images with co-workers or to create build pipelines that move apps from development to production with Docker.
Docker images are typically built with docker build from a Dockerfile recipe, but for this example, we’re going to just create an image on the fly in PowerShell.
"FROM microsoft/windowsservercore `n CMD echo Hello World!" | docker build -t <docker-id>/windows-test-image -
Test the image:
docker run <docker-id>/windows-test-image
Hello World!
Log in with docker login and then push the image:
docker push <docker-id>/windows-test-image
Images stored on Docker Cloud are available in the web interface, and public images can be pulled by other Docker users.
Using docker-compose on Windows
Docker Compose is a great way to develop complex multi-container applications consisting of databases, queues and web frontends. Compose support for Windows is still a little patchy and at the time of writing only works on Windows Server 2016 (i.e. not on Windows 10).
To try out Compose on Windows, you can clone a variant of the ASP.NET Core MVC MusicStore app, backed by a SQL Server Express 2016 database. If running this sample on Windows Server 2016 directly, first grab a Compose executable and make sure it is in your path. A correctly tagged microsoft/windowsservercore image is required before starting. Also note that building the SQL Server image will take a while.
git clone https://github.com/friism/Musicstore

cd Musicstore
docker build -t sqlserver:2016 -f .\docker\mssql-server-2016-express\Dockerfile .\docker\mssql-server-2016-express\.

docker-compose -f .\src\MusicStore\docker-compose.yml up

Start a browser and open http://<ip-of-vm-running-docker>:5000/ to see the running app.
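For orientation, a two-service Compose file for an app like this might look roughly as follows. This is an illustrative sketch only; the service names, image tags and environment settings are guesses, not the contents of the repository's actual docker-compose.yml:

```yaml
# Illustrative sketch only -- consult the cloned repository for the real file.
version: '2'
services:
  db:
    image: sqlserver:2016            # the image built in the previous step
    environment:
      sa_password: "YourPassword1!"  # hypothetical setting
  web:
    image: musicstore                # hypothetical tag for the ASP.NET Core app
    ports:
      - "5000:5000"
    depends_on:
      - db
```

The depends_on entry only controls start order; the web app still needs its own retry logic while SQL Server finishes initializing.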
Summary
This post described how to get set up to build and run native Docker Windows containers on both Windows 10 and the recently published Windows Server 2016 evaluation release. To see more example Windows Dockerfiles, check out the Golang, MongoDB and Python Docker Library images.
Please share any Windows Dockerfiles or Docker Compose examples you build with @docker on Twitter using the tag windows. And don’t hesitate to reach out on the Docker Forums if you have questions.
More Resources:

Sign up to be notified of GA and the Docker Datacenter for Windows Beta
Register for a webinar: Docker for Windows Server
Learn more about the Docker and Microsoft partnership

The post Build and run your first Docker Windows Server container appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Evaluating Cloud SQL Second Generation for your mobile game

Posted by Joseph Holley, Gaming Solutions Architect, Google Cloud Platform

Many of today’s most successful games are played in small sessions on the devices in our pockets. Players expect to open the game app from any of their supported devices and find themselves right where they left off. In addition, players may be very sensitive to delays caused by waiting for the game to save their progress during play. For mobile game developers, all of this adds up to the need for a persistent data store that can be accessed with consistently low latency.

Game developers with database experience are usually most comfortable with relational databases as their backend game state storage. MySQL, with its ACID-compliant transactions and well-understood semantics offers a known pattern. However, “game developer” and “database administrator” are different titles for a reason; game developers may not relish standing up and administering a database when they could be building new game content and features. That’s why Google Cloud Platform offers high-performance, fully-managed MySQL instances in the form of Google Cloud SQL Second Generation to help handle your mobile game’s persistent storage.

Many game developers ask for guidance about how much player load (concurrent users in a game) Cloud SQL can handle. In order to provide a starting point for these discussions, we recently published a new solutions document that details a simple mock game stress-testing framework built on Google Cloud Platform and Cloud SQL Second Generation. For a data model, we looked to the data schema and access patterns of popular massively single-player social games such as Puzzle and Dragons™ or Monster Strike™ for our testing framework. We also made the source code for the framework available so you can have a look at whether the simulated gameplay patterns and the data model are similar to your game’s. The results should provide a starting point for deciding if Cloud SQL Second Generation’s performance is the right fit for your next game project’s concurrent user estimates.

For more information about Cloud SQL Second Generation, have a look at the documentation. If you’d like to see more solutions, check out the gaming solutions page.

Quelle: Google Cloud Platform

Availability of H-series VMs in Microsoft Azure

We are excited to announce the availability of the new H-series virtual machines in Azure. With the availability of these new VM sizes we continue our mission to deliver great performance for HPC applications in Azure. H-series VM sizes are an excellent fit for any compute-intensive workload. They are designed to deliver cutting-edge performance for complex engineering and scientific workloads like computational fluid dynamics, crash simulations, seismic exploration, and weather forecasting simulations.

The new H-series sizes are initially available in the South Central US Azure region and will be rolled out across other regions in the near future. 

The H-series VMs will be available in six different sizes, all based on Intel E5-2667 v3 3.2 GHz (with turbo up to 3.5 GHz) processor technology, utilizing DDR4 memory and SSD-based local storage. The new H-series VMs furthermore feature a dedicated RDMA backend network enabled by an FDR InfiniBand network, capable of delivering ultra-low latency. The RDMA network is dedicated to MPI (Message Passing Interface) traffic when running tightly coupled applications.

*m: High Memory, r: RDMA network

We see a large number of enterprise customers embracing Microsoft Azure for their enterprise HPC workloads. Enterprise customers burst their HPC jobs to Azure for additional compute power, helping to solve complex design of experiments (DOE), optimizations, and other critical projects. One of our premier partners, Altair Engineering Inc., with its suite of enterprise computing products, is an excellent and current example of integration between customers’ on-premises environments and H-series VMs in Azure.

“We are excited about the introduction of the new non-hyper-threaded compute- and network-optimized H-series VMs in Azure. We worked closely with the Microsoft team to test our solutions for performance and scaling on the H-series VMs. Based on the testing, we are confident not only to deliver high performance to our customers but also to provide deep integration with the Azure environment to enable HPC cloud environments.” – Sam Mahalingam, CTO, Altair Engineering Inc.

H-series VMs can deliver great performance running a variety of applications, helping businesses around the world reduce their product development cycles and bring products to market faster.

"We are pleased to see the launch of the new Azure high performance H series VMs with InfiniBand and Linux RDMA technology. This performance accelerates the pace of product design cycles using simulation and helps engineers discover better designs, faster." – Keith Foston, Product Manager, CD-adapco, a Siemens Business

H-series virtual machines provide on demand compute capacity for our customers that want to solve complex automotive crash simulation problems.  Through partners like d3View we can bring large scale computing capabilities to our customers when needed.

“We see a great need for the best-in-class compute power and capacity. With the introduction of Azure’s new H-series with the E5-2667 processor and RDMA InfiniBand network, running large-scale simulations across hundreds of cores will offer reduced turnaround time for simulation engineers and scientists. Multi-physics simulation software like LS-DYNA is designed to scale to thousands of processors, and with the H-series, we look forward to helping our customers evaluate designs quickly.” – Suri Bala, CEO, d3View

The low-latency RDMA network enabled by FDR InfiniBand in the H-series VMs, particularly using the H16r and H16mr sizes, make up an ideal combination for delivering the necessary scale and performance for very large CFD (Computational Fluid Dynamics) simulations.

“The RDMA technology in the new H-series VMs is critical when running large scale-out jobs on the cloud. With clock frequencies flattening out the last few years, RDMA technology enables jobs to scale out to a large number of nodes. Our testing of CFD codes with the TotalCAE Portal enabled us to achieve reduced runtimes at large core counts that would not be possible without RDMA technology in Azure.” – Rod Mach, CEO, TotalCAE

Non-RDMA-enabled H-series sizes can be deployed with various Linux distributions and Windows Server OS images available in the Azure Marketplace.  RDMA-enabled H-series VM sizes can be deployed using Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, and CentOS-based 7.1 HPC and SUSE Linux Enterprise Server 12 SP1 HPC images.  For more information and a quick guide on how to get started, see the Linux and Windows documentation.

With this milestone in our HPC journey in the cloud, we’d like to re-emphasize our excitement in bringing world-class High Performance Computing infrastructure capabilities through the Cloud to every engineer and scientist in the world. 
Quelle: Azure

ISO 22301 highlights Azure's unmatched business continuity & disaster recovery preparedness

Many of you have asked us about how we plan and prepare in Azure so that you can learn from our best practices as well as have the peace of mind that your applications, and data, are safe and available in Azure. Today we are pleased to announce that Microsoft Azure has achieved ISO 22301 certification. Microsoft is the first hyper-scale cloud provider to achieve this important certification that ensures your Azure applications are backed by the highest standard for business continuity and disaster preparedness.

For years, we’ve heard from organizations about the importance of disaster preparedness and continuous improvement in their operations to ensure their IT systems can survive, and be restored, in the aftermath of major incidents (such as natural disasters, power outages, or cyber-attacks). As of today, we are the only major cloud provider to prove our commitment of being fully prepared for all eventualities through this internationally recognized standard for business continuity, ISO 22301. Our ISO 22301 certification is applicable across both our Azure public and Azure Government clouds. 

What does Azure achieving ISO 22301 provide? It gives you the assurance that you can trust Microsoft Azure with your mission-critical applications, backed by an extensive independent third-party audit of all aspects of Azure’s business continuity. This includes the following:

how backups are validated
how recovery is tested
the competency/training of critical staff
the level of resources available
buy-in by senior management
how risks are assessed/mitigated
adherence to legal/regulatory requirements
the process for response to incidents
the process for learning from incidents

Being prepared for whatever happens is not easy, but it’s something that we in Azure take seriously. We test at all levels of our infrastructure to ensure that every day we are working to improve the cloud experience for you. We do tests as small as fault injections at the individual service layer, all the way up to entire region fail-over tests. We’ve been doing these tests for years and this work has helped us continuously improve the Azure infrastructure and services you use every day. ISO 22301 reviews and validates that we are selecting the right tests of our cloud services, that we’ve created programs to continuously run those tests, and that we implement improvements based on those test results.

Achieving the ISO 22301 certification demonstrates the seriousness of our commitment to providing you the highest quality of service, and our achievement of this rigorous third party attestation is part of our promise to provide you the most robust infrastructure possible for deploying your applications in the cloud. To learn more about Microsoft Azure’s ISO 22301 certification and download a copy of the certification, please visit https://aka.ms/iso22301cert.
Quelle: Azure

Square Says Its New Credit Card Chip Reader Is Faster Than What You're Used To

Square, the mobile payments processing company led by Twitter CEO Jack Dorsey, is introducing a chip card reader it says will process payments in 4.2 seconds, compared to the industry average of eight to 13 seconds. That’s big news for a couple of reasons.


Chip cards started rolling out in the US about a year ago, and, as we know by now, they’re glacially slow.


They are, however, more secure, according to Square’s head of hardware product development, Jesse Dorogusker. That’s why they exist in the first place. Or, rather, why the government mandated that US stores have chip-card-accepting technology by October 2015, which still hasn’t really happened yet. As much as we want the speed of the old magnetic stripe, our money is safer with the chips, for the most part.

Here’s what a faster credit card chip reader could mean for your life:

1. Maybe you won't lose your card

Chip cards are so slow that people have been leaving them at cash registers in droves. Maybe a faster reader will keep people’s attention.

2. A path to the future

Square’s new readers can also process payment methods like Apple Pay and Android Pay — which means you don’t even have to touch your card to the reader. According to Square’s Dorogusker, these are “far superior technology” because these methods typically require a second piece of authentication like a fingerprint.

The US desperately needs more secure credit cards. We have 25 percent of the world’s credit cards but 50 percent of all credit card fraud.

Dorogusker blames the risk of fraud squarely on the magnetic stripe technology. “They’re like cassette tapes,” he told BuzzFeed News.


3. Shorter lines

During a lunch rush or the holidays at a retail store, every second counts. You might be quick to abandon a line if it’s not moving. Retail workers of the world, rejoice: much less awkward small talk with frustrated customers.


Don’t expect to see Square’s readers at H&M or Whole Foods any time soon, though. Sellers who make over $500,000 per year only account for 14 percent of the company’s business, according to its Q2 2016 investor letter. However, the company hopes to bring the transaction speed down to three seconds in the near future — maybe that’ll attract more big clients.

Quelle: BuzzFeed