Amazon EMR integration with AWS Lake Formation is now available in beta, supporting database-, table-, and column-level access controls for Apache Spark

Amazon EMR now supports enforcing fine-grained access control policies based on AWS Lake Formation for Apache Spark. You can enforce policies at the database, table, and column level for data stored in Amazon S3. Policies defined in AWS Lake Formation are enforced when Spark applications are submitted through Apache Zeppelin or EMR Notebooks. This release also includes SAML-based single sign-on (SSO) access to EMR Notebooks and Apache Zeppelin, which simplifies authentication for organizations that use Active Directory Federation Services (ADFS), Okta, or Auth0. By combining SAML-based SSO with AWS Lake Formation policies, customers can securely run Spark applications on shared multi-tenant clusters with column-level access to data stored in Amazon S3.
Source: aws.amazon.com

Your single source for Azure best practices

Optimizing your Azure workloads can feel like a time-consuming task. With so many services that are constantly evolving, it's challenging to stay on top of, let alone implement, the latest best practices and ensure you're operating in a cost-efficient manner that delivers security, performance, and reliability.

Many Azure services offer best practices and advice. Examples include Azure Security Center, Azure Cost Management, and Azure SQL Database. But what if you want a single source for Azure best practices, a central location where you can see and act on every optimization recommendation available to you? That’s why we created Microsoft Azure Advisor, a service that helps you optimize your resources for high availability, security, performance, and cost, pulling in recommendations from across Azure and supplementing them with best practices of its own.

In this blog, we’ll explore how you can use Advisor as your single destination for resource optimization and start getting more out of Azure.

What is Azure Advisor and how does it work?

Advisor is your personalized guide to Azure best practices. It analyzes your usage and configurations and offers recommendations to help you optimize your Azure resources for high availability, security, performance, and cost. Each of Advisor’s recommendations includes suggested actions and sharing features to help you quickly and easily remediate your recommendations and optimize your deployments. You can also configure Advisor to only show recommendations for the subscriptions and resource groups that mean the most to you, so you can focus on critical fixes. Advisor is available from the Azure portal, command line, and via REST API, depending on your needs and preferences.

Ultimately, Advisor’s goal is to save you time while helping you get the most out of Azure. That’s why we’re making Advisor a single, central location for optimization that pulls in best practices from companion services like Azure Security Center.

How Azure Security Center integrates with Advisor

Our most recent integration with Advisor is Azure Security Center. Security Center helps you gain unmatched hybrid security management and threat protection. Microsoft uses a wide variety of physical, infrastructure, and operational controls to help secure Azure—but there are additional actions you need to take to help safeguard your workloads. Security Center can help you quickly strengthen your security posture and protect against threats.

Advisor has a new, streamlined experience for reviewing and remediating your security recommendations thanks to a tighter integration with Azure Security Center. As part of the enhanced integration, you’ll be able to:

See a detailed view of your security recommendations from Security Center directly in Advisor.
Get your security recommendations programmatically through the Advisor REST API, CLI, or PowerShell (see the sketch after this list).
Review a summary of your security alerts from Security Center in Advisor.
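As a rough illustration of the programmatic route, the sketch below lists recommendations by calling the Advisor endpoint in Azure Resource Manager with a plain HttpClient. The subscription ID and bearer token are placeholders, and the api-version shown is an assumption; check the current Advisor REST reference for the version to use.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class AdvisorRecommendationsDemo
{
    static async Task Main()
    {
        // Placeholders: your subscription ID and an AAD bearer token for https://management.azure.com.
        var subscriptionId = "<subscription-id>";
        var accessToken = "<bearer-token>";

        // Advisor surfaces recommendations through Azure Resource Manager; the api-version
        // below is an assumption - check the current Advisor REST reference before relying on it.
        var url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
                  "/providers/Microsoft.Advisor/recommendations?api-version=2017-04-19";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        string json = await client.GetStringAsync(url);
        Console.WriteLine(json); // raw JSON list of recommendations, including the Security category
    }
}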

The new Security Center experience in Advisor will help you more quickly and easily remediate security recommendations.

How Azure Cost Management integrates with Advisor

Another Azure service that provides best practice recommendations is Azure Cost Management, which helps you optimize cloud costs while maximizing your cloud potential. With Cost Management, you can monitor your spending, increase your organizational accountability, and boost your cloud efficiency.

Advisor and Cost Management are also tightly integrated. Cost Management’s integration with Advisor means that you can see any cost recommendation in either service and act to optimize your cloud costs by taking advantage of reservations, rightsizing, or removing idle resources.

Again, this will help you streamline your optimizations.

Azure SQL DB Advisor, Azure App Service Advisor, and more

There’s no shortage of advice in Azure. Many other services, including Azure SQL Database and Azure App Service, offer Advisor-like tools designed to help you follow best practices for those services and succeed in the cloud.

Advisor pulls in and displays recommendations from these services, so the choice is yours. You can review the optimizations in context—in a given instance of an Azure SQL database, for example—or in a single, centralized location in Advisor.

We often recommend the Advisor approach. This way, you can see all your optimizations in a broader, more holistic context and remediate with the big picture in mind, without worrying that you’re missing anything. Plus, it’ll save you time switching between different resources.

Review your recommendations in one place with Advisor

Our recommendation? Use Advisor as your core resource optimization tool. You’ll find everything in a single location rather than having to visit different, more specialized locations. With the Advisor API, you can even integrate with your organization’s internal systems—like a ticketing application or dashboard—to get everything in one place on your end and plug into your own optimization workflows.

Visit Advisor in the Azure portal to get started reviewing, sharing, and remediating your recommendations. For more in-depth guidance, visit the Azure Advisor documentation. Let us know if you have a suggestion for Advisor by submitting an idea here in the Advisor forums.
Source: Azure

Amazon Rekognition improves facial analysis

Amazon Rekognition provides comprehensive face detection, analysis, and recognition capabilities for image and video analysis. Today we are introducing improvements to the accuracy and functionality of our facial analysis features. Facial analysis generates metadata about detected faces, such as gender, age range, emotions, attributes such as "Smile", face pose, face image quality, and facial landmarks. With this release, we are also improving the accuracy of gender identification. In addition, we have improved the accuracy of emotion detection (for all 7 emotions: "Happy", "Sad", "Angry", "Surprised", "Disgusted", "Calm", and "Confused") and added a new emotion: "Fear". We have also improved the accuracy of age range estimation, and you will now get narrower age ranges for most age groups.
Source: aws.amazon.com

Amazon Managed Blockchain now supports AWS CloudFormation

Amazon Managed Blockchain now supports AWS CloudFormation for creating and configuring networks, members, and peer nodes. Managed Blockchain is a fully managed service that makes it easy to create and manage scalable blockchain networks across multiple AWS accounts. CloudFormation lets you model and provision cloud resources as code in a secure, predictable, and consistent way. With CloudFormation support for Managed Blockchain, you can create new blockchain networks, define network configurations, create a member account, and join an existing network. In addition, you can describe details about the member and the network (such as voting policies). You can also create and configure peer nodes for your members in the network.
Source: aws.amazon.com

New for developers: Azure Cosmos DB .NET SDK v3 now available

The Azure Cosmos DB team is announcing the general availability of version 3 of the Azure Cosmos DB .NET SDK, released in July. Thank you to all who gave feedback during our preview.

In this post, we’ll walk through the latest improvements that we’ve made to enhance the developer experience in .NET SDK v3.

You can get the latest version of the SDK through NuGet and contribute on GitHub.

//Using .NET CLI
dotnet add package Microsoft.Azure.Cosmos

//Using NuGet
Install-Package Microsoft.Azure.Cosmos

What is Azure Cosmos DB?

Azure Cosmos DB is a globally distributed, multi-model database service that enables you to read and write data from any Azure region. It offers turnkey global distribution and guarantees single-digit millisecond latencies at the 99th percentile, 99.999 percent high availability, and elastic scaling of throughput and storage.

What is new in Azure Cosmos DB .NET SDK version 3?

Version 3 of the SDK contains numerous usability and performance improvements, including a new intuitive programming model, support for stream APIs, built-in support for change feed processor APIs, the ability to scale non-partitioned containers, and more. The SDK targets .NET Standard 2.0 and is open sourced on GitHub.

For new workloads, we recommend starting with the latest version 3.x SDK for the best experience. We have no immediate plans to retire version 2.x of the .NET SDK.

Targets .NET Standard 2.0

We’ve unified the existing Azure Cosmos DB .NET Framework and .NET Core SDKs into a single SDK, which targets .NET Standard 2.0. You can now use the .NET SDK in any platform that implements .NET Standard 2.0, including your .NET Framework 4.6.1+ and .NET Core 2.0+ applications.

Open source on GitHub

The Azure Cosmos DB .NET v3 SDK is open source, and our team is planning to do development in the open. To that end, we welcome any pull requests and will be logging issues and tracking feedback on GitHub.

New programming model with fluent API surface

Since the preview, we’ve continued to improve the object model for a more intuitive developer experience. We’ve created a new top level CosmosClient class to replace DocumentClient and split its methods into modular database and container classes. From our usability studies, we’ve seen that this hierarchy makes it easier for developers to learn and discover the API surface.

We’ve also added in fluent builder APIs, which make it easier to create CosmosClient, Container, and ChangeFeedProcessor classes with custom options.
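As an illustration of the new model, here is a minimal sketch using the fluent builder from the Microsoft.Azure.Cosmos.Fluent namespace; the connection string, database, and container names are placeholders:

using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Fluent;

// Placeholder connection string and names; adjust for your account.
CosmosClient client = new CosmosClientBuilder("<your-connection-string>")
    .WithApplicationName("CaptainsApp")
    .Build();

// The new hierarchy: CosmosClient -> Database -> Container.
Database database = await client.CreateDatabaseIfNotExistsAsync("CaptainsDb");
Container container = await database.CreateContainerIfNotExistsAsync("Captains", "/name", 400);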

View all samples on GitHub.

Stream APIs for high performance

The previous versions of the Azure Cosmos DB .NET SDKs always serialized and deserialized the data to and from the network. In the context of an ASP.NET Web API, this can lead to performance overhead. Now, with the new stream API, when you read an item or query, you can get the stream and pass it to the response without deserialization overhead, using the new GetItemQueryStreamIterator and ReadItemStreamAsync methods. To learn more, refer to the GitHub sample.
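Continuing with the container from the sketch above, a minimal illustration of the stream methods could look like this (the item id and partition key value are placeholders):

using System.IO;
using Microsoft.Azure.Cosmos;

// Read a single item as a raw stream - no deserialization in the SDK.
using ResponseMessage readResponse =
    await container.ReadItemStreamAsync("1", new PartitionKey("Julie"));
Stream payload = readResponse.Content; // can be written straight to the HTTP response

// Query items as streams, one page at a time.
FeedIterator iterator = container.GetItemQueryStreamIterator(
    new QueryDefinition("SELECT * FROM c"));
while (iterator.HasMoreResults)
{
    using ResponseMessage page = await iterator.ReadNextAsync();
    // page.Content holds the raw JSON for this page of results.
}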

Easier to test and more extensible

In .NET SDK version 3, all APIs are mockable, making for easier unit testing.

We also introduced an extensible request pipeline, so you can pass in custom handlers that will run when sending requests to the service. For example, you can use these handlers to log request information in Azure Application Insights, define custom retry polices, and more. You can also now pass in a custom serializer, another commonly requested developer feature.
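As a rough sketch of a custom handler (the class name and connection string are placeholders), you subclass RequestHandler and register it through the fluent builder:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Fluent;

public class LoggingHandler : RequestHandler
{
    public override async Task<ResponseMessage> SendAsync(
        RequestMessage request, CancellationToken cancellationToken)
    {
        ResponseMessage response = await base.SendAsync(request, cancellationToken);
        // A real handler might log request.RequestUri and response.StatusCode
        // to Application Insights here.
        return response;
    }
}

public static class ClientFactory
{
    // Placeholder connection string; AddCustomHandlers wires the handler into the request pipeline.
    public static CosmosClient Create() =>
        new CosmosClientBuilder("<your-connection-string>")
            .AddCustomHandlers(new LoggingHandler())
            .Build();
}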

Use the Change Feed Processor APIs directly from the SDK

One of the most popular features of Azure Cosmos DB is the change feed, which is commonly used in event-sourcing architectures, stream processing, data movement scenarios, and to build materialized views. The change feed enables you to listen to changes on a container and get an incremental feed of its records as they are created or updated.

The new SDK has built-in support for the Change Feed Processor APIs, which means you can use the same SDK for building your application and change feed processor implementation. Previously, you had to use the separate change feed processor library.
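Here is a minimal sketch of wiring up the processor with the v3 SDK; the connection string, database, container, and POCO names below are placeholders, and the lease container is assumed to already exist:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Placeholder connection string, database, and container names.
CosmosClient client = new CosmosClient("<your-connection-string>");
Container monitored = client.GetContainer("CaptainsDb", "Captains");
Container leases = client.GetContainer("CaptainsDb", "leases"); // lease container must already exist

ChangeFeedProcessor processor = monitored
    .GetChangeFeedProcessorBuilder<Captain>("captainsProcessor",
        (IReadOnlyCollection<Captain> changes, CancellationToken token) =>
        {
            foreach (Captain captain in changes)
            {
                Console.WriteLine($"Changed: {captain.Name}");
            }
            return Task.CompletedTask;
        })
    .WithInstanceName("consoleHost")   // unique per host when scaling out
    .WithLeaseContainer(leases)
    .Build();

await processor.StartAsync();
// ... keep the host alive, then:
await processor.StopAsync();

public class Captain
{
    public string id { get; set; }
    public string Name { get; set; }
}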

To get started, refer to the documentation "Change feed processor in Azure Cosmos DB."

Ability to scale non-partitioned containers

We’ve heard from many customers who have non-partitioned or “fixed” containers that they wanted to scale them beyond their 10GB storage and 10,000 RU/s provisioned throughput limit. With version 3 of the SDK, you can now do so, without having to create a new container and move your data.

All non-partitioned containers now have a system partition key “_partitionKey” that you can set to a value when writing new items. Once you begin using the _partitionKey value, Azure Cosmos DB will scale your container as its storage volume increases beyond 10GB. If you want to keep your container as is, you can use the PartitionKey.None value to read and write existing data without a partition key.
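As a small, hedged sketch (reusing the container and Captain type from the earlier examples, with a placeholder item id), reading a pre-existing item uses PartitionKey.None:

using Microsoft.Azure.Cosmos;

// Items written before the container had a partition key are read with PartitionKey.None.
ItemResponse<Captain> existing =
    await container.ReadItemAsync<Captain>("1", PartitionKey.None);

// New items can instead carry a value for the system "_partitionKey" property,
// which is what lets the container grow past the old 10 GB limit.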

Easier APIs for scaling throughput

We’ve redesigned the APIs for scaling provisioned throughput (RU/s) up and down. You can now use the ReadThroughputAsync method to get the current throughput and ReplaceThroughputAsync to change it. View sample.
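A minimal sketch of the new methods on a container (the 1,000 RU/s target is an arbitrary example):

using Microsoft.Azure.Cosmos;

// Returns null when throughput is not provisioned directly on this container.
int? currentThroughput = await container.ReadThroughputAsync();

// Scale the container to 1,000 RU/s.
ThroughputResponse updated = await container.ReplaceThroughputAsync(1000);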

Get started

To get started with the new Azure Cosmos DB .NET SDK version 3, add our new NuGet package to your project, then follow the new tutorial and quickstart. We’d love to hear your feedback! You can log issues on our GitHub repository.

Stay up-to-date on the latest Azure #CosmosDB news and features by following us on Twitter @AzureCosmosDB. We can't wait to see what you will build with Azure Cosmos DB and the new .NET SDK!
Source: Azure

APN Partners learn how to better support their customers in new AWS training courses

We have updated three courses designed specifically to help APN Partners identify areas where their customers can reduce costs and improve data center economics. Partners learn how to use market data, customer engagement models, and customer use cases to identify opportunities to move customer workloads to AWS. All courses include updated APN Partner tools and resources that partners can use to grow their business with AWS.
Source: aws.amazon.com

Announcing the preview of Azure Actions for GitHub

On Thursday, August 8, 2019, GitHub announced the preview of GitHub Actions with support for Continuous Integration and Continuous Delivery (CI/CD). Actions makes it possible to create simple, yet powerful pipelines and automate software compilation and delivery. Today, we are announcing the preview of Azure Actions for GitHub.

With these new Actions, developers can quickly build, test, and deploy code from GitHub repositories to the cloud with Azure.

You can find our first set of Actions grouped into four repositories on GitHub, each one containing documentation and examples to help you use GitHub for CI/CD and deploy your apps to Azure.

azure/actions (login): Authenticate with an Azure subscription.
azure/appservice-actions: Deploy apps to Azure App Services using the features Web Apps and Web Apps for Containers.
azure/container-actions: Connect to container registries, including Docker Hub and Azure Container Registry, as well as build and push container images.
azure/k8s-actions: Connect and deploy to a Kubernetes cluster, including Azure Kubernetes Service (AKS).

Connect to Azure

The login action (azure/actions) allows you to securely connect to an Azure subscription.

The process requires using a service principal, which can be generated using the Azure CLI, as per instructions. Use the GitHub Actions’ built-in secret store for safely storing the output of this command.

If your workflow involves containers, you can also use the azure/container-actions/docker-login and azure/k8s-actions/aks-set-context Actions for connecting to Azure services like Container Registry and AKS respectively.

These Actions help set the context for the rest of the workflow. For example, once you have used azure/container-actions/docker-login, the next set of Actions in the workflow can perform tasks such as building, tagging, and pushing container images to Container Registry.

Deploy a web app

Azure App Service is a managed platform for deploying and scaling web applications. You can easily deploy your web app to Azure App Service with the azure/appservice-actions/webapp and azure/appservice-actions/webapp-container Actions.

The azure/appservice-actions/webapp action takes the app name and the path to an archive (*.zip, *.war, *.jar) or folder to deploy.

The azure/appservice-actions/webapp-container action supports deploying containerized apps, including multi-container ones. When combined with azure/container-actions/docker-login, you can create a complete workflow that builds a container image, pushes it to Container Registry, and then deploys it to Web Apps for Containers.

Deploy to Kubernetes

azure/k8s-actions/k8s-deploy helps you connect to a Kubernetes cluster, bake and deploy manifests, substitute artifacts, check rollout status, and handle secrets within AKS.

The azure/k8s-actions/k8s-create-secret action takes care of creating Kubernetes secret objects, which help you manage sensitive information such as passwords and API tokens. These notably include the Docker-registry secret, which is used by AKS itself to pull a private image from a registry. This action makes it possible to populate the Kubernetes cluster with values from the GitHub Actions’ built-in secret store.

Our container-centric Actions, including those for Kubernetes and for interacting with a Docker registry, aren’t specific to Azure, and can be used with any Kubernetes cluster, including self-hosted ones, running on-premises or on other clouds, as well as any Docker registry.

Full example

Here is an example of an end-to-end workflow which builds a container image, pushes it to Container Registry and then deploys to an AKS cluster by using manifest files.

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master

    - uses: azure/container-actions/docker-login@master
      with:
        login-server: contoso.azurecr.io
        username: ${{ secrets.REGISTRY_USERNAME }}
        password: ${{ secrets.REGISTRY_PASSWORD }}

    - run: |
        docker build . -t contoso.azurecr.io/k8sdemo:${{ github.sha }}
        docker push contoso.azurecr.io/k8sdemo:${{ github.sha }}

    # Set the target AKS cluster.
    - uses: azure/k8s-actions/aks-set-context@master
      with:
        creds: '${{ secrets.AZURE_CREDENTIALS }}'
        cluster-name: contoso
        resource-group: contoso-rg

    - uses: azure/k8s-actions/k8s-create-secret@master
      with:
        container-registry-url: contoso.azurecr.io
        container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
        container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
        secret-name: demo-k8s-secret

    - uses: azure/k8s-actions/k8s-deploy@master
      with:
        manifests: |
          manifests/deployment.yml
          manifests/service.yml
        images: |
          contoso.azurecr.io/k8sdemo:${{ github.sha }}
        imagepullsecrets: |
          demo-k8s-secret

More Azure Actions

Building on the momentum of GitHub Actions, today we are releasing this first set of Azure Actions in preview. In the next few months we will continue improving upon our available Actions, and we will release new ones to cover more Azure services.

Please try out the GitHub Actions for Azure and share your feedback via Twitter on @AzureDevOps, or using Developer Community. If you encounter a problem during the preview, please open an issue on the GitHub repository for the specific action.
Source: Azure

Deploying Dockerized .NET Apps Without Being a DevOps Guru

This is a guest post by Julie Lerman. She is a Docker Captain, published author, Microsoft Regional Director and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a coach and consultant to software teams around the world. You can follow Julie on her blog at thedatafarm.com/blog, or on Twitter at @julielerman.

.NET Developers who use Visual Studio have access to a great extension to help them create Docker images for their apps. The Visual Studio Tools for Docker simplify the task of developing and debugging apps destined for Docker images. But what happens when you are ready to move from debugging in Visual Studio to deploying your image to a container in the cloud?

This blog post will demonstrate first using the tooling to publish a simple ASP.NET Core API in an image to the Docker hub, and then creating a Linux virtual machine in Azure to host the API. It will also engage Docker Compose and Microsoft SQL Server for Linux in a Docker container, along with a Docker Volume for persistence. The goal is to create a simple test environment and a low-stress path to getting your first experience with publishing an app in Docker.

Using the Docker Tools to aid in building and debugging the API is the focus of a series of articles that were published in the April, May and June 2019 issues of MSDN Magazine. So I’ll provide only a high level look at the solution.

Overview of the Sample App

The API allows me to track the names of Docker Captains. It’s not a real-world solution, but enough to give me something to work with. You can download the solution from github.com/julielerman/dockercaptains. I’ll provide a few highlights here.

public class Captain
{
    public int CaptainId { get; set; }
    public string Name { get; set; }
}

The API leverages Entity Framework Core (EF Core) for its data persistence. This requires a class that inherits from the EF Core DbContext. My class, CaptainContext, specifies a DbSet to work from and defines a bit of seed data for the database.
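The actual class lives in the GitHub repo; a minimal sketch of a CaptainContext along those lines (with hypothetical seed values) could look like this:

using Microsoft.EntityFrameworkCore;

public class CaptainContext : DbContext
{
    public CaptainContext(DbContextOptions<CaptainContext> options)
        : base(options) { }

    public DbSet<Captain> Captains { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Hypothetical seed values; the repo defines its own.
        modelBuilder.Entity<Captain>().HasData(
            new Captain { CaptainId = 1, Name = "Julie Lerman" },
            new Captain { CaptainId = 2, Name = "Bret Fisher" });
    }
}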

Enabling a Dynamic Connection String

The startup.cs file uses ASP.NET Core’s dependency injection to configure a SQL Server provider for the CaptainContext. There is also code to read a connection string from configuration and replace its password placeholder with the value of an environment variable set in the Docker container, which keeps the real password less visible to prying eyes.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
    var conn = Configuration["ConnectionStrings:CaptainDB"];
    conn = conn.Replace("ENVPW", Configuration["DB_PW"]);
    services.AddDbContext<CaptainContext>(options => options.UseSqlServer(conn));
}

The VS Tools generated a Dockerfile and I only made one change to the default — adding the CaptainDB connection string ENV variable with its ENVPW placeholder:

ENV ConnectionStrings:CaptainDB "Server=db;Database=CaptainDB;User=sa;Password=ENVPW;"

ASP.NET Core can discover Docker environment variables when running in a Docker container.

Orchestrating with a docker-compose file

Finally comes the docker-compose.yml file. This sets up a service for the API image, another for the database server image and a volume for persisting the data.

version: '3.4'

services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      - DB_PW
    depends_on:
      - db
    ports:
      - 80:80
  db:
    image: mcr.microsoft.com/mssql/server
    volumes:
      - mssql-server-julie-data:/var/opt/mssql/data
    environment:
      SA_PASSWORD: "${DB_PW}"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
volumes:
  mssql-server-julie-data: {}

Notice that I’m declaring the DB_PW environment variable in the API’s service definition and referencing it in the db’s service definition.

There’s also an .env file in the solution where the value of DB_PW is hidden.

DB_PW=P@ssword1

Docker will read that file by default. I go into more detail about the .env file on my blog. 

I got this solution set up and running from within Visual Studio on my development machine. And I love that even when the debugger publishes the app to a local container, I can still debug while it’s running in that container. That’s a super-power of the tools extension.

Using the Tools to Publish to Docker Hub

Once I was happy with my progress, I wanted to get this demo running in the cloud.  Although I can easily use the CLI to push and pull, I love that the Docker Tools in VS can handle this part. The Dockerfile created by the tool has instructions for a multi-stage build. When you target Visual Studio to a release build, the tools will build the release image described in the Dockerfile. Publishing will rebuild that release image and publish it to your destination registry.

You can see my full solution in the screenshot below. My API project is called DataAPIDocker. Notice there is also a docker-compose project. This was created by the Docker Tools.  But it is the DataAPIDocker project that will be published first into an image and then to a repository.

Right-clicking the DataAPIDocker project and choosing Publish will present a Publish page where you can choose to create a New Profile. A publish profile lets you define where to publish your app and also predefine any needed credentials. Creating a profile begins with selecting from a list of targets; for publishing a Docker image, select Container Registry. That option then gives you predefined registries to choose from, such as Azure Container Registry, Docker Hub, or a custom registry – which could be an instance of Docker Trusted Registry.

I’ll choose Docker Hub and click Publish. 

The last step is to provide your Docker Hub repository name. If you don’t already have docker.config set up with your credentials, then you also need to supply your password. 

After creating a profile, it gets stored in the Visual Studio project.

You’ll be returned to the Publish overview page with this profile selected, where you can edit the default “latest” tag name. Click the Publish button to trigger the Docker Tools to do their job. 

A window will open up showing the progress of the docker push command run by the tools.

After the push is complete, you can open Docker Hub to see your new repository, which by default is public.

Setting up an Azure Linux VM to Host the Containers

Now that the image is hosted in the cloud, you can turn your sights to hosting a container instance for running the app. Since my Visual Studio Subscription includes credits on Azure, I’ll use those.  I will create a Linux Virtual Machine on Azure with Docker and Docker Compose, then run an instance of my new image along with a SQL Server and a data volume.

I found two interesting paths for doing this at the command line. One was by using the Azure CLI at the command line in Windows, macOS or Linux. It is so much easier than doing it through the Azure Portal.

I found this doc to be really helpful as I was doing this for the first time. The article walks you through installing the Azure CLI, logging into Azure, creating a Linux VM with Docker already installed then installing Docker Compose. Keep in mind that this will create a default machine using “Standard DS1 v2 (1 vcpus, 3.5 GB memory)” setup. That VM size has an estimated cost of about $54 (USD) per month. 

Alternatively, you can use Docker Machine, a Docker tool for installing Docker on virtual hosts and managing the hosts. This path is a little more automated but it does require that you use bash and that you start by using the Azure CLI to log into your Azure account using the command az login.

Once that’s done, you can use parameters of docker-machine to tell it you’re creating this in Azure and to specify your subscription, SSH username, open port, and size of the machine to create. The size uses standard Azure VM size names.

I found it interesting to use the Azure CLI workflow which was educational and then consider the docker-machine workflow as a shortcut version.

Since I was still working on my Windows machine, and don’t have the Windows Subsystem for Linux installed there, I opened up Visual Studio Code and switched my terminal shell to use bash. That let me use docker-machine without issue. I also have the Azure Login extension in VS Code, so I was already logged in to Azure.

I first had to get the subscription ID of my Azure account, which I did using the CLI. Then I plugged the ID into the docker-machine command:

docker-machine create -d azure \
  --azure-subscription-id [this is where I pasted my subscription id] \
  --azure-ssh-user azureuser \
  --azure-open-port 80 \
  --azure-size "Standard_DS1_v2" \
  mylinuxvm

There are more settings you can apply, such as defining the resource group and location. The output from this command will pause, providing details on how to authorize docker-machine by entering a provided code into a browser window. Once that’s done, the command will continue its work and the output will forge ahead.

When it’s finished, you’ll see the message “Docker is up and running!” (on the new VM), followed by a very important message about configuring your shell to talk to Docker on the VM by running:

"C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env mylinuxvm

Recall that I’m doing these tasks on Windows, so docker-machine is ensuring that I know where to find the executable. After performing this task, I can see the machine up and running in the Azure Portal. This lets me inspect other default configuration choices made because I didn’t specify them in the docker-machine command.

By default, all of the needed ports are set up for access such as 80 for http and 22 for ssh.

Re-Creating Docker-Compose and .env on the VM

We only need two files on this machine: the docker-compose.yml and the .env file.

Docker-machine allows you to easily ssh into the VM in order for your command line commands to execute on that machine.

docker-machine ssh mylinuxvm

Then you can use a Linux editor such as nano to re-create the two files.

nano docker-compose.yml

You can then paste the contents of your docker-compose file in there. This is the docker-compose file in my solution for the sample app. However, there are two edits you’ll need to make.

The original file depends on a variable supplied by the VS Docker Tools for the registry location. Change the value of image to point to your Docker Hub image, for example image: julielerman/dataapidocker:formylinuxvm. You’ll also need to change the version of docker-compose specified at the top of the file to 2.0, since you’re moving from hosting on Windows to hosting on Linux.

In nano, you can save the docker-compose file with ^O. Then exit nano and run it again to create the .env file using the command:

nano .env

Paste the key-value pair environment variable from the app and save the .env file.

Running the Container

I still had to install docker-compose on the new machine. Docker is nice enough to feed you the command for that if you try to run docker-compose before installing it.
sudo apt install docker-compose

Then I was able to run my containers with: sudo docker-compose up

One important thing I learned: The VS Docker tooling doesn’t define port mapping for the API service in docker-compose. That’s hidden in a docker-compose.override.yml file used by the debugger. If you look at the docker-compose file listed earlier in this article, you’ll see that I added it myself. Without it, when you try to browse to the API, you will get a Connection refused error.

My ASP.NET Core API is now running, and I can browse to it at the public IP address specified for the VM. The HTTP GET of my Captains controller returns a list of the captains seeded in the database.

DevOps are for Devs, Too

As a developer who is often first in line to claim “I don’t do DevOps”, I was surprised at how simple it turned out to be to deploy the app I had created. So often I have allowed my development machine to be a gate that defined the limitations of my expertise. I can build the apps and watch them work on my development machine but I’ve usually left deployment to someone else.

While I have ventured into the Azure Portal frequently, the fact that the Docker Tools and the Azure CLI made it so simple to create the assets I needed for deploying the app made me wonder why I’d waited so long to try that out. And in reality, I didn’t have to deploy the app, just an image and then a docker-compose file. That the Docker Machine made it even easier to create those cloud assets was something of a revelation. 

Part of this workflow leveraged the Docker Tools for Visual Studio on Windows. But because I spend a lot of time in Visual Studio Code on my MacBook, I now have the confidence to explore using the Docker CLI for publishing the image to Docker Hub. After that I can just repeat the Docker Machine path to create the Azure VM where I can run my containers. 

If you want to learn more, these posts and articles are a great place to start:

5 Things to Try with Docker Desktop WSL 2 Tech Preview
Video Series: Modernizing .NET Apps for Developers
EF Core in a Docker Containerized App (3-part series in MSDN Magazine)
Julie’s blog posts on docker-compose and publishing Docker images to Azure

Get Started with Docker Desktop


Source: https://blog.docker.com/feed/