Deploying a Minecraft Docker Server to the cloud

One of the simplest and most popular Docker demos over the years has been quickly standing up and running a Minecraft server. It shows the power of Docker and has a pretty practical application!

Recently I wanted to set up a persistent server, and as I have given away my last Raspberry Pi, I needed to find a new way to do this. I decided to have a go at running it in Azure using the $200 of free credits you get in your first month.

The first thing I did was check out the existing Docker images for Minecraft servers to see if any looked good to use. To do this I went to Docker Hub and searched for "minecraft":

I liked the look of the itzg/minecraft-server repo, so I clicked through to have a look at the image and follow the link to the GitHub repo.

To start, I decided to test it out locally on my machine with the 'simple get started' docker run command:

$ docker run -d -p 25565:25565 --name mc -e EULA=TRUE itzg/minecraft-server

In the Docker Desktop Dashboard, I can see I have the container running and check the server logs to make sure everything has been initialized properly:

If I load up Minecraft, I can connect to my server using localhost and my open port:

From there, I can try to deploy this in Azure to just get my basic server running in the cloud. 

With the Docker ACI integration, I can log into Azure using: 

$ docker login azure

Once logged in, I can create a context that will let me deploy containers to an Azure resource group (this proposes to create a new Azure resource group or use an existing one):

$ docker context create aci acicontext
Using only available subscription : My subscription (xxx)
? Select a resource group [Use arrows to move, type to filter]
> create a new resource group
gtardif (westeurope)

I can then use this new context:

$ docker context use acicontext

I will now try to deploy my Minecraft server using the exact same command I ran locally:

$ docker run -d -p 25565:25565 --name mc -e EULA=TRUE itzg/minecraft-server
[+] Running 2/2
⠿ Group mc Created 4.6s
⠿ mc Done 36.4s
mc

Listing my Azure containers, I'll see the public IP that has been provisioned for my Minecraft server:

$ docker ps
CONTAINER ID IMAGE COMMAND STATUS PORTS
mc itzg/minecraft-server Running 51.105.116.56:25565->25565/tcp
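If you want to grab that public IP non-interactively (for a script, say), you can parse the `docker ps` output. A rough sketch using the sample listing above; against real infrastructure you would pipe `docker ps` itself, and the exact column layout may vary:

```shell
# Extract the public IP for the container named "mc" from a docker ps listing.
# The sample listing below mirrors the output shown above; with a live context
# you would replace it with the output of `docker ps`.
ps_output='CONTAINER ID IMAGE COMMAND STATUS PORTS
mc itzg/minecraft-server Running 51.105.116.56:25565->25565/tcp'

# The PORTS entry is the last whitespace-separated field on the row;
# everything before the first ":" is the public IP.
server_ip=$(printf '%s\n' "$ps_output" \
  | awk '$1 == "mc" { print $NF }' \
  | cut -d: -f1)

echo "$server_ip"
```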

However, if I follow the logs of the ACI container, the server seems to be stuck in the initialization, and I cannot connect to it from Minecraft. 

$ docker logs --follow mc

In the logs we see the Minecraft server reserves 1G of memory, which happens to be the default memory allocated to the entire container by ACI; let's increase the ACI limit a bit with the --memory option:

$ docker run -d --memory 1.5G -p 25565:25565 --name mc -e EULA=TRUE itzg/minecraft-server

The server logs from ACI now show that the server initialized properly. I can run docker ps again to get the public IP of my container, connect to it from Minecraft, and start playing!

This is great, but now I want to find a way to make sure my data persists and reduce the length of the command I need to use to run the server.

To do this I will use a Compose file to document the command I am using, and next I will add a volume to it that I can mount my data into.

version: "3.7"
services:
  minecraft:
    image: itzg/minecraft-server
    ports:
      - "25565:25565"
    environment:
      EULA: "TRUE"
    deploy:
      resources:
        limits:
          memory: 1.5G

Compared with our earlier command, the image name has moved into the image field, the -p port mapping into ports, and the EULA acceptance into the environment variables. We also ensure the server container has enough memory to start.

The command to start this locally is now much simpler:

$ docker-compose --project-name mc up

And to deploy to ACI, still using the ACI context I created previously: 

$ docker compose --project-name mc2 up
[+] Running 2/2
⠿ Group mc2 Created 6.7s
⠿ minecraft Done 51.7s

With Compose, the application can include multiple containers (here we only have the "minecraft" one). The containers are visible in the progress display (the "minecraft" line above), and listing the containers shows the application name and the container name mc2_minecraft:

$ docker ps
CONTAINER ID IMAGE COMMAND STATUS PORTS
mc itzg/minecraft-server Running 20.50.245.84:25565->25565/tcp
mc2_minecraft itzg/minecraft-server Running 40.74.20.143:25565->25565/tcp
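Because Compose prefixes container names with the project name, you can filter such a listing down to one application by matching that prefix. A rough sketch using the listing above:

```shell
# Filter a docker ps listing to the containers of one Compose project by
# matching the "<project>_" name prefix. Sample rows taken from the listing above.
ps_output='CONTAINER ID IMAGE COMMAND STATUS PORTS
mc itzg/minecraft-server Running 20.50.245.84:25565->25565/tcp
mc2_minecraft itzg/minecraft-server Running 40.74.20.143:25565->25565/tcp'

project=mc2
matches=$(printf '%s\n' "$ps_output" \
  | awk -v p="$project" 'index($1, p "_") == 1 { print $1 }')

echo "$matches"
```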

Next we will want to add a volume to hold our Minecraft data, and into which we can load other maps if we want. To do this I need to know which folder holds the Minecraft data in the Docker image; if I inspect our running container in the Docker Dashboard, I can see that it is the /data directory:

If I wanted to add this back in my command line I would need to extend my command with something like:

docker run -d -p 25565:25565 -v /path/on/host:/data --name mc -e EULA=TRUE itzg/minecraft-server

I can add this under the volumes in my Compose file: 

version: "3.7"
services:
  minecraft:
    image: itzg/minecraft-server
    ports:
      - "25565:25565"
    environment:
      EULA: "TRUE"
    deploy:
      resources:
        limits:
          memory: 1.5G
    volumes:
      - "~/tmp/minecraft_data:/data"

Now when I run docker compose up and inspect the container again, I can see that the /data folder in the container is mounted to my local folder as expected. Navigating to this local folder, I can see all the Minecraft data.

Now let’s create an Azure File Share and deploy our application to mount /data to the Azure shared persistent folder so we can do the same thing in ACI. 

First I need to create an Azure storage account. We can do this using the Azure "az" command line or through the Azure portal; I have decided to use the portal:

I need to specify a name for the storage account, select the resource group to attach to it, then I let the other options default for this example. 

Once the "minecraftdocker" storage account is created, I'll create a file share that will hold all the Minecraft files:

I just need to specify a name for this file share and a size quota; let's call it "minecraft-volume":

I'll need an access key to reference the share in my Compose file; I can get the storage account access key from the left-hand menu, under Settings > Access keys.

Then in my compose file, I’ll update the volume specification to point to this Azure File Share:

version: "3.7"
services:
  minecraft:
    image: itzg/minecraft-server
    ports:
      - "25565:25565"
    environment:
      EULA: "TRUE"
    deploy:
      resources:
        limits:
          memory: 1.5G
    volumes:
      - "minecraft:/data"

volumes:
  minecraft:
    driver: azure_file
    driver_opts:
      share_name: minecraft-volume
      storage_account_name: minecraftdocker
      storage_account_key: xxxxxxx

Note that the syntax for specifying ACI volumes in Compose files is likely to change in the future.

I can then redeploy my compose application to ACI, still with the same command line as before:

$ docker --context acicontext compose --project-name minecraft up
[+] Running 2/2
⠿ Group minecraft Created 5.3s
⠿ minecraft Done 56.4s

And I can check that it's using the Azure File Share by selecting the minecraft-volume share in the Azure portal:

I can connect again to my server from Minecraft, share the server address and enjoy the game with friends!

To get started running your own Minecraft server, you can download the latest Edge version of Docker Desktop. You can find the Minecraft image we used on Docker Hub, or start creating your own content from Docker's Official Images. Or, if you want to create content like this to share, create a Docker account and start sharing your ideas in your public repos.
The post Deploying a Minecraft Docker Server to the cloud appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/


How To Use the Official NGINX Docker Image

NGINX is one of the most popular web servers in the world. Not only is NGINX a fast and reliable static web server, it is also used by a ton of developers as a reverse-proxy that sits in front of their APIs. 

In this tutorial we will take a look at the NGINX Official Docker Image and how to use it. We'll start by running a static web server locally, then we'll build a custom image to house our web server and the files it needs to serve. We'll finish up by taking a look at creating a reverse proxy server for a simple REST API, and then at how to share this image with your team.

Prerequisites

To complete this tutorial, you will need the following:

- A free Docker account: you can sign up for a free Docker account and receive free unlimited public repositories
- Docker running locally: instructions to download and install Docker
- An IDE or text editor to use for editing files; I would recommend VSCode

NGINX Official Image

The Docker Official Images are a curated set of Docker repositories hosted on Docker Hub that have been scanned for vulnerabilities and are maintained by Docker employees and upstream maintainers.

Official Images are a great place for new Docker users to start. These images have clear documentation, promote best practices, and are designed for the most common use cases.

Let’s take a look at the NGINX official image. Open your favorite browser and log into Docker. If you do not have a Docker account yet, you can create one for free.

Once you have logged into Docker, enter “NGINX” into the top search bar and press enter. The official NGINX image should be the first image in the search results. You will see the “OFFICIAL IMAGE” label in the top right corner of the search entry.

Now click on the nginx result to view the image details.

On the image details screen, you are able to view the description of the image and its readme. You can also see all the tags that are available by clicking on the "Tags" tab.

Running a basic web server

Let’s run a basic web server using the official NGINX image. Run the following command to start the container.

$ docker run -it --rm -d -p 8080:80 --name web nginx

With the above command, you started running the container as a daemon (-d) and published port 8080 on the host network. You also named the container web using the --name option.

Open your favorite browser and navigate to http://localhost:8080. You should see the following NGINX welcome page.

This is great but the purpose of running a web server is to serve our own custom html files and not the default NGINX welcome page.

Let’s stop the container and take a look at serving our own HTML files.

$ docker stop web

Adding Custom HTML

By default, NGINX looks in the /usr/share/nginx/html directory inside of the container for files to serve. We need to get our html files into this directory. A fairly simple way to do this is to use a mounted volume. With mounted volumes, we are able to link a directory on our local machine and map that directory into our running container.

Let’s create a custom html page and then serve that using the nginx image.

Create a directory named site-content. In this directory add an index.html file and add the following html to it:

<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Docker Nginx</title>
  </head>
  <body>
    <h2>Hello from Nginx container</h2>
  </body>
</html>
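If you prefer to script these manual steps, the directory and file can be created from the shell in one go. A minimal sketch:

```shell
# Create the site-content directory and the index.html file non-interactively,
# matching the page shown above.
mkdir -p site-content
cat > site-content/index.html <<'EOF'
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Docker Nginx</title>
  </head>
  <body>
    <h2>Hello from Nginx container</h2>
  </body>
</html>
EOF
```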

Now run the following command, which is the same command as above, but with the -v flag added to create a bind mount volume. This will mount our local directory ~/site-content into the running container at /usr/share/nginx/html:

$ docker run -it --rm -d -p 8080:80 --name web -v ~/site-content:/usr/share/nginx/html nginx

Open your favorite browser and navigate to http://localhost:8080 and you should see the above html rendered in your browser window.

Build Custom NGINX Image

Bind mounts are a great option for running locally and sharing files into a running container. But what if we want to move this image around and have our html files moved with it?

There are a couple of options available but one of the most portable and simplest ways to do this is to copy our html files into the image by building a custom image.

To build a custom image, we’ll need to create a Dockerfile and add our commands to it.

In the same directory, create a file named Dockerfile and paste the below commands.

FROM nginx:latest
COPY ./index.html /usr/share/nginx/html/index.html

We start building our custom image by using a base image. On line 1, you can see we do this using the FROM command. This will pull the nginx:latest image to our local machine and then build our custom image on top of it.

Next, we COPY our index.html file into the /usr/share/nginx/html directory inside the container overwriting the default index.html file provided by nginx:latest image.

You’ll notice that we did not add an ENTRYPOINT or a CMD to our Dockerfile. We will use the underlying ENTRYPOINT and CMD provided by the base NGINX image.

To build our image, run the following command:

$ docker build -t webserver .

The build command will tell Docker to execute the commands located in our Dockerfile. You will see a similar output in your terminal as below:

Now we can run our image in a container but this time we do not have to create a bind mount to include our html.

$ docker run -it --rm -d -p 8080:80 --name web webserver

Open your browser and navigate to http://localhost:8080 to make sure our html page is being served correctly.

Setting up a reverse proxy server

A very common scenario for developers is to run their REST APIs behind a reverse proxy. There are many reasons why you would want to do this, but one of the main ones is to run your API server on a different network or IP than your front-end application. You can then secure this network and only allow traffic from the reverse proxy server.

For the sake of simplicity and space, I’ve created a simple frontend application in React.js and a simple backend API written in Node.js. Run the following command to pull the code from GitHub.

$ git clone https://github.com/pmckeetx/docker-nginx.git

Once you’ve cloned the repo, open the project in your favorite IDE. Take a look at Dockerfile in the frontend directory.

FROM node:12.18.2 as build

ARG REACT_APP_SERVICES_HOST=/services/m

WORKDIR /app

COPY ./package.json /app/package.json
COPY ./package-lock.json /app/package-lock.json

RUN yarn install
COPY . .
RUN yarn build

FROM nginx
COPY ./nginx/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/build /usr/share/nginx/html

The Dockerfile sets up a multi-stage build. We first build our React.js application and then we copy the nginx.conf file from our local machine into the image along with our static html and javascript files that were built in the first phase.

We configure the reverse proxy in the frontend/nginx/nginx.conf file. You can learn more about configuring Nginx in their documentation.

server {
    listen 80;
    server_name frontend;

    location / {
        # This would be the directory where your React app's static files are stored
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    location /services/m {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://backend:8080/services/m;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}

As you can see in the second location block, all traffic targeted at /services/m will be proxied through to http://backend:8080/services/m.

In the root of the project is a Docker Compose file that will start both our frontend and backend services. Let’s start up our application and test if the reverse proxy is working correctly.

$ docker-compose up
Creating network "docker-nginx_frontend" with the default driver
Creating network "docker-nginx_backend" with the default driver
Creating docker-nginx_frontend_1 … done
Creating docker-nginx_backend_1 … done
Attaching to docker-nginx_backend_1, docker-nginx_frontend_1
frontend_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
backend_1 | Listening on port 8080

You can see that our nginx web server has started and also our backend_1 service has started and is listening on port 8080.

Open your browser and navigate to http://localhost. You should see the following web page:

Open the developer tools window and click on the “network” tab. Now back in the browser, enter an entity name. This can be anything. I’m going to use “widgets”. Then click the “Submit” button.

Over in the developer tools window, click on the network request for widgets and see that the request was made to http://localhost and not to http://localhost:8080.

Open your terminal and notice that the request made from the browser was proxied to the backend_1 service and handled correctly.

Shipping Our Image

Now let’s share our images on Docker so others on our team can pull the images and run them locally. This is also a great way to share your application with others outside of your team such as testers and business owners. 

To push your images to Docker’s repository run the docker tag and then the docker push commands. You will first need to login with your Docker ID. If you do not have a free account, you can create one here.

$ docker login
$ docker tag nginx-frontend <dockerid>/nginx-frontend
$ docker push <dockerid>/nginx-frontend

Conclusion

In this article we walked through running the NGINX official image, adding our custom html files, building a custom image based on the official image, and configuring NGINX as a reverse proxy. We finished up by pushing our custom image to Docker Hub so we could share it with others on our team.

If you have any questions, please feel free to reach out on Twitter @pmckee and join us in our community slack.

Docker Hub Incident Review – 5 July 2020

Background

This is Docker's first time publishing an incident report publicly. While we have always done detailed post-mortems on incidents internally, as part of the changing culture at Docker we want to be more open externally as well. For example, this year we started publishing our roadmap publicly and asking our users for their input. You should expect to see us continue publishing reports for most significant incidents.

In publishing these reports, we hope others can learn from the issues we have faced and how we have dealt with them. We hope it builds trust in our services and our teams. We also think this one is pretty interesting due to the complex interaction between multiple services and stakeholders.

Incident Summary

Amazon Linux users in several regions encountered intermittent hanging downloads of Docker images from the Docker Hub registry between roughly July 5 19:00 UTC and July 6 06:30 UTC. The issue stemmed from an anti-botnet protection mechanism our CDN provider Cloudflare had deployed. Teams from Docker, Cloudflare, and AWS worked together to pinpoint the issue and the mechanism in question was disabled, leading to full service restoration.

What Happened

At about 01:45 UTC on Monday July 6th (Sunday evening Pacific time), Docker was contacted by AWS about image pulls from Docker Hub failing for a number of their services and users. Both the Docker Hub and Infrastructure teams immediately started digging into the problem. The initial troubleshooting step was of course to try doing image pulls from our local machines. These all worked, and combined with our monitoring and alerting showing no issues, this ruled out a service-wide issue with the registry.

Next, we checked pulls in our own infrastructure running in AWS. As expected by the lack of alarms in our own monitoring, this worked. This told us that the issue was more specific than “all AWS infrastructure” – it was either related to region or a mechanism in the failing services themselves.

Based on some early feedback from AWS engineers that the issue affected systems that used Amazon Linux (including higher-level services like Fargate), the Docker team started spinning up instances with Amazon Linux and another OS in multiple AWS regions. Results here showed two things: both operating systems in AWS region us-east-1 worked fine, and in some other regions, Amazon Linux failed to pull images successfully where the other OS worked fine.

The fact that us-east-1 worked for both operating systems told us the problem was related to our CDN, Cloudflare. This is because Docker Hub image data is stored in S3 buckets in us-east-1, so requests from that region are served directly from S3. Other regions, where we saw issues, were served via the CDN. Docker opened an incident with Cloudflare at 02:35 UTC.

Because we only observed the issue on Amazon Linux, engineers from all three companies began digging into the problem to figure out what the interaction between that OS and Cloudflare was. A number of avenues were examined: was Amazon Linux using custom docker/containerd packages? No. Did the issue still exist when replicating a pull using curl rather than Docker Engine? Yes. It was now pretty clear that the issue was some sort of low-level network implementation detail, and all teams started focusing on this.

At about 05:00 UTC, engineers from AWS examining networking differences between Amazon Linux and other operating systems discover that modifying a network packet attribute to match other systems makes the issue disappear. This info is shared with Cloudflare.

Cloudflare investigates given this new information and finds that some traffic to Docker Hub is being dropped due to an anti-botnet mitigation system. This system had recently had a new detection signature added that flagged packets with a certain attribute as potentially part of an attack.

The fact that packets from Amazon Linux matched this signature combined with the large scale of traffic to Docker Hub activated this mechanism in several regions. While Cloudflare had been monitoring this change for some time before enabling it, this interaction had not been uncovered before the mechanism was switched from monitoring to active. 

Cloudflare then disabled this mechanism for Docker Hub traffic and all parties confirmed full resolution at about 06:30 UTC.

Conclusion

So what did we learn?

First, we learned that our visibility into the end-user experience of our CDN downloads was limited. In our own monitoring, we've identified that we can track increases in 206 response codes to indicate that such an issue may be occurring; when downloads hang, the client often attempts to reconnect and download the partial content it did not previously receive. This monitoring is now in place, and this information will lead to much quicker resolution in future such incidents.

Additionally, we will work with Cloudflare to increase our visibility into mitigations that are actively affecting traffic for Docker Hub.
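As a rough illustration of the kind of signal this gives, here is how you might count 206 (Partial Content) responses in a combined-format access log. The log lines below are fabricated for the example; in practice you would point this at your real CDN or registry access logs:

```shell
# Count HTTP 206 (Partial Content) responses in a combined-format access log.
# The sample lines are made up for illustration only.
cat > /tmp/access.log <<'EOF'
203.0.113.7 - - [06/Jul/2020:02:10:01 +0000] "GET /v2/library/nginx/blobs/sha256:abc HTTP/1.1" 200 5242880
203.0.113.8 - - [06/Jul/2020:02:10:04 +0000] "GET /v2/library/nginx/blobs/sha256:abc HTTP/1.1" 206 1048576
203.0.113.8 - - [06/Jul/2020:02:10:09 +0000] "GET /v2/library/nginx/blobs/sha256:abc HTTP/1.1" 206 1048576
EOF

# In the combined log format, field 9 is the HTTP status code.
partial=$(awk '$9 == 206' /tmp/access.log | wc -l | tr -d ' ')
echo "206 responses: $partial"
```

A sustained rise in that count relative to 200s is the "hanging download, client retrying with range requests" pattern described above.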

Lastly, this reaffirmed that the internet is a complicated place. This issue involved details all the way from low-level network implementation up to higher-order services that abstract away such things. Never underestimate the impact of every layer of your stack, and its dependencies on other parties.

We’d like to thank our partners at Cloudflare and AWS for their work in diagnosing the problem. It took all three parties working together to resolve this issue for our joint users. We know developers and operators rely on many different providers to power their systems, and the closer we work together, the better we can support you.

Deploying WordPress to the Cloud

I was curious the other day how hard it would be to set up my own blog, or rather how easy it now is to do this with containers. There are plenty of platforms that host blogs for you, but is it really now as easy to just run one yourself?

In order to get started, you can sign up for a Docker ID, or use your existing Docker ID to download the latest version of Docker Desktop Edge which includes the new Compose on ECS experience. 

Start with the local experience

To start, I set up a local WordPress instance on my machine, grabbing a Compose file example from the awesome-compose repo.

Initially I had a go at running this locally with Docker Compose:

$ docker-compose up -d

Then I can get the list of running containers:

$ docker-compose ps
Name Command State Ports
————————————————————————————–
deploywptocloud_db_1          docker-entrypoint.sh --def …   Up    3306/tcp, 33060/tcp
deploywptocloud_wordpress_1   docker-entrypoint.sh apach …   Up    0.0.0.0:80->80/tcp

And then lastly I had a look to see that this was running correctly:

Deploy to the Cloud

Great! Now I needed to look at the contents of the Compose file to understand what I would want to change when moving over to the cloud.

I am going to be running this in Elastic Container Service on AWS using the new Docker ECS integration in the Docker CLI. This means I will be using some of the new docker ecs compose commands to run things rather than the traditional docker-compose commands. (In the future we will be moving to just docker compose everywhere!)

version: "3.7"
services:
  db:
    image: mysql:8.0.19
    command: "--default-authentication-plugin=mysql_native_password"
    restart: always
    volumes:
      - db_data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
  wordpress:
    image: wordpress:latest
    ports:
      - 80:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress

volumes:
  db_data:

Normally here I would move my DB password into a secret, but secret support is still to come in the ECS integration, so for now we will keep our secret in our Compose file.
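Until then, one common pattern to at least keep the password out of the committed Compose file is the standard ${VAR} environment-variable interpolation that docker-compose performs, with the value stored in an untracked .env file. A sketch; the file names are my own choice, and it's worth verifying that the ECS integration resolves variables the same way:

```shell
# Keep the real password in an untracked .env file and reference it from the
# Compose file via ${VAR} interpolation (a standard docker-compose feature).
cat > .env <<'EOF'
MYSQL_ROOT_PASSWORD=somewordpress
EOF

cat > docker-compose.prod.yml <<'EOF'
version: "3.7"
services:
  db:
    image: mysql:8.0.19
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
EOF

# The literal password now lives only in .env; the Compose file carries
# just the placeholder.
grep MYSQL_ROOT_PASSWORD docker-compose.prod.yml
```

You would then add .env to .gitignore so the credential never lands in version control.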

The next step is to consider how we are going to run this in AWS; to continue, you will need to have an AWS account set up.

Choosing a Database service

Currently the Compose support for ECS in Docker doesn’t support volumes (please vote on our roadmap here), so we probably want to choose a database service to use instead. In this instance, let’s pick RDS. 

To start let’s open up our AWS console and get our RDS instance provisioned.

Here I have gone into the RDS section and I will choose the MySQL instance type to match what I was using locally and also choose the lowest tier of DB as that is all I think I need. 

I now enter the details of my DB making sure to note the password to include in my Compose file:

Great, now we need to update our Compose file to no longer use our local MySQL and instead use the RDS instance. For this I am going to make a 'prod' Compose file; I will also need to grab my DB host name from RDS.

Adapting our Compose File

I can now update my compose file by removing the DB running in a container and adding my environment information. 

version: "3.7"
services:
  wordpress:
    image: wordpress:latest
    ports:
      - 80:80
    restart: always
    environment:
      WORDPRESS_DB_HOST: wordpressdbecs.c1unvilqlnyq.eu-west-3.rds.amazonaws.com:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress123
      WORDPRESS_DB_NAME: wordpressdbecs

What we can see is that the Compose file is much smaller now as I am taking a dependency on the manually created RDS instance. We only have a single service (“wordpress”) and there is no more “db” service needed. 

Creating an ECS context and deploying

Now we have all the parts ready to deploy, we will start by setting up our ECS context by following these steps 

1. Create a new ECS context by running: docker ecs setup
2. We will be asked to name our context; I am just going to hit enter to name my context ecs.
3. We will then be asked to select an AWS profile. I don't already have the AWS extension installed, so I will select new profile and name it 'myecsprofile'.
4. I will then need to select a region; I am based in Europe so will enter eu-west-3 (make sure you do this in the same region you deployed your DB earlier!).
5. Now you will either need to enter an AWS access key, or, if you are already using something like AWS okta or an AWS CLI, you can say N here to use your existing credentials.
6. With all of this done you may still get the error message that you need to 'migrate to the new ARN format' (Amazon Resource Name). You can read more about this in the Amazon blog post here. To complete the change you will need to go to the console settings for your AWS account, move your opt-in over to an 'enabled' state, and then save the setting.
7. Let's now check that our ECS context has been successfully created by listing the available contexts using docker context ls.
8. With this all in place we can now use our new ECS context to deploy! We will need to set our ECS context as our current one: docker context use ecs

Then we will be able to have our first go at deploying our Compose application to ECS using the compose up command: docker ecs compose up

With this complete, we can check the logs of our WordPress deployment to see that everything is working correctly: docker ecs compose logs

It looks like our WordPress instance cannot access our DB. If we jump back into the AWS web console and have a look at our DB settings using the 'modify' button on the overview page, we can see in our security groups that our WordPress deployment is not included; only the default group is:

You should be able to see your container project name (I have a couple here from prepping this blog post a couple of times!). You will want to add this group in with the same project name and then save your changes so they apply immediately.

Checking our deployment

Now I run: docker ecs compose ps

From the command output I can grab the full URL, including my port, and navigate to my newly deployed site in the cloud using my web browser:

Great! Now we have two Compose files: one that lets us work on this locally, and one that lets us run it in the cloud with our RDS instance.

Cleaning resources

Remember, when you are done, if you don't want to keep your website running (and continue paying for it), use docker compose down; you may want to remove your RDS instance as well.

Conclusion

There you have it: we now have a WordPress instance we can deploy either locally with persistent state or in the cloud!

To get started, remember you will need the latest Edge version of Docker Desktop. If you want to do this from scratch, you can get started with the WordPress Official Image, or you could try this with one of the other Official Images on Docker Hub. And remember, if you want to run anything you have created locally in your ECS instance, you will need to have saved it to Docker Hub first. To start sharing your content on Hub, check out our get started guide for Hub.


Docker’s sessions at KubeCon 2020

In a few weeks, August 17-20, lots of us at Docker in Europe had been looking forward to hopping on the train down to Amsterdam for KubeCon CloudNativeCon Europe. But like every other event since March, this one is virtual, so we will all be joining remotely from home. Most of the sessions are pre-recorded with live Q&A, the format that we used at DockerCon 2020. As a speaker I really enjoyed this format at DockerCon; we got an opportunity to clarify and answer extra questions during the talk. It will be rather different from the normal KubeCon experience with thousands of people at the venue, though!

Our talks

Chris Crone has been closely involved with the CNAB (Cloud Native Application Bundle) project since its launch in late 2018. He will be talking about how to Simplify Your Cloud Native Application Packaging and Deployments, and will explain why CNAB is a great tool for developers. Packaging up entire applications into self-contained artifacts is a really useful technique, an extension of packaging up a single container. The tooling, especially Porter, has been making a lot of progress recently, so whether you heard about CNAB before and are wondering what has been happening, or you are new to it, this talk is for you.

On the subject of putting new things in registries, Silvin Lubecki and Djordje Lukic from our Paris team will be giving a talk about storing absolutely anything in a container registry: Sharing is Caring! Push Your Cloud Application to a Container Registry. The movement for putting everything into container registries is taking off; once they were just for containers, but now we are seeing Helm charts and lots more cloud native artifacts being put into registries. There are some difficulties, though, which Silvin and Djordje will help you out with.

I am giving a talk about working in security: How to Work in Cloud Native Security, Demystifying the Security Role. Have you ever wanted to work in security? It is a really interesting field with a real shortage of people, so if you are working in tech or about to start, I will talk about how to get into it. It is actually a surprisingly accessible and fascinating field.

Since the last KubeCon, Docker, Microsoft, Amazon and many others have been working on a new version of Notary, the CNCF project that provides a tool for signing containers. Together with Steve Lasker from Microsoft and Omar Paul from Amazon, we will cover the current progress and the roadmap in the Intro and Update session.

Finally I will be in the open CNCF meeting and public Q&A, which will be held live, along with Chris Aniszczyk, Liz Rice, Saad Ali, Michelle Noorali, Sheng Liang and Katie Gamanji. Come along and ask questions about the CNCF!

What about Docker Captains?

In addition, don't miss the talks from the Docker Captains. Lee Calcote is talking about the intricacies of service mesh performance and giving the introduction to the CNCF SIG Network. Adrian Mouat will be talking at the Cloud Native Security Day on day 0 of the conference, on Image Provenance and Security in Kubernetes.