Taking Your App Live with Docker and the Uffizzi App Platform

Tune in December 10th at 1pm EST for our live DockTalk: Simplify Hosting Your App in the Cloud with Uffizzi and Docker.

We’re excited to be working with Uffizzi on this joint blog. Docker and Uffizzi have very similar missions that naturally complement one another: Docker helps you bring your ideas to life by reducing the complexity of application development, and Uffizzi helps you bring your ideas to life by reducing the complexity of cloud application hosting. This blog is a step-by-step guide to setting up automated builds from your GitHub repo via Docker Hub and enabling Continuous Deployment to your Uffizzi app hosting environment.

Prerequisites

To complete this tutorial, you will need the following:

- A free Docker account – you can sign up for a free Docker account and receive unlimited free public repositories
- An IDE or text editor to use for editing files – I would recommend VS Code
- A free Uffizzi App Platform account
- A free GitHub account

Docker Overview

Docker is an open platform for developing, shipping, and running applications. Docker containers separate your applications from your infrastructure so you can deliver software quickly. 

With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

Uffizzi App Platform Overview

Uffizzi is a Docker-centric cloud application platform. Uffizzi helps developers by reducing the complexity of hosting their apps in the cloud. Uffizzi automates over a dozen cloud processes and provides push-button app hosting environments that are reliable, scalable, and secure.

With Uffizzi you can set up and deploy your application directly from Docker Hub or, as we’ll show in this blog, from GitHub through Docker Hub’s automated build process. Uffizzi is built on the open source container orchestrator Kubernetes and lets you leverage this powerful tool without the complexity of managing cloud infrastructure.

Fork and Clone Example Application

We’ll use an example “Hello World” application for this demonstration, but you can use this workflow with any app that answers HTTP requests.

Log in to your GitHub account and “fork” your own copy of this example repository: https://github.com/UffizziCloud/hello-world

To fork the example repository, click the Fork button in the header of the repository:

Wait just a few moments for GitHub to copy everything into your account. When it’s finished, you’ll be taken to your forked copy of the example repository. You can read more about forking GitHub repositories here: https://guides.github.com/activities/forking/

Of course, to actually make any changes you’ll need to “clone” your new Git repository onto your workstation. This will look a little different depending on which operating system your workstation is running, but once you have Git installed, `git clone` will usually succeed. The green “Code” button in your repository header provides the URL to clone. You could also use GitHub’s desktop application. You can learn more about Git here: https://guides.github.com/introduction/git-handbook/
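From a terminal, the clone step looks roughly like the sketch below. The username is a placeholder you must replace with your own GitHub account; the repository name matches the fork above.

```shell
# Hypothetical values – substitute your own GitHub username.
GITHUB_USER="your-username"
REPO_URL="https://github.com/${GITHUB_USER}/hello-world.git"
echo "$REPO_URL"

# On your workstation you would then run:
#   git clone "$REPO_URL"
#   cd hello-world
```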

Review Code and Dockerfile

Confirm that you have a viable clone by reviewing some of the files within, especially the `Dockerfile`. This file is required to build a Docker image for any application, and it will later be used to automatically build and deploy your new container images. This example `Dockerfile` is extremely simple; your application may require a more sophisticated `Dockerfile`. You can read more about `Dockerfile` anatomy here: https://docs.docker.com/engine/reference/builder/
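For orientation, a minimal `Dockerfile` for a static “Hello World” page might look like the sketch below. This is an illustrative guess, not the exact contents of the example repo – check the repository’s actual `Dockerfile` before relying on it.

```dockerfile
# Illustrative only – serve a static page with nginx on port 80.
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80
```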

Create Private Docker Hub Repository

Next, we need somewhere for those built images to reside where Uffizzi can find them. Log in to your Docker Hub account and click on Repositories and then Create Repository.

Be sure to create a Private Repository (not Public) for later Continuous Deployment to Uffizzi. 

Now is a good time to link your GitHub account and add a Build Rule to configure automatic building. Click the GitHub logo and authorize Docker to view your GitHub repository. Then click the blue plus sign and create a Build Rule for your `master` branch.

Once you click “Create & Build” you can navigate to your new repository and select the “Builds” tab to see it working (see below screenshot). Once it finishes, your application is ready to deploy on Uffizzi! You can read more about linking GitHub accounts and automatic builds here: https://docs.docker.com/docker-hub/builds/

Setting Up Your Uffizzi App Hosting Environment

Go to https://uffizzi.com and sign up for a free account – no credit card required. From the dashboard, choose “Get Started Now”. Next, choose an environment for your app – “Free” is appropriate for this tutorial. For the name, you can keep the default or call it “Continuous Deployment Demo” if you’d like. At the “Import Your Application” step, choose Docker Hub and log in to your Docker Hub account.

Once authenticated with Docker Hub, select the repo you created for this demo. This is the repository that Docker Hub will later push your updated image to, kicking off the continuous deployment process to your Uffizzi app hosting environment. After choosing the repo, select your image under “My Images”. You should now be able to indicate the port number your container listens on – for this tutorial it is `80`.

We are not connecting to any other services or databases for this demo, so environment variables are not required. Now select “Import”. (Note: if you selected an environment other than “Free”, there will be an option to add a database – you can choose “Skip”; a database is not required for this tutorial.)

Now you should see your shopping cart with your `hello-world` image. Go ahead and hit the “Deploy” button. Uffizzi will take a few minutes to automate about a dozen cloud processes, from allocating Kubernetes resources to scheduling your container, configuring load-balanced networking, and securing your environment. You can “Explore your environment” while you wait.

When these steps are complete, you will see your container “Running” and you should also see “Continuous Deployment” enabled. Go ahead and click “Open application” to see the application live in your web browser. Later in this tutorial we will come back to this browser tab to see the updates we push from our repository. You can later configure HTTPS encryption and add a custom domain within the Uffizzi UI, but that’s not necessary for this demo.

Demonstrate Continuous Deployment

Now we can demonstrate Continuous Deployment on Uffizzi. Make a small change to `index.html` within your workstation’s cloned Git repository, then `git push` it up to GitHub. Because we connected your GitHub, Docker, and Uffizzi accounts, your changes will automatically deploy to Uffizzi within a new Docker image. This may take a few minutes; check the status in your Docker Hub “Builds” tab.
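The change itself can be as small as one line of HTML. The sketch below edits a stand-in copy of the page in a scratch directory (the file contents and commit message are just examples; only the `git push` matters for triggering the build):

```shell
# Work on a stand-in copy in a scratch directory so this sketch is safe to run anywhere.
mkdir -p /tmp/hello-world-demo
printf '<h1>Hello World</h1>\n' > /tmp/hello-world-demo/index.html

# Make a small, visible edit to the page.
sed 's/Hello World/Hello from Continuous Deployment/' \
  /tmp/hello-world-demo/index.html > /tmp/hello-world-demo/index.html.new
mv /tmp/hello-world-demo/index.html.new /tmp/hello-world-demo/index.html
cat /tmp/hello-world-demo/index.html

# In your real clone, you would then commit and push to kick off the build:
#   git add index.html
#   git commit -m "Update greeting"
#   git push origin master
```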

Confirm Your Update is Live

Now we can see the updates we just made to our application live on Uffizzi! Once you set up your Uffizzi App Hosting environment with Continuous Deployment, you don’t have to do anything within Uffizzi to push your code updates. The goal is to make it easy so you can focus on your application!


Conclusion

In this post, we learned about creating private repositories and setting up automated builds with Docker Hub. Next, we covered how to deploy our Docker image directly from our Docker Hub private repository into a Uffizzi App Hosting environment. Once our application was live on Uffizzi, we ensured “Continuous Deployment” was enabled. This allows Uffizzi to watch our connected Docker Hub repository and automatically deploy new images built there. We then updated our demo app on our workstation and deployed it to Uffizzi Cloud by executing a `git push` command. This push initiated an automated sequence that took our app from new code pushed to GitHub, to a Docker image on Docker Hub, to a deployed Uffizzi App Hosting environment.

If you have any Docker-related questions, please feel free to reach out on Twitter @pmckee and join us in our community Slack.

If you have any Uffizzi-related questions, please feel free to reach out on Twitter to @uffizzi_ and join us in the Uffizzi users community Slack – Grayson Adkins (grayson.adkins@uffizzi.cloud) or Josh Thurman (josh.thurman@uffizzi.cloud).
The post Taking Your App Live with Docker and the Uffizzi App Platform appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Rate Limiting Questions? We have answers

As we have been implementing rate limiting on Docker Hub for free anonymous and authenticated image pulls, we’ve heard a lot of questions from our users about how this will affect them. And we’ve also heard a number of statements that are inaccurate or misleading about the potential impacts of the change. I want to provide some answers here to help Docker users clearly understand the changes, quantify what is involved, and help developers choose the right Docker subscription for their needs.

First let’s look at the realities of what rate limiting looks like, and quantify what is still available for free to authenticated Docker users. Anyone can use a meaningful number of Docker Hub images for free. Anonymous, unauthenticated Docker users get 100 container pull requests per six hours. And when a user signs up for a free Docker ID, they get 2X the quantity of pulls. At 200 pulls per six hours, that is approximately 24,000 container image pulls per month per free Docker ID. This egress level is adequate for the bulk of the most common Docker Hub usage by developers. (Docker users can check their usage levels at any time through the command line. Docker developer advocate Peter McKee shows how to get your usage stats in this blog.)
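The arithmetic behind that monthly figure is simple – four six-hour windows per day, times roughly 30 days per month:

```shell
# 200 pulls per 6-hour window for a free Docker ID:
windows_per_day=$((24 / 6))                 # 4 six-hour windows per day
pulls_per_day=$((200 * windows_per_day))    # 800 pulls per day
pulls_per_month=$((pulls_per_day * 30))     # ~24,000 pulls per month
echo "$pulls_per_month"
```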

Here is the schedule for final implementation of the rate limits for unauthenticated and free Docker Hub users: these do NOT apply to Docker Pro and Team Subscribers:

Date         Spike Hours (PST)    Anonymous Limit (per 6 hours)    Free Limit w/ Docker ID (per 6 hours)
11/12/2020   No Spike             500                              500
11/16/2020   3am–3pm (12 hrs)     250                              250
11/18/2020   No Spike (final)     100                              200

Images hosted on Docker Hub range in size from megabytes to gigabytes, so many common container images will quickly consume multiple GB of repository data in a CI/CD pipeline. With Docker Hub, you don’t have to worry about the size of images being pulled, but rather you can focus on the frequency and contents of your builds instead. And not all repositories are created equal: Docker Hub features Docker Official and Docker Certified images of hundreds of popular packages, so you can be confident in official images as being vetted by Docker before incorporating into your CI/CD pipeline. 

Mirror, Mirror

Another common question we get is about using an internal mirror to pull images for an organization. This is absolutely a supported deployment model and a best practice that allows your developers to access the images they use the most from within your firewall. In this case all you would need to do is create a Docker Team account and assign one of the users as a service account to pull the images needed. Mirroring can be done with a range of technologies including Docker Registry.

Our engineering team is reaching out to users who are making an unusually high number of requests to Docker Hub. In many cases, the excessive use is a result of misconfigured code that hasn’t been caught previously. For many of these users, once their code is remediated, the quantity of requests decreases dramatically.

Designed for Developers

Along with a different approach to measuring pulls vs image sizes, we also believe our pricing model is right for developers. Docker subscription pricing is straightforward and predictable. Starting at $5 per month, per user, developers and dev teams can sign up, budget, and grow at a rate that is predictable and transparent. Many of the “free” offerings out there instead bill against cloud resources consumed by images being stored or consumed. Budgeting for developer seats against variable resources can be a challenge with this model: we believe our model is both easier to understand and budget, and delivers meaningful value to subscribers. 

We also recognize the needs of Open Source Software projects, who need advanced capabilities and greater throughput while operating as non-profit entities. We are committed to open source and providing sustainable solutions for open source projects. For qualifying projects, we announced our Open Source Software program so these projects can use Docker resources without charge. To date, over 40 projects of all sizes have been approved as part of this program. OSS teams can submit their projects for consideration through this form.

Finally, Docker Subscriptions are about more than just additional access to container pulls. Last week we announced a number of new features for Docker Pro and Team subscribers, including enhanced support and access to vulnerability scan results in Docker Desktop. Docker subscribers will continue to see new capabilities and functionality added every month. For a good overview of the features available to Docker Pro and Team subscribers, visit the Docker Pricing Page. 

I want to thank the Docker community for their support during this transition. Reducing excessive use of resources allows us to build a sustainable Docker for the long term. Not only does this help us invest more money into being responsible stewards of the Docker project, but it also improves performance and stability of Docker resources for all of our users.

Additional resources:

- Docker Pricing Overview
- Docker Hub Rate Limiting Information Page

Combining Snyk Scans in Docker Desktop and Docker Hub to Deploy Secure Containers

Last week, we announced that the Docker Desktop Stable release includes vulnerability scanning, the latest milestone in our container security solution that we are building with our partner Snyk. You can now run Snyk vulnerability scans directly from the Docker Desktop CLI.  Combining this functionality with Docker Hub scanning functionality that we launched in October provides you with the flexibility of including vulnerability scanning along multiple points of your development inner loop, and provides better tooling for deploying secure applications.

You can decide whether to run your first scans from the Desktop CLI or from the Hub. Customers who have used Docker for a while tend to prefer starting from the Hub. The easiest way to jump in is to configure your Docker Hub repos to automatically trigger scanning every time you push an image into a repo. This option is configurable per repository, so you can decide how to onboard these scans into your security program. (Docker Hub vulnerability scanning is available only to Docker Pro and Team subscribers; for more information about subscriptions, visit the Docker Pricing Page.)

Once you enable scanning, you can view the scanning results either in the Docker Hub, or directly from the Docker Desktop app as described in this blog. 

From the scan results summary you can drill down to view more detailed data for each scan and get more information about each vulnerability type. The most useful part of the vulnerability data is Snyk’s recommendation on how to remediate the detected vulnerability, and whether a higher package version is available in which the specific vulnerability has already been addressed.

Detect, Then Remediate 

If you are viewing vulnerability data from Docker Desktop, you can start remediating vulnerabilities and testing remediations directly from your CLI. Triggering scans from Docker Desktop is simple – just run `docker scan`, and you can run iterative tests that confirm successful remediation before pushing the image back to the Hub.

For new Docker users, consider running your first scans from the Desktop CLI. Docker Desktop Vulnerability Scanning CLI Cheat Sheet is a fantastic resource for getting started.  

The CLI Cheat Sheet starts from the basics, which are also described in the Docker Documentation page on Vulnerability scanning for Docker local images – including steps for running your first scans, a description of the vulnerability information included with each scan result, and docker scan flags that help you specify the scan results you want to view. Some of these docker scan flags are:

- --dependency-tree – display the list of all underlying package dependencies that include the reported vulnerability
- --exclude-base – run an image scan without reporting vulnerabilities associated with the base layer
- -f – include the vulnerability data for the associated Dockerfile
- --json – display the vulnerability data in JSON format

The really cool thing about this Cheat Sheet is that it shows you how to combine these flags to create a number of options for viewing your data:

Show only high severity vulnerabilities from layers other than the base image:

$ docker scan myapp:mytag --exclude-base --file path/to/Dockerfile --json | jq '[.vulnerabilities[] | select(.severity=="high")]'

Show high severity vulnerabilities with a CVSSv3 network attack vector:

$ docker scan myapp:mytag --json | jq '[.vulnerabilities[] | select(.severity=="high") | select(.CVSSv3 | contains("AV:N"))]'

Show high severity vulnerabilities with a fix available:

$ docker scan myapp:mytag --json | jq '[.vulnerabilities[] | select(.nearestFixedInVersion) | select(.severity=="high")]'
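If you want to experiment with these jq filters before running a live scan, you can save scan output to a file first (`docker scan myapp:mytag --json > scan.json`) and filter it offline. The snippet below uses a tiny fabricated stand-in for real scan output, purely to illustrate the shape of the data:

```shell
# Fabricated, minimal stand-in for `docker scan ... --json` output.
cat > /tmp/scan.json <<'EOF'
{"vulnerabilities":[
  {"id":"SNYK-EXAMPLE-1","severity":"high"},
  {"id":"SNYK-EXAMPLE-2","severity":"low"}
]}
EOF

# Quick check without jq: count high-severity entries.
high_count=$(grep -c '"severity":"high"' /tmp/scan.json)
echo "$high_count"

# With jq installed, the equivalent cheat-sheet filter would be:
#   jq '[.vulnerabilities[] | select(.severity=="high")]' /tmp/scan.json
```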

Running CLI scans and remediating vulnerabilities before you push your images to the Hub reduces the number of vulnerabilities reported in the Hub scan, providing your team with a faster and more streamlined build cycle.

To learn more about running vulnerability scans on Docker images, you can watch the “Securing Containers Directly from Docker Desktop” session presented during SnykCon. This is a joint presentation by Justin Cormack, Docker security lead, and Danielle Inbar, Snyk product manager, discussing what you can do to leverage this new solution in your organization’s security program.

Compose CLI ACI Integration Now Available

Today we are pleased to announce that we have reached a major milestone: GA and V1 of both the Compose CLI and the ACI integration.

In May we announced the partnership between Docker and Microsoft to make it easier to deploy containerized applications from the Desktop to the cloud with Azure Container Instances (ACI). We are happy to let you know that all users of Docker Desktop now have the ACI experience available to them by default, allowing them to easily use existing Docker commands to deploy and manage containers running in ACI. 

As part of this, I also want to thank the MSFT team who worked with us to make this all happen! A big thank you to Mike Morton, Karol Zadora-Przylecki, Brandon Waterloo, MacKenzie Olson, and Paul Yuknewicz.

Getting started with Docker and ACI 

To get going, all you need to do is upgrade your existing Docker Desktop to the latest stable version (2.5.0.0 or later), store your image on Docker Hub so you can deploy it (you can get started with Hub here), and then create an ACI context to deploy it to.
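Condensed into commands, the setup looks roughly like this. The context name is arbitrary, and the block below only echoes the commands unless you set RUN=1 (running them for real requires Docker Desktop 2.5.0.0+ and an Azure subscription):

```shell
# Dry-run helper: print each command unless RUN=1 is set in the environment.
run() {
  if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

run docker login azure                       # authenticate against Azure
run docker context create aci myacicontext   # create an ACI context (name is arbitrary)
run docker context use myacicontext          # make it the active context
```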

We have done a few blog posts now on the different types of things you can achieve with the ACI integration. 

- Getting started locally using Docker & ACI
- Paul’s blog over at MSFT on getting started with Docker & ACI
- Deploying a Minecraft server & using volumes with ACI
- Deploying to ACI as part of GitHub Actions for CD
- Docker & ACI integration into VSCode
- Docker’s docs on the integration

If you have other questions on the experience or would like some other guides then drop us a message in the Compose CLI repo so we can update our docs. 

What’s new in V1.0 

Since the last release of the CLI, we have added a few new commands to make it easier to manage your working environment and simpler to see what you can clean up to save money on resources you are not using.

To start, we have added a volume inspect command alongside volume create to allow you better management of your volumes:

We are also very excited about our new top-level prune command, which lets you clean up your ACI working environment and manage your costs.

docker prune --help

We have also added a couple of interesting flags here: the --dry-run flag lets you see what would be cleaned up:

(OK I am not running a great deal here!) 

As you can see, this also lets you know the amount of compute resources you will be reclaiming. At the end of a development session, being able to do a force prune allows you to remove ‘all the things you have run’, giving you confidence that you won’t have left something running and get an unexpected bill.

Lastly, we have started to add a few more flags based on your feedback; a couple of examples are the addition of --format json and --quiet in the ps, context ls, compose ls, compose ps, and volume ls commands to output JSON or single IDs.

We are really excited about the new experience we have built with ACI. If you have any feedback on the experience or ideas for other backends for the Compose CLI, please let us know via our Public Roadmap.

Updates on Hub Rate Limits, Partners and Customer Exemptions

The gradual enforcement of Docker Hub rate limits on container image pulls for anonymous and free users began Monday, November 2nd. The next three-hour enforcement window takes place Wednesday, November 4th from 9am to 12 noon Pacific time. During this window, the eventual final limits of 100 container pull requests per six hours for unauthenticated users and 200 for free users with Docker IDs will be enforced. After that window, the limit will rise to 2,500 container pull requests per six hours.

As we implement this policy, we are looking at the core technologies, platforms and tools used in app pipelines to ensure a transition that supports developers across their entire development lifecycle. We have been working with leading cloud platforms, CI/CD providers and other ISVs to ensure their customers and end users who use Docker have uninterrupted access to Docker Hub images. Among these partners are the major cloud hosting providers, CI/CD vendors such as CircleCI, and OSS entities such as Apache Software Foundation (ASF). You can find more information about programs on our Pricing Page as well as links to contact us for information about programs for ISVs and companies with more than 500 users. 

Besides the Apache Software Foundation, we are working with many Open Source Software projects from Cloud Foundry and Jenkins to many other open source projects of all sizes, so they can freely use Docker in their project development and distribution. Updates and details on the program are available in this blog from Docker’s Marina Kvitnitsky. 

We have assembled a page of information updates, as well as relevant resources to understand and manage the transition, at https://www.docker.com/increase-rate-limits.

We’ve had a big week delivering new features and integrations for developers. Along with the changes outlined above, we also announced new vulnerability scan results incorporated into Docker Desktop, a faster, integrated path into production from Desktop into Microsoft Azure, and improved support for Docker Pro and Team subscribers. We are singularly focused on creating a sustainable, innovative company focused on the success of developers and development teams, both today and tomorrow, and we welcome your feedback.

Expanded Support for Open Source Software Projects

Docker remains committed to providing a platform where non-commercial open source developers can continue collaborating, innovating, and pushing this industry in new directions.

In August, we announced to our dedicated community and ecosystem that we are creating new policies for image retention and data pull rates. We made these changes to make Docker a sustainable business for the long term, so that we can continue supporting the developer community and ecosystem that depends on the Docker platform. We got great feedback from our extensive user base, and in response delayed the image retention policy until mid-2021. The plan for data pull rates is moving forward: starting today, limits will be gradually enforced, with the plan to be fully applied in the coming weeks. The final limits will be:

- Unauthenticated users will be restricted to 100 pulls every 6 hours
- Authenticated free users will be restricted to 200 pulls every 6 hours

To support the open source community, Docker has created a special program for Open Source projects to get continued free access and freedom from restrictions for their communities and their users. For approved, non-commercial, open source projects, we are thrilled to announce that we will suspend data pull rate restrictions: no egress restrictions will apply to any Docker users pulling images from the approved OSS namespaces.

Open Source Project Qualification Criteria

To qualify for Open Source Program status, all the repos within the Publisher’s Docker namespace must:

- Be public and non-commercial
- Meet the Open Source Initiative (OSI) definition (shown here), including definitions for free distribution, source code, derived works, integrity of source code, licensing, and no tolerance for discrimination
- Distribute images under an OSI-approved open source license
- Produce Docker images used to run applications

Review and Approval

The process for applying for Open Source status is summarized below:

1. The Publisher submits the Open Source Community Application form.
2. Docker reviews the form and determines whether the Publisher qualifies for open source status.
3. If the Publisher qualifies, Docker will waive the pull rate policy for the Publisher’s namespace for a period of one year.
4. Every 12 months, Docker will review whether the Publisher’s namespace still meets the Docker Open Source Program criteria and, if so, extend the Open Source Program status for another 12 months. Docker may, at its discretion, also review eligibility criteria within the 12-month period.
5. The Publisher may have other namespaces that either partially comply or do not comply with open source policy requirements; these will not qualify for open source status.

Joint Promotional Programs

In order to support the OSS community, Docker will collaborate on joint promotional programs with Open Source Program publishers, including blogs, webinars, solution briefs, and other collateral. The OSS partner also agrees to become a Docker public reference and display links to Docker on all the appropriate pages. As Chris Clark, Technical Operations Manager, Cloud Foundry Foundation, said, “DockerHub has been a de-facto standard for open source container image distribution for years now, and the continued support of Docker to provide this resource to open source communities is very much appreciated.”

We created the Docker Open Source Program this summer, and so far more than 40 organizations from around the world have qualified. These organizations run the gamut in size, from the Apache Software Foundation (ASF), Cloud Foundry, and Jenkins to numerous smaller projects. They work on projects as diverse as new programming languages, frameworks for machine learning, software packages for analyzing molecular data, gaming engines, and projects for civic tech. We are privileged to work with such great global talent, including groups from the US, France, Switzerland, the Netherlands, India, Singapore, New Zealand, and South Africa.

Open Source projects can apply for the Docker Open Source Program by filling out this form.