Docker’s Response to the Invasion of Ukraine

Docker is closely following the events surrounding the Russian invasion of Ukraine. The community of Docker employees, Docker Captains, developers, customers, and partners is committed to creating an open, collaborative environment that fosters the free and peaceful exchange of ideas. The tragedy unfolding in Ukraine is in opposition to what our community stands for and weighs heavily on our minds and hearts.

Docker stands with the members in our Ukrainian community and the sovereign nation of Ukraine. As the situation continues to evolve, we want to provide an update on Docker’s response. We will not do business with Russian companies during this period. As such, we have removed the ability to purchase Docker subscriptions from Russia and Belarus. We are continuing to monitor the situation and will keep you informed with updates from Docker. 

Additionally, we are committed to supporting Ukraine’s fight for continued sovereignty and independence. On behalf of all Docker employees, we are making donations to UNICEF, Razom, and Doctors Without Borders, earmarked to help Ukrainian citizens.

#StandWithUkraine

The post Docker’s Response to the Invasion of Ukraine appeared first on Docker Blog.

How Kubernetes works under the hood with Docker Desktop

Docker Desktop makes developing applications for Kubernetes easy. It provides a smooth Kubernetes setup experience by hiding the complexity of the installation and wiring with the host. Developers can focus entirely on their work rather than dealing with the Kubernetes setup details. 

This blog post covers development use cases and what happens under the hood for each one of them. We analyze how Kubernetes is set up to facilitate the deployment of applications, whether they are built locally or not, and the ease of access to deployed applications.

1. Kubernetes setup

Kubernetes can be enabled from the Kubernetes settings panel as shown below.

Checking the Enable Kubernetes box and then pressing Apply & Restart triggers the installation of a single-node Kubernetes cluster. This is all a developer needs to do.

What exactly is happening under the hood? 

Internally, the following actions are triggered in the Docker Desktop Backend and VM:

- Generation of certificates and cluster configuration
- Download and installation of Kubernetes internal components
- Cluster bootup
- Installation of additional controllers for networking and storage

The diagram below shows the interactions between the internal components of Docker Desktop for the cluster setup.

Generating cluster certs, keys and config files

Kubernetes requires certificates and keys for authenticated connections between its internal components and with external clients. Docker Desktop takes care of generating these server and client certificates for the main internal services: the kubelet (node manager), service account management, the front proxy, the API server, and etcd.

Docker Desktop installs Kubernetes using kubeadm, so it needs to create the kubeadm runtime and cluster-wide configuration. This includes configuration for the cluster’s network topology, certificates, control plane endpoint, etc. It uses Docker Desktop-specific naming and is not customizable by the user. The current-context, user, and cluster names are always set to docker-desktop, while the global endpoint of the cluster uses the DNS name kubernetes.docker.internal at https://kubernetes.docker.internal:6443. Port 6443 is the default port the Kubernetes control plane binds to. Docker Desktop forwards this port on the host, so communication with the control plane works as if it were installed directly on the host.
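You can verify this wiring yourself from a terminal on the host. A quick check looks something like the following (output abridged; the exact wording varies by kubectl version):

$ kubectl config current-context
docker-desktop

$ kubectl cluster-info
Kubernetes control plane is running at https://kubernetes.docker.internal:6443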

Download and installation of Kubernetes components 

Inside the Docker Desktop VM, a management process named the Lifecycle service takes care of deploying and starting services such as the Docker daemon, and of notifying other components when their state changes.

Once the Kubernetes certificates and configuration have been generated, a request is made to the Lifecycle service to install and start Kubernetes. The request contains the required certificates (Kubernetes PKI) for the setup.

The Lifecycle service then pulls all the images of the Kubernetes internal components from Docker Hub. These images contain binaries such as kubelet, kubeadm, kubectl, and crictl, which are extracted and placed in `/usr/bin`.

Cluster bootup

Once these binaries are in place and the configuration files have been written to the right paths, the Lifecycle service runs `kubeadm init` to initialize the cluster and then starts the kubelet process. As this is a single-node cluster setup, only one kubelet instance runs.
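For illustration only, a kubeadm bootstrap of this kind looks roughly like the sketch below. Docker Desktop performs the equivalent internally; the config file path here is an assumption, not the path Docker Desktop actually uses:

# Hypothetical invocation; the --config file carries the generated cluster configuration.
$ kubeadm init --config /etc/kubeadm/kubeadm.yaml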

The Lifecycle service then waits for the following system pods to be running before notifying the Docker Desktop host service that Kubernetes has started: coredns, kube-controller-manager, and kube-apiserver.
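A conceptually similar readiness check can be run by hand once the cluster is up. This is a sketch of the idea, not the exact mechanism the Lifecycle service uses:

$ kubectl wait --namespace kube-system --for=condition=Ready pods --all --timeout=120s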

Install additional controllers

Once the Kubernetes internal services have started, Docker Desktop triggers the installation of additional controllers such as storage-provisioner and vpnkit-controller. These controllers handle persisting application state across reboots and upgrades, and provide access to applications once they are deployed.

Once these controllers are up and running, the Kubernetes cluster is fully operational and the Docker Dashboard is notified of its state.

We can now run kubectl commands and deploy applications.

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1m

Checking the system pods at this stage should return the following:

$ kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-78fcd69978-7m52k                 1/1     Running   0          99m
coredns-78fcd69978-mm22t                 1/1     Running   0          99m
etcd-docker-desktop                      1/1     Running   1          99m
kube-apiserver-docker-desktop            1/1     Running   1          99m
kube-controller-manager-docker-desktop   1/1     Running   1          99m
kube-proxy-zctsm                         1/1     Running   0          99m
kube-scheduler-docker-desktop            1/1     Running   1          99m
storage-provisioner                      1/1     Running   0          98m
vpnkit-controller                        1/1     Running   0          98m

2. Deploying and accessing applications

Let’s take as an example a Kubernetes YAML file for the deployment of docker/getting-started, the Docker Desktop tutorial. This is a generic Kubernetes YAML file deployable anywhere; it does not contain any Docker Desktop-specific configuration.


apiVersion: v1
kind: Service
metadata:
  name: tutorial
spec:
  ports:
    - name: 80-tcp
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    com.docker.project: tutorial
  type: LoadBalancer
status:
  loadBalancer: {}


apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    com.docker.project: tutorial
  name: tutorial
spec:
  replicas: 1
  selector:
    matchLabels:
      com.docker.project: tutorial
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        com.docker.project: tutorial
    spec:
      containers:
        - image: docker/getting-started
          name: tutorial
          ports:
            - containerPort: 80
              protocol: TCP
          resources: {}
      restartPolicy: Always
status: {}

On the host of Docker Desktop, open a terminal and run:

$ kubectl apply -f tutorial.yaml
service/tutorial created
deployment.apps/tutorial created

Check services:

$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        118m
tutorial     LoadBalancer   10.98.217.243   localhost     80:31575/TCP   12m

Services of type LoadBalancer are exposed outside the Kubernetes cluster. Opening a browser and navigating to localhost:80 displays the Docker tutorial.
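The same check works from the command line. Assuming the service is up, a request to localhost should succeed:

$ curl -sI localhost:80 | head -n 1
HTTP/1.1 200 OK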

Notice that accessing the service is trivial, just as if it were running directly on the host. Developers do not need to concern themselves with any additional configuration.

This is because Docker Desktop takes care of exposing service ports on the host, making them directly accessible there. This is done via one of the additional controllers installed previously.

Vpnkit-controller is a port-forwarding service which opens ports on the host and forwards connections transparently to the pods inside the VM. It is used to forward connections to LoadBalancer-type services deployed in Kubernetes.
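You can confirm the controller is running by querying its pod in the kube-system namespace (the pod name matches the listing shown earlier):

$ kubectl get pod vpnkit-controller -n kube-system
NAME                READY   STATUS    RESTARTS   AGE
vpnkit-controller   1/1     Running   0          99m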

3. Speed up the develop-test inner loop

We have seen how to deploy and access an application in the cluster. However, the development cycle consists of developers modifying the code of an application and testing it continuously. 

Let’s take as an example an application we are developing locally. 

$ cat main.go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Println(r.URL.RawQuery)
	fmt.Fprintf(w, `
        ##         .
  ## ## ##        ==
 ## ## ## ## ##   ===
/"""""""""""""""""\___/ ===
{                 /  ===-
\______ O        __/
 \    \         __/
  \____\_______/

Hello from Docker!
`)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":80", nil))
}

The Dockerfile to build and package the application as a Docker image:

$ cat Dockerfile
FROM golang:1.16 AS build

WORKDIR /compose/hello-docker
COPY main.go main.go
RUN CGO_ENABLED=0 go build -o hello main.go

FROM scratch
COPY --from=build /compose/hello-docker/hello /usr/local/bin/hello
CMD ["/usr/local/bin/hello"]

To build the application, we run docker build as usual:

$ docker images
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE

$ docker build -t hellodocker .
[+] Building 0.9s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 38B 0.0s
. . .
=> => naming to docker.io/library/hellodocker 0.0s

We can see the image resulting from the build stored in the Docker engine cache.

$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED       SIZE
hellodocker   latest   903fe47400c8   4 hours ago   6.13MB

But now we have a problem!

Kubernetes normally pulls images from a registry, which would mean we would have to push and pull the image we have built after every change. Docker Desktop removes this friction by using dockershim to share the image cache between the Docker Engine and Kubernetes. Dockershim is an internal component of Kubernetes that acts as a translation layer between the kubelet and Docker Engine.

For development, this provides an essential advantage: Kubernetes can create containers from images stored in the Docker Engine image cache. We can build images locally and test them right away without having to push them to a registry first. 
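As a quick smoke test of the shared cache, you can run a throwaway pod straight from the locally built image. Assuming hellodocker exists in the Docker Engine cache, no registry pull is needed:

$ kubectl run hello --image=hellodocker --image-pull-policy=IfNotPresent --restart=Never
pod/hello created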

In the Kubernetes YAML from the tutorial example, update the image name to hellodocker and set the image pull policy to IfNotPresent. This ensures that the image from the local cache is used.


containers:
  - name: hello
    image: hellodocker
    ports:
      - containerPort: 80
        protocol: TCP
    resources: {}
    imagePullPolicy: IfNotPresent
restartPolicy: Always

Re-deploying applies the new updates:

$ kubectl apply -f tutorial.yaml
service/tutorial configured
deployment.apps/tutorial configured

$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
tutorial     LoadBalancer   10.109.236.243   localhost     80:31371/TCP   4s
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        6h56m

$ curl localhost:80

        ##         .
  ## ## ##        ==
 ## ## ## ## ##   ===
/"""""""""""""""""\___/ ===
{                 /  ===-
\______ O        __/
 \    \         __/
  \____\_______/

Hello from Docker!

To delete the application from the cluster run:

$ kubectl delete -f tutorial.yaml

4. Updating Kubernetes

The Kubernetes version is bundled with Docker Desktop, so it can be upgraded after a Docker Desktop update. However, when a new Kubernetes version is added to Docker Desktop, the user needs to reset their current cluster in order to use the newest version.

As pods are designed to be ephemeral, deployed applications usually save their state to persistent volumes. This is where the storage-provisioner helps, by persisting the local storage data.
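You can see this integration by listing the cluster’s storage classes. On a typical Docker Desktop install, the provisioner registers a default hostpath class (output abridged; names may vary across versions):

$ kubectl get storageclass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   99m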

Conclusion

Docker Desktop offers a Kubernetes installation with solid host integration that aims to work without any user intervention. Developers who need a Kubernetes cluster without concerning themselves with its setup can simply install Docker Desktop and enable the bundled Kubernetes cluster to have everything in place in a matter of minutes.

To get Docker Desktop, follow the instructions in the Docker documentation. It also contains a dedicated guide on how to enable Kubernetes.

Join us at DockerCon 2022

DockerCon is the world’s largest development conference of its kind, and it’s coming to you virtually and completely free on May 10th, 2022. DockerCon 2022 is an amazing opportunity for you and your developers to learn directly from the community; get tips, tricks, and best practices that will elevate your Docker knowledge; and learn about what’s coming up on the Docker roadmap. You can register for DockerCon now; pre-registration is free and open.
The post How Kubernetes works under the hood with Docker Desktop appeared first on Docker Blog.

What you need to know about macOS X 10.14 Deprecation

Docker supports Docker Desktop on the most recent versions of macOS: that is, the current release of macOS and the previous two releases. As new major versions of macOS become generally available, Docker stops supporting the oldest version and supports the newest version of macOS (in addition to the previous two releases). In keeping with our version support policy, Docker Desktop expanded macOS version support with Apple’s launch of macOS Monterey (12) in October of 2021 and dropped support for macOS Mojave (10.14).

Currently, less than 3% of Docker Desktop users on version 4.0 or above are on macOS 10.14. In order to continue to give the best experience to the majority of users, we need to focus our efforts on supporting the more recent OS versions.

What does this mean for 10.14 users?

- Starting with the April 2022 release of Docker Desktop, users on macOS 10.14 will be warned that support for macOS 10.14 has been deprecated.
- Users that want to stay on macOS 10.14 can do so, but will not be able to update to versions of Docker Desktop released in April of 2022 or after. We will not be addressing bug fixes or security issues for this OS version.
- Users that want to use the latest versions of Docker Desktop must have macOS version 10.15 or higher: that is, Catalina, Big Sur, or Monterey. We recommend upgrading to the latest version of macOS.

Learning from our 10.13 deprecation

We know that when we dropped support for macOS version 10.13, we missed the mark: users were frequently interrupted by the check-for-updates pop-up, only to be told that the new version was not supported on their OS version. There were two issues here: users couldn’t turn this off, and it was not clear that the message was coming from Docker.

With this deprecation, Docker Desktop won’t check for updates at all if you are on macOS 10.14. If you choose to manually check for updates, it will be clear that Docker Desktop is the source of this message.
The post What you need to know about macOS X 10.14 Deprecation appeared first on Docker Blog.

AppDev Challenges and Trends to Watch in 2022

Over the last few years, development teams have been pushed to do a lot more with less. The global supply chain disruptions caused by the pandemic and the chip manufacturing shortage in particular impacted the tech industry. These factors have moved developer workloads toward the cloud, created a more asynchronous and remote workforce, and increased demand for modern applications. 

All of these changes have come with their own set of challenges. In our recent webinar, AppDev Challenges and Trends to Watch in 2022 (available to watch on-demand), Docker Captain and Solutions Architect for BoxBoat (an IBM company) Brandon Mitchell shared his insights on the critical challenges and trends he’s been seeing from his work helping companies through their containerization journey. Throughout the webinar, Brandon identified valuable opportunities where development teams can continue to build modern and innovative solutions that are also secure and compliant with their organizations’ policies.

Keep reading for a recap of the webinar and to learn more about our new market report, The State of Application Development in 2022 and Beyond.

Today’s AppDev Challenges

Brandon identified the top challenges he’s been seeing in the software development space including: 

- Updating legacy systems
- Modernizing components without disrupting software delivery pipelines
- Keeping up with demand for cloud-native apps

To address these challenges, many industry leaders have moved their applications to containers. 

Trend #1: Containerization as the norm

According to Brandon, “If you haven’t already started your containerization journey, you’re probably already behind your industry peers so now is the time to jump on that. Yesterday was the time, but there’s no time like the present.” Organizations are migrating everything into containers, including legacy applications, in order to standardize on a more efficient software delivery pipeline.

One of the nice things about this trend is that developers get to manage a single workflow, rather than multiple workflows: one for the old legacy model and another for all of the different microservices in the new system. Managing multiple workflows leads to friction in both production and development environments.

Trend #2: CI/CD as the building block to software

CI/CD tends to be the core environment that everyone depends on because everything goes through it, from developer code check-ins to deployments to production. Brandon stated that he’s been seeing a transition away from domain-specific languages and toward declarative environments. Developers are spinning up ephemeral containers: when they have a new version, they spin up a new container. The new design for how organizations deploy to production is a clean environment that resets to a clean state, with all data mapped in as volumes.

Modernizing the application stack requires modernizing the CI/CD pipeline. The first step is to break things into microservices. One challenge Brandon has seen organizations run into is that the combination of services teams test in development doesn’t always match the combination of services they run in production.

Brandon also described seeing a shift left in security checks. Security tooling is moving earlier in the CI/CD pipeline for faster feedback; ideally, this happens right on developers’ desktops. Docker Scan is a great tool that allows developers to scan for vulnerabilities right on their machines so they can test their code before they commit it to the Git repo. This process puts security awareness right in front of developers when they commit their code and empowers them with the tools they need to be proactive.
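As a hedged example of that desktop-first workflow, scanning a locally built image is a single command. The image name here is a placeholder, and the optional --file flag also points the scanner at the Dockerfile for more precise remediation advice:

$ docker scan --file Dockerfile myapp:latest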

Trend #3: Secure Software Supply Chain with DevSecOps

Brandon discussed the trend he’s been seeing of moving security left in further detail. This major shift is driven by a combination of government regulations (e.g., the White House Executive Order) and recent attacks like the SolarWinds attack, Heartbleed, and the OpenSSL vulnerability. 

Attackers are going after the supply chain and are looking for vulnerabilities upstream or in build infrastructure. Malicious developers are finding ways to push code into upstream builds, and that code gets pulled into environments and deployed in applications as if it were trusted code because dependencies aren’t being checked. This code can get pulled in with a single command on the developer environment and is then deployed out to production environments. Some solutions Brandon suggested include:

- Hardening build environments
- Generating software bills of materials (SBOMs)
- Adding signing
- Looking to reproducible builds
- Shifting scanning earlier in the workflow

You can think of SBOMs as an ingredient list where teams can track what they have in all of their compiled applications. Development teams can use SBOMs to identify if what they’ve deployed has a vulnerability, and if it does, they can easily track down every area where this vulnerability in the code is deployed in production. Another important solution is image signing and, along with that, making sure that teams are only signing images that are trusted and verified.
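As an illustrative sketch (the tool choice is ours, not from the webinar), an SBOM can be generated from a container image with an open-source tool such as syft, emitting the SPDX format listed in the resources below; myapp:latest is a hypothetical image name:

$ syft myapp:latest -o spdx-json > sbom.spdx.json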

Brandon refers to reproducible builds as the “holy grail” because they’re effective but difficult to implement in a production environment. After building all the way through on an organization’s normal build infrastructure, reproducible builds require running that build again in a completely new and separate environment. If the builds don’t match byte for byte, the organization knows something went wrong and needs to investigate. That something could just be an odd configuration, but it could also be an indication that an attacker got in and injected malicious code that needs to be stopped before it reaches production.
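A naive sketch of the idea: build the same source twice, ideally on separate infrastructure, and compare the resulting images. Real reproducible builds also require pinned base images and deterministic timestamps, which this sketch omits:

$ docker build -t myapp:build-a .
$ docker build --no-cache -t myapp:build-b .
$ docker inspect --format '{{.Id}}' myapp:build-a myapp:build-b

If the two IDs differ, something in the build is non-deterministic or has been tampered with, and it warrants investigation.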

The State of Application Development in 2022 and Beyond

During the webinar, Brandon went into more detail on each of the above topics and had a live discussion with Docker Product Marketing Manager Cat Siemer. He also addressed live Q&A from webinar attendees, so be sure to check out the full webinar recording to catch these additional insights.

If you want to learn more about this topic, check out the new market report we just published, The State of Application Development in 2022 and Beyond, which highlights six trends that we predict will be key to the success of any development team and developer-centered organization in 2022. Read the report to learn how development teams keep a competitive edge by modernizing the way they build, share, and run their applications with Docker Business and our other subscription offerings.

Join us at DockerCon 2022

DockerCon is the world’s largest development conference of its kind, and it’s coming to you virtually and completely free on May 10th, 2022. DockerCon 2022 is an amazing opportunity for you and your developers to learn directly from the community; get tips, tricks, and best practices that will elevate your Docker knowledge; and learn about what’s coming up on the Docker roadmap. You can register for DockerCon now; pre-registration is free and open. If you’re interested in speaking at DockerCon, the DockerCon 2022 Call for Papers is also open; submit your talk here.

Additional resources from the webinar

- SPDX
- CycloneDX
- Notary v2
- OCI Reference Types Working Group
- CNCF Supply Chain Security Working Group
- OpenSSF
- SLSA
- Reproducible Builds

The post AppDev Challenges and Trends to Watch in 2022 appeared first on Docker Blog.

Black Innovators That Paved the Way

While diverse experiences and perspectives should be sought after and celebrated every day, Black History Month is a wonderful opportunity to reflect on and celebrate the many contributions of Black Americans. Recognizing the ingenuity of Black people in technology is incredibly important, especially given the large diversity gap that remains so prevalent in the sector. Today, we are highlighting a few among the many incredible Black innovators who have played a profound role in shaping the world’s technology.

Alan Emtage conceived of and implemented Archie, the world’s first Internet search engine, in 1989 while he was a student. In doing so, he pioneered many of the techniques used by public search engines today. In 2017, he was inducted into the Internet Society’s Internet Hall of Fame.

Marie Van Brittan Brown invented the first closed-circuit television security system and paved the way for modern home security systems used today. In 1969, Brown received a U.S. patent and her contribution to home security led her invention to be cited in 32 subsequent patent applications. Her invention formed a system that is still relevant in today’s society with use in places such as banks, office buildings, and apartment complexes.

Mark Dean spent his career working to make computers more accessible and powerful and played a pivotal role at IBM developing the personal computer (PC). He holds three of nine PC patents for being the co-creator of the IBM personal computer released in 1981. He is also responsible for creating the ISA bus technology that allows devices, such as keyboards, mice, and printers, to be plugged into a computer and communicate with each other.

Clarence Ellis was the first Black man to receive a Ph.D. in Computer Science (1969). After his Ph.D., he continued his work on supercomputers at Bell Telephone Laboratories and worked as a researcher and developer at IBM, Xerox, Microelectronics, and Computer Technology Corporation.

Dr. Marian Croak is best known for developing Voice Over Internet Protocols (VoIP). VoIP is technology that converts your voice into a digital signal, allowing you to make a call directly from a computer or other digital device, which she received the first of many patents for in 2006. She also invented the technology that allows people to send text-based donations to charity. She holds hundreds of patents that are still in use today and is currently the VP of Engineering at Google. 

Gerald A. Lawson pioneered home video gaming in the 1970s by helping create the Fairchild Channel F, the first home video game system with interchangeable games. Lawson was largely a self-taught engineer and the first major Black figure in the video game industry.

Janet E. Bashen is the first Black woman to hold a patent for a web-based software invention. The patented software, LinkLine, created in 1994, is a web-based application for Equal Employment Opportunity (EEO) claims intake and tracking, claims management, and document management. As a result of her work with equal employment opportunity and diversity and inclusion, Bashen is regarded as a social justice advocate.

Black innovators that continue to lead the way 

Tope Awotona is the founder and CEO of Calendly, the scheduling platform for high-performing teams and individuals. Awotona grew up in Lagos, Nigeria and came to the US in 1996; he eventually founded Calendly in 2013. Docker is a proud customer of Calendly!

David Steward is the chairman and founder of World Wide Technology Inc., one of the largest Black-owned businesses in America. World Wide Technology helps customers discover, evaluate, architect, and implement advanced technology lab testing. WWT even employs Docker with their own technology!

Kelsey Hightower is a principal engineer for Google Cloud Computing and an advocate for open-source software. In an even less diverse space within the tech industry, Hightower has become a leading voice on cloud computing and software infrastructure. In 2015, he co-founded the Kubernetes-focused conference KubeCon, and is one of the most well-known speakers on Kubernetes. 

DockerCon 2022

Join us for DockerCon 2022 on Tuesday, May 10. DockerCon is a free, one-day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn how to go from code to cloud fast and how to solve your development challenges, DockerCon 2022 offers engaging live content to help you build, share, and run your applications. Register today at https://www.docker.com/dockercon/
The post Black Innovators That Paved the Way appeared first on Docker Blog.

Docker Captain Take 5 – Martin Terp

Docker Captains are select members of the community that are both experts in their field and are passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Martin Terp who just joined as a Docker Captain. Martin is a Splunk Consultant at Netic.dk and is based in Aalborg, Denmark.

How/when did you first discover Docker?

I was working as a Sysadmin for a big Danish ISP, and when you work with IT, you always try to expand your knowledge: How can I improve this application? How can I make deployments easier? What is the next big thing? So back in 2016, I came across Docker and instantly fell in love with it and containerization in general. I started testing it out on my own little projects, and I could quickly see the benefits for the way we deploy and maintain applications.

What is your favorite Docker command?

I must say “docker scan”. Making images and running containers is pretty sweet, but you also need to think about the vulnerabilities that can (and will) occur. Looking back at the log4j vulnerability, it’s pretty important to detect issues like that, including when running and building your own images. Docker scan provides this functionality and I love it. It’s also great to see that https://hub.docker.com/ has implemented a vulnerability scanner.

What is your top tip for working with Docker that others may not know?

Use Dockerfiles! I have seen many people, mainly newcomers, building containers by running “docker exec” into the container and executing the commands/scripts they need. Do the right thing from the start and do yourself a favor by looking into Dockerfiles; it’s worth it.

What’s the coolest Docker demo you have done/seen?

At DockerCon 2017, Docker Captains Marcos Nils and Jonathan Leibiusky showcased “Play With Docker” (https://labs.play-with-docker.com/). I was really impressed and thought it would be really handy for people getting into Docker.

What have you worked on in the past six months that you’re particularly proud of?

I don’t think there is a particular project. Something I’m personally proud of is that I switched jobs after being in the same place for 14 years and started specializing in another awesome analytics tool called Splunk. Another feel-good thing is my contribution to the community forums; I have spent a lot of time there trying to help others with their issues.

What do you anticipate will be Docker’s biggest announcement this year?

It’s really hard to tell. Surprise me!

What are some personal goals for the next year with respect to the Docker community?

I spend a lot of time on the community forums, and there are “how do I” questions that get asked again and again, so the plan is to create some blog posts addressing some of the common things that come up on the forums.

What talk would you most love to see at DockerCon 2022?

I love to see presentations of real-world scenarios. It’s always very interesting to see what people have come up with and how they solved their head-scratching projects; every year I’m amazed! But I also hope to see myself at DockerCon in the future. I hope that COVID hasn’t put a complete stop to events we can attend IRL; I would like to meet some of my fellow Docker enthusiasts.

Looking to the distant future, what is the technology that you’re most excited about and that you think holds a lot of promise?

I love containers and I’m very excited to see where they will go in the future. The space is evolving constantly; there is always a new way of doing this or that, or doing things smarter. That’s also something I like about this community: people want to share their findings. We occasionally see posts on the forums saying “Look what I made!” and those are the most fun to see.

Rapid fire questions…

What new skill have you mastered during the pandemic?

I have on and off tried learning to play the guitar but could never really find the time for it, so I tried picking it up again during the pandemic, and after all this time, I can proudly say: I suck at it!

Cats or Dogs?

Why not both?

Salty, sour or sweet?

Salty

Beach or mountains?

Beach!

Your most often used emoji?

The post Docker Captain Take 5 – Martin Terp appeared first on Docker Blog.

New Docker Menu & Improved Release Highlights with Docker Desktop 4.5

We’re excited to announce the release of Docker Desktop 4.5 which includes enhancements we’re excited for you to try out. 

New Docker Menu: Improved Speed and Unified Experience Across Operating Systems

We’ve launched a new version of the Docker Menu which creates a consistent user experience across all operating systems (including Docker Desktop for Linux, follow the roadmap item for updates and pre-release builds!). The Docker Menu looks and works exactly as it did before, so no need to learn anything new, just look forward to potential enhancements in the future. This change has also significantly sped up the time it takes to open the Docker Dashboard, so actions from the Docker Menu that take you to the Docker Dashboard are now instantaneous.

If you do run into any issues, you can still go back to the old version by doing the following:

Quit Docker Desktop, then add a features-overrides.json file with the following content:

{
  "WhaleMenuRedesign": {
    "enabled": false
  }
}

Depending on your operating system, you will need to place this file in one of the following locations:

- On Mac: ~/Library/Group Containers/group.com.docker/features-overrides.json
- On Windows: %APPDATA%\Docker\features-overrides.json
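For example, on a Mac you could create the file from a terminal like this (quit Docker Desktop first):

$ cat > ~/Library/Group\ Containers/group.com.docker/features-overrides.json <<'EOF'
{
  "WhaleMenuRedesign": {
    "enabled": false
  }
}
EOF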

Docker Dashboard Release Highlights

Continuing the revamp of the update experience, we’ve moved the release highlights into the Software Updates section of the Docker Dashboard, creating one centralized place for all update information so you can easily refer back to it. We’ve also included new information about the update: the version number of the newest version available, as well as the build numbers for both the version you are on and the latest version.

For now, when you manually check for updates from the Docker Menu, you will still see the release highlights pop-up outside of the Docker Dashboard, but that will be removed in future versions, which will direct you instead to this Software Updates section.

Reducing the Frequency of Docker Desktop Feedback Prompts

We’ve seen your comments that we’re asking for feedback too often and it’s disrupting your workflows. We really appreciate the time you take to let us know how our product is doing, but we’ve made sure you get asked less often. 

To give you an overview: previously, we asked for feedback 14 days after a new installation, and then users were prompted again every 90 days. Now, new installations of Docker Desktop prompt users for initial feedback after 30 days of having the product installed. Users can then choose to give feedback or decline. After that, you won’t be asked again for 180 days from the last rating prompt.

These scores help us understand how the user experience of the product is trending so we can continue to make improvements, and the comments you leave help us make changes like this when we’ve missed the mark.

What’s missing from making Docker great for you?

We strive to put developers first in everything we do. As we mentioned in this blog, your feedback is how we prioritize features, and it is why we’re working on improving Mac filesystem performance (check out the roadmap item for the latest build) and implementing Docker Desktop for Linux. We’d love to know what you think we should work on next. Upvote, comment, or add new ideas to our public roadmap.

DockerCon 2022

Join us for DockerCon 2022 on Tuesday, May 10. DockerCon is a free, one-day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn how to go from code to cloud fast and how to solve your development challenges, DockerCon 2022 offers engaging live content to help you build, share, and run your applications. Register today at https://www.docker.com/dockercon/
The post New Docker Menu & Improved Release Highlights with Docker Desktop 4.5 appeared first on Docker Blog.

The Impacts of an Insecure Software Supply Chain

Today, software regularly integrates open-source code from third-party sources into applications. While this practice empowers developers to create more capable software in a shorter time frame, it brings with it the risk of introducing inadequately vetted code. How aware are we of the security of our open-source code?

Most of us use pip or npm to freely install software, making decisions based on functionality and support. Efficiency is the goal when we have delivery targets to meet. If we choose not to use open-source solutions, we miss out on their significant productivity benefits. But if we do use them, we risk introducing insecure components into our software supply chains and must mitigate that risk with the right tools and processes.

So what is the software supply chain? The software supply chain comprises the steps it takes to develop code before it makes its way into an organization’s application. The chain includes all open-source contributors who wrote the code, the dependencies the code relies on, the repositories where developers downloaded the code, and the organization’s internal review. Each link in the chain represents a potential weak point where unsafe or malicious code can make its way into a production application. 

What Can Go Wrong

Google’s security policy points out that “if an attacker successfully injects any code at all, it’s pretty much game over.” Unfortunately, with continuous deployment (CD) becoming more commonplace, the window to spot such attacks before releasing infected code to users has narrowed.

Attackers’ goals are varied. Hijacking resources for cryptocurrency mining, harvesting username and password combinations for credential stuffing, and data scraping are just a few examples. The consequences are often dire.

Let’s explore some of the potential risks of using open-source solutions.

Common Forms of Attack

Malicious software posing as genuine packages routinely shows up in package management software. Two types of supply chain attacks take advantage of modern software’s numerous dependencies: typosquatting and dependency confusion. In both, the assailant uses a variety of tactics to trick the developer or management software into downloading a dependency file that can execute malicious code. 

Typosquatting

Typosquatting relies on the proximity of keys on a keyboard and common misspellings to gain entry. This method of attack transcends programming languages. Since the early days of the web, domain name typosquatting has been a problem; package and image deception has become increasingly prevalent, banking on developers’ quick fingers.

In typosquatting, a publisher uploads a package with a misspelled name so similar to the original package’s that developers fail to notice the misspelling and unknowingly download malicious code.

Dependency Confusion

Dependency confusion exploits the mix of private and public dependencies that software uses. Hackers typically inspect package.json files for Node.js applications to find internal unclaimed packages on npm. They create malicious packages with this same namespace, and automated developer tools install these external malicious packages instead of the intended internal package.

This tactic isn’t limited to npm. For example, Python’s pip displays insecurities ripe for exploitation. Here, a hacker can register a package on PyPI with the same name as an internal package identified in a requirements.txt file. At registration, they select a higher version number than the genuine package’s. When this higher number appears in builds that include --extra-index-url, it takes precedence, and the seemingly newer version replaces the older one.
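A common mitigation, sketched here with hypothetical package and index names, is to avoid --extra-index-url entirely and point pip at a single trusted index while pinning exact versions:

$ pip install --index-url https://pypi.internal.example/simple 'mypkg==1.4.2'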

Like typosquatting, dependency confusion is a problem for all languages. Similar attacks could happen with a Maven pom.xml file or Gradle settings file in Java or a .csproj file referencing NuGet packages in .NET. Tools incorrectly substitute the code, leaving our application vulnerable.

It’s important to remember that these issues can be present deep in the chain, in transitive dependencies. An attacker may target a dependency of a dependency of a dependency. This nesting makes our ecosystem unmanageable and challenging to audit.

We may trust our developers not to make malicious codebase changes and perhaps feel confident that a four-eyes review policy will protect our software. When a dependency gets upgraded, we trust that someone else is carrying out reviews with the same rigor as ours. But this may not be the case for packages maintained by small teams or individuals.

An Example Scenario

A malicious actor creates a seemingly innocent package, like some utility functions. Let’s call this Package X.

They then publish their code to a package distribution site. Remember, just because code is inspectable on GitHub doesn’t necessarily mean it’s the same code on the package management site.

They use Package X as part of a pull request into the code for Package Y, fixing minor bugs on open-source projects. Their genuinely-useful bug-fix has pulled in the malicious dependency (Package X), and presto! The malicious code is in the repository.

If our code uses Package Y, then our software inherits the vulnerability in Package X.

Organizations must update their open-source code constantly to mitigate the risk of hidden vulnerabilities. These organizations must also use detection methods such as automated vulnerability scanning to identify known vulnerabilities before they cause damage.

With global news coverage quick to publicize any data or security breach, these incidents can damage an organization’s reputation. Rebuilding that trust is highly challenging. 

Empowering Developers to Secure Your Supply Chain

All of the security risks that come with development mean developers need access to tools that make security as easy as possible for them. Luckily, Docker seamlessly integrates security measures across our platform to help mitigate risks and secure the software supply chain. The first step is attestation and verification: to establish trust, we must verify code and not make any assumptions about its security.

Docker’s Image Access Management feature, available to Docker Business users, enables organizations to control where they get their software. This approach shifts the control of access and permissions from developers to the site level, with toggles to easily set policies and role-based access control (RBAC) available to manage authorization at scale. This control helps ensure developers are only using approved and secure images.

Image Access Management is also a good way of guaranteeing the sources are legitimate. When your business has hundreds of developers, tracking what each software engineer is innocently installing becomes challenging. Docker Business users get an audit log that records the creation, deletion, and editing of teams and repositories to enhance visibility.

To help developers make better container image decisions, Docker provides a visual way of validating images via badging in Docker Hub. These badges are specific to Docker Verified Publisher Images and Docker Official Images and signal to developers that what they’re pulling is trusted, secure, and ready for use.

Docker also provides vulnerability scanning tools via our security partner, Snyk. Developers can use the Snyk scanner right in their CLI for the insight and visibility they need into the security posture of their local Dockerfiles and local images. This includes a list of Common Vulnerabilities and Exposures (CVEs), their sources (such as OS packages and libraries), the versions in which they were introduced, and a recommended fixed version (if available) to remediate the CVEs discovered. When this step is automated, we no longer rely solely on our developers to manually scan for insecurities.
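For instance, a developer can restrict scan output to the most urgent findings directly from the CLI; the image name here is a placeholder:

$ docker scan --severity high myapp:latest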

Gaining Peace of Mind

Returning to our Package X example, we now have multiple layers to prevent that worst-case scenario:

- Image Access Management ensures that our developers can only pull base images from trusted, verified sources.
- Role-Based Access Control enables developers to reduce the blast radius by controlling which developers can bring in new content, potentially isolating a breach to a single team’s work.
- Vulnerability Scanning automatically scans for CVEs when we build a new image. To quickly resolve insecure dependency issues, it vets and flags dependencies for our developers and offers remediation options.
- Audit Log provides three months of history capturing all activities. This record helps organizations discover all affected internal supply chains quickly.

This layered approach to security offers increased opportunities for checks. While vigilance and manual checks are still ideal, it’s reassuring to have these tools available to help prevent and combat attacks.

Conclusion

In this article, we’ve explored the genuine supply chain security problem, one that President Biden has even addressed.

By targeting an app’s dependencies, cybercriminals can reach multiple organizations with the hope of evading scrutiny. It’s often the easiest way to break in when organizations are otherwise locked down. Everyone is a potential target, and attacks can have wide-reaching implications. Assailants seem happy to play the long game and use social engineering to gain access, perhaps offering their time as repository maintainers. As developers and automated tools integrate components into systems, there are multiple points where an adversary could inject malware.

Docker offers a robust suite of tools for vetting open-source code and dependencies before they make their way into an application. Docker Business enables software development to continue benefiting from the productivity gains of containers without being hampered by security concerns.

Get started with Docker Business to discover how Docker helps keep your business safe and secure.

DockerCon 2022

Join us for DockerCon 2022 on Tuesday, May 10. DockerCon is a free, one-day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn how to go from code to cloud fast and how to solve your development challenges, DockerCon 2022 offers engaging live content to help you build, share, and run your applications. Register today at https://www.docker.com/dockercon/
The post The Impacts of an Insecure Software Supply Chain appeared first on Docker Blog.