Learn About Modern Apps on Azure with Docker at Microsoft Ignite

The Docker team will be on the show floor at Microsoft Ignite the week of November 4. We’ll be talking about the state of modern application development, how to accelerate innovation efforts, and the role containerization, Docker, and Microsoft Azure play in powering these initiatives.
Come by booth #2414 at Microsoft Ignite to check out the latest developments in the Docker platform. Learn why over 1.8 million developers build modern applications on Docker, and over 800 enterprises rely on Docker Enterprise for production workloads. 
At Microsoft Ignite, we will be talking about:
How to Develop and Deliver Modern Applications for Azure Kubernetes Service (AKS)
Docker Enterprise 3.0 shipped back in April 2019, making it the first and only desktop-to-cloud container platform on the market that lets you build and share any application and securely run it anywhere – from hybrid cloud to the edge. At Microsoft Ignite, we’ll have demos that show how Docker Enterprise 3.0 simplifies Kubernetes for Azure Kubernetes Service (AKS) and enables companies to more easily build modern applications with Docker Desktop Enterprise and Docker Application. 
Learn how to accelerate your journey to the cloud with Docker’s Dev Team Starter Bundle for AKS. This offer combines the industry-leading Docker Desktop Enterprise (DDE) and Docker Trusted Registry (DTR) to ensure success across the modern app development and delivery lifecycle.

Unifying the Dev to Ops Experience
There’s no question that modern, distributed applications are becoming more complex.  You need a seamless and repeatable way to build, share and run all of your company’s applications efficiently. A unified end-to-end platform addresses these challenges by improving collaboration, providing greater control, and ensuring security across the entire application lifecycle.
At Ignite, we’ll show you how your developers can easily build containerized applications with Docker Enterprise – without disrupting their existing workflows. And for IT ops pros, we’ll explain how you can deploy new services faster with the confidence of knowing security has been baked in from the start – all under one unified platform. Talk to a Docker expert at Microsoft Ignite about how Docker Enterprise provides the developer tooling, security and governance, and ease of deployment needed for a seamless dev to ops workflow. 
We hope to see you at the show! 
Want a quick preview? Watch the Docker Enterprise 3.0 demo:

You can also dive deeper with these resources: 

Learn more about Docker App: Cloud Native Application Bundles (CNAB).
Watch the webinar series: Drive High-Velocity Innovation with Docker Enterprise 3.0

The post Learn About Modern Apps on Azure with Docker at Microsoft Ignite appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Don’t Be Scared of Kubernetes

5 Reasons You Might Be Afraid to Get Started with Kubernetes
Kubernetes has the broadest capabilities of any container orchestrator available today, which adds up to a lot of power and complexity. That can be overwhelming for a lot of people jumping in for the first time – enough to scare people off from getting started. There are a few reasons it can seem intimidating:

It’s complicated, isn’t it? As we noted in a previous post, jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but how to actually fly the thing is not obvious. If you’ve never done more than play a flight simulator game, it can be downright scary.
Is it production-ready? Everyone is talking about Kubernetes, but it’s only emerged as a major technology in the past few years. Many companies take a wait-and-see approach on new technologies. Building out a Kubernetes deployment on your own means solving challenging problems without enterprise support. 
Do I have the people and skills to support it? IT teams are just beginning to learn Kubernetes. If it’s complicated, it means you’ll need people with the right experience to support it. According to industry data, jobs for Kubernetes were up 176 percent in 2018. Anyone with Kubernetes experience is in high demand, so they’re hard to hire.
Does it automate and replace my job in IT? Kubernetes does automate a lot of tasks tied to application infrastructure and allows developers and DevOps teams to treat infrastructure as code. Anyone who builds and maintains application infrastructure could look at it and see a future where their job isn’t relevant.
Is it just a fad? Some technologies have a brief moment where they shine, but fade into relative obscurity. Interest in Objective-C rose quickly in 2010 and 2011, but a few years later it barely registers in conversations on the popular Stack Overflow site. Kubernetes is popular now, but will it be in 5 years?

5 Reasons You Can Get Started with Kubernetes Today
Thankfully, Kubernetes is a robust platform with broad industry support both in the Open Source community and beyond. Here are 5 reasons you don’t need to be scared to get started with Kubernetes:

It doesn’t have to be complicated. Docker makes it easy to on-board and use Kubernetes for both Day 1 and Day 2 operations. With the Docker platform, Kubernetes is easy for both dev and ops teams to use as their default orchestration platform.
Big companies use it at scale, in production. GSK runs a global data science platform on Kubernetes. Visa is building a machine-learning and analytics platform on Kubernetes. McKesson, the #6 firm on the Fortune 500, has an internal developer platform based on Kubernetes. All of these companies run Kubernetes on Docker Enterprise for critical workloads.
You can get started without knowing everything. You don’t need detailed knowledge or certifications to get going. The Docker platform provides a highly available and secure set up of Kubernetes out-of-the-box, surfaces the controls and features you need at the beginning, and lets you begin using Kubernetes from the desktop to the cloud right away. As your team’s skills grow, they can still directly interact with the certified Kubernetes distribution underneath – giving them full control over the advanced configuration and settings.
It helps ops teams grow professionally. Kubernetes helps expand the role of IT ops in an organization. With Kubernetes and Docker, you can provide a complete platform to your developers that works on any machine or any cloud.
Kubernetes has a mature ecosystem. All the major cloud providers support Kubernetes. Docker, Red Hat/IBM, VMware and other vendors have Kubernetes-specific solutions. Hundreds of solutions now plug in to Kubernetes for everything from storage, networking, monitoring, and alerting to security, IoT and AI.

If you’re looking at Kubernetes, there’s never been a better time to get started – that’s after you are done with the Halloween parties and Trick or Treating!
To learn more about how Docker can help you get started with Kubernetes:

Learn about designing your first application in Kubernetes.
Read the Kubernetes Made Easy eBook.
Follow this tutorial and quickstart guide. 



Understanding Kubernetes Security on Docker Enterprise 3.0

This is a guest post by Javier Ramírez, Docker Captain and IT Architect at Hopla Software. You can follow him on Twitter @frjaraur or on Github.
Docker began including Kubernetes with Docker Enterprise 2.0 last year. The recent 3.0 release includes CNCF Certified Kubernetes 1.14, which has many additional security features. In this blog post, I will review Pod Security Policies and Admission Controllers.
What are Kubernetes Pod Security Policies?
Pod Security Policies are rules created in Kubernetes to control security in pods. A pod will only be scheduled on a Kubernetes cluster if it passes these rules. These rules are defined in the  “PodSecurityPolicy” resource and allow us to manage host namespace and filesystem usage, as well as privileged pod features. We can use the PodSecurityPolicy resource to make fine-grained security configurations, including:

Privileged containers.
Host namespaces (IPC, PID, Network and Ports).
Host paths and their permissions and volume types.
User and group for container process execution, and setuid capabilities inside the container.
Changes to default container capabilities.
Behaviour of Linux security modules.
Host kernel configuration via sysctl.

The Docker Universal Control Plane (UCP) 3.2 provides two Pod Security Policies by default – which is helpful if you’re just getting started with Kubernetes. These default policies allow or prevent execution of privileged containers inside pods. To manage Pod Security Policies, you need administrative privileges on the cluster.
Reviewing and Configuring Pod Security Policies
To review defined Pod Security Policies in a Docker Enterprise Kubernetes cluster, we connect using an administrator’s UCP Bundle:
$ kubectl get PodSecurityPolicies
NAME           PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES                                                
privileged     true    *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
unprivileged   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
These default policies control the execution of privileged containers inside pods.
Let’s create a policy to disallow execution of containers using root for main process. If you are not familiar with Kubernetes, we can reuse the “unprivileged” Pod Security Policy content as a template:
$ kubectl get psp unprivileged -o yaml --export > /tmp/mustrunasnonroot.yaml
We removed non-required values and will have the following Pod Security Policy file: /tmp/mustrunasnonroot.yaml 
Change the runAsUser rule to “MustRunAsNonRoot”:
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp-mustrunasnonroot
spec:
  allowPrivilegeEscalation: false
  allowedHostPaths:
  - pathPrefix: /dev/null
    readOnly: true
  fsGroup:
    rule: RunAsAny
  hostPorts:
  - max: 65535
    min: 0
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
We create this new policy as an administrator (note that Pod Security Policies are cluster-wide resources, so they are not bound to a namespace):
$ kubectl create -f /tmp/mustrunasnonroot.yaml
podsecuritypolicy.extensions/psp-mustrunasnonroot created
Now we can review Pod Security Policies:
$ kubectl get PodSecurityPolicies --all-namespaces
NAME               PRIV    CAPS   SELINUX    RUNASUSER          FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
psp-mustrunasnonroot   true    *      RunAsAny   MustRunAsNonRoot   RunAsAny   RunAsAny   false            *
privileged         true    *      RunAsAny   RunAsAny           RunAsAny   RunAsAny   false            *
unprivileged       false          RunAsAny   RunAsAny           RunAsAny   RunAsAny   false            *
Next, we create a Cluster Role that will allow our test user to use the Pod Security Policy we just created, using role-mustrunasnonroot.yaml.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-mustrunasnonroot
rules:
- apiGroups:
  - policy
  resourceNames:
  - psp-mustrunasnonroot
  resources:
  - podsecuritypolicies
  verbs:
  - use
Next, we add a Role Binding to associate the new role with our non-admin user (jramirez in this example). We created rb-mustrunasnonroot-jramirez.yaml with the following content:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rb-mustrunasnonroot-jramirez
  namespace: default
roleRef:
  kind: ClusterRole
  name: role-mustrunasnonroot
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: User
  name: jramirez
  namespace: default
We create both the Cluster Role and the Role Binding to allow jramirez to use the defined Pod Security Policy:
$ kubectl create -f role-mustrunasnonroot.yaml
clusterrole.rbac.authorization.k8s.io/role-mustrunasnonroot created

$ kubectl create -f rb-mustrunasnonroot-jramirez.yaml
rolebinding.rbac.authorization.k8s.io/rb-mustrunasnonroot-jramirez created
Now that we’ve applied this policy, we should delete the binding for the default rules (privileged or unprivileged). In this case, the default “ucp:all:privileged-psp-role” binding was applied.
$ kubectl delete clusterrolebinding ucp:all:privileged-psp-role
clusterrolebinding.rbac.authorization.k8s.io “ucp:all:privileged-psp-role” deleted
We can review jramirez’s permissions to create new pods on the default namespace.
$ kubectl auth can-i create pod --as jramirez
yes
Now we can create a pod using the following manifest from nginx-as-root.yaml:
apiVersion: v1
kind: Pod
metadata:
 name: nginx-as-root
 labels:
   lab: nginx-as-root
spec:
 containers:
 - name: nginx-as-root
   image: nginx:alpine
We’ll now need to log in as jramirez, our non-admin test user, using his UCP bundle. We can then test the deployment to see if it works:
$ kubectl create -f nginx-as-root.yaml
pod/nginx-as-root created
We will get a CreateContainerConfigError because the image doesn’t define a non-root user, so the container would start its main process as root – which the policy blocks.
Events:
 Type     Reason     Age                    From               Message
 ----     ------     ----                   ----               -------
 Normal   Scheduled  6m9s                   default-scheduler  Successfully assigned default/nginx-as-root to vmee2-5
 Warning  Failed     4m12s (x12 over 6m5s)  kubelet, vmee2-5   Error: container has runAsNonRoot and image will run as root
 Normal   Pulled     54s (x27 over 6m5s)    kubelet, vmee2-5   Container image "nginx:alpine" already present on machine
What can we do to avoid this? As a best practice, we should not allow containers to run with root permissions. Instead, we can build an Nginx image that does not require root. Here’s a lab image that will work for our purposes (though it’s not production ready):
FROM alpine

RUN addgroup -S nginx \
 && adduser -D -S -h /var/cache/nginx -s /sbin/nologin -G nginx -u 10001 nginx \
 && apk add --update --no-cache nginx \
 && ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log \
 && mkdir /html

COPY nginx.conf /etc/nginx/nginx.conf

COPY html /html

RUN chown -R nginx:nginx /html

EXPOSE 1080

USER 10001

CMD ["nginx", "-g", "pid /tmp/nginx.pid;daemon off;"]
We created a new nginx user so the Nginx main process runs under it (in fact, an Nginx package installation normally provides a dedicated user such as www-data or nginx, depending on the base operating system). We gave the user a specific UID because we will reference that UID in Kubernetes to declare which user runs all the containers in our pod.
You can see that we are using a new nginx.conf. Since we are not starting Nginx as root, we can’t bind ports below 1024; consequently, we exposed port 1080 in the Dockerfile. This is the simplest Nginx configuration required:
worker_processes  1;

events {
   worker_connections  1024;
}

http {
   include       mime.types;
   default_type  application/octet-stream;
   sendfile        on;
   keepalive_timeout  65;
   server {
       listen       1080;
       server_name  localhost;

       location / {
           root   /html;
           index  index.html index.htm;
       }

       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   /html;
       }

   }

}
We added a simple index.html with just one line:
$ cat html/index.html  
It worked!!
And our pod definition has new security context settings:
apiVersion: v1
kind: Pod
metadata:
 name: nginx-as-nonroot
 labels:
   lab: nginx-as-nonroot
spec:
 containers:
 - name: nginx-as-nonroot
   image: frjaraur/non-root-nginx:1.2
   imagePullPolicy: Always
 securityContext:
   runAsUser: 10001
We specified a UID for all containers in the pod, so the Nginx main process will run under UID 10001 – the same one declared in the image.
If we specify a different UID, we will get permission errors, because the main process will run under the pod-defined user, which does not own Nginx’s working files:
nginx: [alert] could not open error log file: open() “/var/lib/nginx/logs/error.log” failed (13: Permission denied)
2019/10/17 07:36:10 [emerg] 1#1: mkdir() “/var/tmp/nginx/client_body” failed (13: Permission denied)
If we do not specify any security context at all, the pod simply uses the image-defined UID (10001), which also works correctly since the process doesn’t require root access.
We can go back to the previous situation by deleting the Role Binding we created earlier (rb-mustrunasnonroot-jramirez) and adding the UCP binding ucp:all:privileged-psp-role again. Create rb-privileged-psp-role.yaml with the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ucp:all:privileged-psp-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp-role
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
And create the ClusterRoleBinding object using $ kubectl create -f rb-privileged-psp-role.yaml as administrator.
Kubernetes Admission Controllers
Admission Controllers are a Kubernetes cluster feature that manages and enforces default resource values or properties and prevents potential risks or misconfigurations. They run before workload execution, intercepting requests to validate or modify their content. The Admission Controllers gate user interaction with the cluster API, applying policies to any action on Kubernetes.
We can review which Admission Controllers are defined in Docker Enterprise by taking a look at the ucp-kube-apiserver command-line used to start this Kubernetes API Server container. On any of our managers, we can describe container configuration:
$ docker inspect ucp-kube-apiserver --format 'json {{ .Config.Cmd }}'
json [--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota,PodNodeSelector,PodSecurityPolicy,UCPAuthorization,CheckImageSigning,UCPNodeSelector


These are the Admission Controllers deployed with Docker Enterprise Kubernetes:

NamespaceLifecycle manages important namespace behavior. It prevents users from removing the default, kube-system and kube-public namespaces, and it ensures the integrity of namespace deletion by removing all of a namespace’s objects before the namespace itself is deleted (which can take time, because running objects must be removed). It also prevents new objects from being created in a namespace that is being removed.
LimitRanger applies default resource requests to pods that don’t specify any. It also verifies that resources associated with a namespace do not exceed its defined limits.
ServiceAccount associates pods with a default ServiceAccount if they don’t declare one, and ensures that any ServiceAccount referenced in a pod definition actually exists. It also manages API account accessibility.
PersistentVolumeLabel adds special labels for regions or zones to ensure that the right volumes are mounted in each region or zone.
DefaultStorageClass adds a default StorageClass when a PersistentVolumeClaim asks for storage without declaring one.
DefaultTolerationSeconds sets default pod toleration values, evicting pods from nodes that are not ready or unreachable for more than 300 seconds.
NodeRestriction allows a kubelet to modify only its own Node and Pods.
ResourceQuota ensures that resource quota limits are not exceeded within namespaces.
PodNodeSelector provides default node selection within namespaces.
PodSecurityPolicy reviews Pod Security Policies to determine whether a pod can be executed.
UCPAuthorization integrates UCP roles with Kubernetes, preventing deletion of system-required cluster roles and bindings. It also prevents non-admin (non-privileged) accounts from using host path volumes or privileged containers, even if a Pod Security Policy allows them.
CheckImageSigning prevents execution of pods based on images that have not been signed by authorized users.
UCPNodeSelector ensures that non-system Kubernetes workloads run only on non-mixed UCP hosts.

The last few were designed and created by Docker to integrate UCP with Kubernetes and to improve access control and security. These Admission Controllers are set up during installation and can’t be disabled, since doing so could compromise cluster security or even break important (if easily overlooked) functionality.
As we learned, Docker Enterprise 3.0 now provides Kubernetes security features by default that complement and improve users’ interaction with the cluster, maintaining a highly secure environment out of the box.
To learn more about how you can run Kubernetes with Docker Enterprise:

Read the Kubernetes Made Easy eBook.
Try Play with Kubernetes, powered by Docker.



Designing Docker Hub Two-Factor Authentication

We recognize the central role that Docker Hub plays in modern application development and are working on many enhancements around security and content. In this blog post we will share how we are implementing two-factor authentication (2FA). 
Using Time-Based One-Time Password (TOTP) Authentication
Two-factor authentication increases the security of your accounts by requiring two different forms of validation that you are the rightful account owner. For Docker Hub, that means providing something you know (your username and a strong password) and something you have in your possession. Since Docker Hub is used by millions of developers and organizations for storing and sharing content – sometimes company intellectual property – we chose to use one of the more secure models for 2FA: software token (TOTP) authentication. 
TOTP authentication is more secure than SMS-based 2FA, which has many attack vectors and vulnerabilities. TOTP requires a little more upfront setup, but once enabled, it is just as simple (if not simpler) than text message-based verification. It requires the use of an authenticator application, of which there are many available. These can be apps downloaded to your mobile device (e.g. Google Authenticator or Microsoft Authenticator) or it can be a hardware key (e.g. YubiKey). To learn about these solutions: 

Download Google Authenticator: 

Apple App Store
Google Play

Download Microsoft Authenticator:

Apple App Store
Google Play

Learn more about YubiKeys from Yubico

Enabling Two-Factor Authentication in Docker Hub
Two-factor authentication is enabled in your Docker Hub Account Settings, under the Security tab. 

The basis of TOTP is that you share a one-time secret between Docker Hub and your authenticator app – either through a unique QR code or a 32-character string. After this initial synchronization, your authenticator runs an algorithm that changes the passcode at a preset interval (typically under a minute), so it is a time-sensitive piece of information only you have access to – the second component of 2FA. Subsequent logins to Docker Hub will ask for this passcode in addition to your password.
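To make the mechanism concrete, here is a minimal sketch of the standard TOTP algorithm (RFC 6238) that authenticator apps run. This is an illustrative example only, not Docker Hub's actual implementation; the `totp` function name and its parameters are our own.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """Derive a time-based one-time passcode from a shared secret (RFC 6238 sketch)."""
    # The moving factor is the number of time steps since the Unix epoch
    counter = int(at if at is not None else time.time()) // step
    # HMAC-SHA1 over the 8-byte big-endian counter, per RFC 4226
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at Unix time 59
print(totp(b"12345678901234567890", at=59))  # -> 287082
```

Because both sides derive the passcode from the same secret and the current time, no code ever needs to travel over SMS, which is what makes TOTP resistant to the interception attacks that plague text-message 2FA.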

As the initial synchronization is an important part of the TOTP process, it is also very sensitive information; you do not want someone else gaining access to this initial secret. As a result, we do not share the code after your initial synchronization has been confirmed. If you lose your mobile device or access to your authenticator app, you will not be able to log in with 2FA. 
This is why it is critical to save your recovery code. You will need the recovery code that is presented when you enable 2FA the first time. Save it somewhere safe so you can recover your account when needed! 
One additional note: Many Docker users access their Hub account through the CLI. Once you’ve enabled 2FA, you will need to create a personal access token in order to log into your Hub account from the CLI. Traditional username and password combinations will not work once you have enabled 2FA. Personal access tokens can be created from the same Security tab under Account Settings.  
For detailed instructions on enabling and using 2FA during the beta, please refer to the following:

Release notes
Documentation

What’s Next for Docker Hub
We’d love for you to try the two-factor authentication beta in Docker Hub today and give us feedback at https://github.com/docker/hub-feedback/issues 
In addition to moving 2FA to general availability in the near future, we are also preparing to add support for further authentication controls:

WebAuthn support: This allows you to use a security key or a WebAuthn-capable browser for 2FA
Mandatory enforcement of 2FA for an organization: This allows organization administrators to enforce 2FA for all of their members and provide methods for remediating anyone who is not in compliance

To learn more about Docker Hub:

Read more about Docker Hub
Explore the Docker Hub documentation 
Get started with Docker by creating your Hub account



Attend a #LearnDocker Workshop This Fall

Join a Docker for Developers Workshop Near You
From October through December, Docker User Groups all over the world are hosting workshops for their local communities! Join us for an Introduction to Docker for Developers, a hands-on workshop we run on Play with Docker. 
This Docker 101 workshop for developers is designed to get you up and running with containers. You’ll learn how to build images, run containers, use volumes to persist data and mount in source code, and define your application using Docker Compose. We’ll even mix in a few advanced topics, such as networking and image building best-practices. There is definitely something for everyone! 
Visit your local User Group page to see if there is a workshop scheduled in your area. Don’t see an event listed? Email the team by scrolling to the bottom of the chapter page and clicking the contact us button. Let them know you want to join in on the workshop fun! 
Join the Docker Virtual Meetup Group
Don’t see a user group in your area? Never fear, join the virtual meetup group for monthly meetups on all things Docker.  


To find out more about Docker for developers:

Read about Docker Developer Tools, including Docker Desktop Enterprise.
Download Docker Desktop for Windows or Mac.


Women in Tech Week Profile: Renee Mascarinas

We’re continuing our celebration of Women in Tech Week into this week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.

Renee Mascarinas is a Product Designer at Docker. You can follow her on Twitter @renee_ners.

What is your job?
Product Designer. 

How long have you worked at Docker?
11 months.

Is your current role one that you always intended on your career path? 
The designer part, yes. But the software product part, not necessarily. My background is in architecture and industrial design and I imagined I would do physical product design. But I enjoy UX; the speed at which you can iterate is great for design.

What is your advice for someone entering the field?
To embrace discomfort. I don’t mean that in a bad way. A mentor once told me that the only time your brain is actually growing is when you’re uncomfortable. It has something to do with the dendrites being forced to grow because you’re forced to learn new things.

Tell us about a favorite moment or memory at Docker or from your career? 
There’s so many, but one that stood out was my visit to the Docker Paris office. It wasn’t so much about going to Paris, which was amazing. It was more about the bonding experience with the product team. It was my third week at Docker and I was forced to learn quickly about applications for our enterprise distribution offering.  I have a long list of favorite people from Docker, and most of them were from that trip.

What are you working on right now that you are excited about?
Enhancing the UX in Docker Hub. There’s so much work being done and released, it’s super exciting!  I have always worked on products that aren’t quite a SaaS offering so user feedback can sometimes be limited.  With Hub, you can dig up opinions from Twitter and Reddit, and feedback is collected in GitHub.  

What do you do to get “unstuck” on a really difficult problem/design/bug?
I do competitive and comparative analysis on a similar product on their UX workflows.  I like to review some of my findings with colleagues to see if I’m on the right path. 
What is your superpower?
I am a super planner. If I’m going on a vacation, there will be an itinerary. I think it was a coping mechanism growing up to defend against my bossy older brother. 

What is your definition of success?
Success in a project is when I’ve reached a level of contentment that I’m not up at night still thinking about it. And everything can be a project, I set a lot of goals to challenge myself and attribute success to achieving goals.

What are you passionate about?
Problem solving. It’s very egotistical but I like to challenge myself to see if I have what it takes to figure things out.

What is something you love to do? And something you dislike?
I love teaching myself new skills. I am a YouTube DIY junkie. I hate indecision. 

Share a story about something or someone who has been very impactful on your life or career?
My calculus teacher was really passionate about the subject and it made me passionate as well. I ended up tutoring calculus to high school students for four years because I wanted them to love calculus too.



Designing Your First App in Kubernetes: An Overview

Kubernetes is a powerful container orchestrator and has been establishing itself as IT architects’ container orchestrator of choice. But Kubernetes’ power comes at a price; jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but knowing how to actually fly it is not so simple. That complexity can overwhelm a lot of people approaching the system for the first time.
I wrote a blog series recently where I walk you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need. The posts go into quite a bit of detail, so I’ve provided an abbreviated version here, with links to the original posts.

Part 1: Getting Started 

Just Enough Kube
With a machine as powerful as Kubernetes, I like to identify the absolute minimum set of things we’ll need to understand in order to be successful; there’ll be time to learn about all the other bells and whistles another day, after we master the core ideas. No matter where your application runs, in Kubernetes or anywhere else, there are four concerns we are going to have to address:

Processes: In Kubernetes, that means using pods and controllers to schedule, maintain and scale processes.
Networking: Kubernetes services allow application components to talk to each other.
Configuration: A well-written application factors out configuration, rather than hard-coding it. In Kubernetes, volumes and configMaps are our tools for this.
Storage: Containers are short-lived, so data you want to keep should be stored elsewhere. For this, we’ll look at Container Storage Interface plugins and persistentVolumes.

Just Enough Design
There are a few high-level design points we need to understand before making the engineering decisions that follow, and to make sure we're getting the maximum benefit out of our containerization platform. Regardless of what orchestrator we're using, there are three key principles that set the standard for what we're trying to achieve when containerizing applications: portability, scalability, and shareability. Containerization is fundamentally meant to confer these benefits to applications; if at any point while containerizing an app you aren't seeing returns in these three areas, something may well need to be rethought.
For more information on Kubernetes and where to start when using it to develop an application, check out Part 1 of our series.
Part 2: Setting up Processes
The heart of any application is its running processes, and in Kubernetes, we create processes as pods, which are used to schedule groups of individual containers. Pods are a bit fancier than individual containers, in that they can schedule whole groups of containers, co-located on a single host, which brings us to our first decision point — How should our processes be arranged into pods?

A pod can contain one or more containers, but containers in the pod must scale together.

There are two important considerations for how we set up pods:
Pods and containers must scale together. If you need to scale your application up, you have to add more pods; these pods will come with copies of every container they include. 
Kubernetes controllers are the best way to schedule pods, since controllers like deployments or daemonSets provide numerous operational tools for scaling and maintenance of workloads beyond what’s offered by bare pods.
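As a concrete illustration of both points, here's a minimal sketch of a deployment that schedules three replicas of a single-container pod. The names, labels, and image below are hypothetical placeholders, not taken from the original series:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # scaling up adds whole pods, each a copy of the template below
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web               # must match the selector above
    spec:
      containers:
      - name: web
        image: example/web:1.0 # hypothetical image
        ports:
        - containerPort: 8080
```

Applying a manifest like this asks the deployment controller to keep three such pods running, replacing any that fail or are rescheduled.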
To learn more about setting up processes for managing your applications, check out Part 2 of our series. 
Part 3: Communicating via Services
After deploying workloads as pods managed by controllers, you have to establish a way for those pods to reliably communicate with each other without incurring a lot of complexity for developers.
That’s where Kubernetes services come in. They provide reliable, simple networking endpoints for routing traffic to pods via the fixed metadata defined in the controller that created them. For basic applications, two services cover most use cases: clusterIP and nodePort services. That brings us to another decision point: What kind of services should route to each controller?
The simplest way to decide between them is to determine whether the target pods are meant to be reachable from outside the cluster or not.

A Kubernetes nodePort service allows external traffic to be routed to the pods
A Kubernetes clusterIP service only accepts traffic from within the cluster.
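In manifest form, the distinction comes down to the service's `type` field. A hedged sketch, with made-up names, selectors, and ports for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-internal
spec:                          # no type given, so this defaults to ClusterIP:
  selector:                    # reachable only from inside the cluster
    app: api
  ports:
  - port: 80                   # port the service listens on
    targetPort: 8080           # port the matched pods listen on
---
apiVersion: v1
kind: Service
metadata:
  name: web-external
spec:
  type: NodePort               # also opens a port on every node for outside traffic
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080            # external port; must fall within the cluster's nodePort range
```

Either way, the service routes to whichever pods currently carry the matching labels, so pods can come and go without clients noticing.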

You can learn more about communication via Kubernetes services and how to decide between clusterIP and nodePort services in Part 3 of our series. 
Part 4: Configuration
One of the core design principles of any containerized app is portability. When you build an application with Kubernetes, you’ll want to address any problems with the environment-specific configuration expected by that app.
A well-designed application should treat configuration like an independent object — separate from the containers themselves and provisioned to them at runtime. When we design applications, we need to identify what configurations we want to make pluggable in this way — which brings us to another decision point:
What application configurations will need to change from environment to environment?
Typically, these will be environment variables or config files that change from environment to environment, such as access tokens for different services used in staging versus production or different port configurations.
Once we’ve identified the configs in our application that should be pluggable, we can enable the behavior we want by using Kubernetes’ system of volumes and configMaps.

The configMap and Volume interact to provide configuration for containers.
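That interaction can be sketched as a configMap mounted into a pod as a volume; the names and file content here are hypothetical examples:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  app.properties: |            # becomes a file when mounted below
    api.url=https://staging.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: example/web:1.0
    volumeMounts:
    - name: config             # mounts the volume declared below
      mountPath: /etc/web      # container sees /etc/web/app.properties
  volumes:
  - name: config
    configMap:
      name: web-config         # the configMap defined above
```

Swapping in a different configMap per environment changes the file the container sees without rebuilding the image, which is exactly the pluggability we're after.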

You can read more about configuration in Part 4 of the series.
Part 5: Storage
The final component we want to think about when we build applications for Kubernetes is storage. Remember, a container’s filesystem is transient, and any data kept there is at risk of being deleted along with your container if that container ever exits or is rescheduled. 
Any container that generates or collects valuable data should be pushing that data out to stable external storage; conversely, any container that requires the provisioning of a lot of data should be receiving that data from an external storage location. 
Which brings us to our last decision point: What data does your application gather or use that should live longer than the lifecycle of a pod?
Tackling that requires working with the Kubernetes storage model. The full model has a number of moving parts: Container Storage Interface (CSI) plugins, StorageClasses, PersistentVolumes (PVs), PersistentVolumeClaims (PVCs), and Volumes.
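As one hedged example of those parts fitting together, a pod can request storage through a persistentVolumeClaim, leaving it to a storageClass (backed by a CSI plugin) to provision the actual persistentVolume. The names and sizes below are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
  - ReadWriteOnce              # mountable read-write by a single node
  storageClassName: standard   # hypothetical class; its provisioner creates the PV
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: db
    image: example/db:1.0
    volumeMounts:
    - name: data
      mountPath: /var/lib/db
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-data       # binds the pod to the claim above
```

Because the data lives in the persistentVolume rather than the container's filesystem, the pod can exit or be rescheduled without losing it.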

To learn more about how to leverage the Kubernetes storage model for your applications, be sure to check out Part 5 of the series. 
The Future
I’ve walked you through the basic Kubernetes tooling you’ll need to containerize a wide variety of applications, and provided you with next-step pointers on where to look for more advanced information. Try working through the stages of containerizing workloads, networking them together, modularizing their config, and provisioning them with storage to get fluent with the ideas above.
After mastering the basics of building a Kubernetes application, ask yourself, “How well does this application fit the values of portability, scalability and shareability we started with?” Containers themselves are engineered to move easily between clusters and users, but what about the entire application you just built? How can you move that around while preserving its integrity and without invalidating the unit and integration testing you'll perform on it?
Docker App sets out to solve that problem by packaging applications in an integrated bundle that can be moved around as easily as a single image. Stay tuned to Docker’s blog for more guidance on how to use this emerging format with your Kubernetes applications.
To learn more about Kubernetes and Docker:

Find out more about running Kubernetes on Docker Enterprise and Docker Desktop.
Check out Play with Kubernetes, powered by Docker.

We will also be offering training on Kubernetes starting in early 2020. In the training, we’ll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here:
Get Notified About Training


The post Designing Your First App in Kubernetes: An Overview appeared first on Docker Blog.

Women in Tech Week Profile: Clara McKenzie

We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
Clara McKenzie (center) is a Support Escalation Engineer.

What is your job?
SEG Engineer (Support Escalation Engineer).

How long have you worked at Docker?
4 months.

Is your current role one that you always intended on your career path? 
The SEG role is a combination that probably doesn’t exist as a general rule. I’ve always liked to support other engineers and work cross-functionally, as well as unravel hard problems, so it’s a great fit for me.

What is your advice for someone entering the field?
The only thing constant about a career in tech is change. When in doubt, keep moving. By that, I mean keep learning, keep weighing new ideas, keep trying new things.  

Tell us about a favorite moment or memory at Docker or from your career? 
In my first month at Docker, we hosted a summer cohort of students from Historically Black Colleges who were participating in a summer internship. As part of their visit, a few of us were asked to share our insights about working in tech, career paths with a BS in engineering, and how different roles work together to build and release a product. My colleagues Savaugn and Shanea were able to give them deeply personal and practical advice. I was able to give them the advice I’d give my own children, who are just a bit ahead of them in their careers. The students had lots of questions and it was a really nice event.

What are you working on right now that you are excited about?
There is so much for me to learn at Docker. I wasn’t familiar with the Enterprise products when I arrived. Now I’m learning DTR and UCP inside and out. Escalations keep you on your toes. It’s a unique experience to hear the client dilemmas and piece together the story. Was this an unintended consequence, or a bug? How did we get here? And more importantly, how are we going to address it?

What do you do to get “unstuck” on a really difficult problem/design/bug?
I ask my colleagues! This happens quite often – these are hard problems. It takes the skill sets of both Support and Product to solve escalations. You need to know what to ask clients for in terms of logs and debugging data; sometimes issues get solved just in the process of doing that.

What is your superpower?
Persistence.

What is your definition of success?
Happiness.

What are you passionate about?
I love dogs. I enjoy genealogy, puzzles, and volunteering. I was on the board of Berkeley Ballet Theater as Treasurer for a while; that was really rewarding. My husband and I also collect and sometimes race old Volkswagens, which is a lot of fun.

What is something you want non-women in tech to know?
We have not yet created the level playing field we all want, one where we get the best out of everyone.  It’s complicated but we can make strides. Docker can be the kind of workplace you refer back to your entire career as the way things should be done.

Who do you look up to?
Leaders who take risks for the greater good: Martin Luther King Jr., Ruth Bader Ginsburg, Mahatma Gandhi.

What is something you love to do? And something you dislike?
I love hanging out with my family.  And I hate sitting in traffic.

Share a story about something or someone who has been very impactful on your life or career.
A director I had at a very successful startup would say, “Success is failing less than the next guy.” He also showed me how to take everything in stride, the good and the bad, which was valuable because there was a lot of rolling with the punches in that job. The company was acquired for a lot of money and we all remember it with pride and exhaustion.


The post Women in Tech Week Profile: Clara McKenzie appeared first on Docker Blog.

Women in Tech Week Profile: Amn Rahman

We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps. 
Amn Rahman is a Data Engineer. You can follow her on Twitter @amnrahman.

What is your job?
I work as a data engineer – building and maintaining data pipelines and delivery tools for the entire company. 

How long have you worked at Docker?
2 years. 

Is your current role one that you always intended on your career path? 
Not quite! As a teenager, I wanted to become a cryptographer and spent most of my time in undergrad and grad school on research in privacy and security. I eventually realized I liked working with data and was pretty good at dealing with databases, which pushed me into my current role. 

What is your advice for someone entering the field?
Become acquainted with the entire data journey and try to pick up one tool or language for each phase. For example, you may choose to use Python to fetch and transform data from an API and load it in a MySQL database and then expose the data using a BI tool like Tableau.  In the process, you’ll develop a mental model for a data pipeline and will be able to categorize each new tool or pipeline you come across. 
Never be afraid to ask questions and do not get overwhelmed by all that is out there! When applying for jobs, do not worry about meeting 100% of the qualifications!

Tell us about a favorite moment or memory at Docker or from your career? 
Delivering a live demo in the Dockercon keynote! 

What are you working on right now that you are excited about?
I’m working on various exciting projects right now: helping Customer Success in efficiently tracking customer requests and support backlog, providing analytics to HR to track diversity, and surfacing product telemetry for product managers.  

What do you do to get “unstuck” on a really difficult problem/design/bug?
There’s a running joke in my team that I simply sleep on it! I’ve woken up many times with solutions and bug fixes in my head. Sometimes I just like to step away from my computer and phone and just grab a pen and paper or attack a whiteboard and go through the problem step by step. We get so caught up in jumping from one task to the next that we forget to make time for deep reflection. It’s usually in moments like these that I find answers to problems bugging me. 

What is your superpower?
Making people laugh at ridiculous jokes! On a more serious note, communication. 

What is your definition of success?
Bringing a positive impact to lives other than yours. 

What are you passionate about?
Tech for social good, mentoring women in tech and female students, promoting more mindfulness in the use of technology, standing against addictive and biased design patterns and algorithms, creating inclusive spaces and communities. 

Who do you look up to?
Abdul Sattar Edhi – one of the greatest humanitarians to walk the earth. He dedicated his entire life to serving the poor and the ill through the world’s largest volunteer-run ambulance network and shelters.

What is something you love to do? And something you dislike?
I love to build stuff whether it’s assembling IKEA furniture or geeking out on an Arduino. I also love to read poetry! And I have an intense dislike for washing dishes! 

Share a story about something or someone who has been very impactful on your life or career?
My undergrad advisor, Dr. Fareed Zaffar at the Lahore University of Management Sciences pushed me into applying to graduate schools when I had no plans of doing so and didn’t believe in myself. I ended up getting accepted by some of the most prestigious universities in the world and I owe him a great deal for being where I am today. He continues to be a mentor who I regularly reach out to for advice.
The post Women in Tech Week Profile: Amn Rahman appeared first on Docker Blog.

Women in Tech Week Profile: Jenny Fong

We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps. 

Jenny Fong is a Senior Director of Product Marketing at Docker. Follow her on Twitter @TechGalJenny.

What is your job? 
Senior Director of Product Marketing.

How long have you worked at Docker? 
2 ½ years.

Is your current role one that you always intended on your career path? 
Nope! I studied engineering and started in a technical role at a semiconductor company. I realized there that I really enjoyed helping others understand how technology works, and that led me to Product Marketing! What I love about the role is that it’s extremely cross-functional. You work closely with engineering, product management, sales and marketing, and it requires both left brain and right brain skills. My technical background helps me to understand our products, while my creative side helps me communicate our products’ core value propositions. 

What is your advice for someone entering the field?
It’s always good to be self-aware. Know your strengths and weaknesses, and look for opportunities that align to your strengths or give you a chance to work on areas you wish to develop. You’ll be able to shine and you’ll be happier too!

Tell us about a favorite moment or memory at Docker or from your career? 
My second week at Docker was DockerCon in Austin in 2017. I met a few of our customers who were speaking at the event, and they spoke with such passion and excitement about the projects they led and the outcomes they delivered to their organizations. I knew I had made the right decision to join Docker at that moment.

What do you do to get “unstuck” on a really difficult problem/design/bug?
I love using analogies! If you can’t wrap your head around a new problem, is it similar to any other problems? The analogy can sometimes help you test some ideas – if it’s true for the analogy, is it true for your particular problem? 

What is your superpower?
Puzzles! I love solving puzzles – both literal ones (I’m a daily NY Times crossword solver and always love a good jigsaw puzzle), and business ones (launching a new product to a new market). 

What is your definition of success?
Beyond job titles and awards, I think success is when you’ve helped someone else. That could be helping them to do their job faster, helping them learn about something new, helping them close the deal or finish the project. When you help someone else, you’re impacting her or his life, and that is very rewarding. 


The post Women in Tech Week Profile: Jenny Fong appeared first on Docker Blog.