Docker for AWS and Azure: Secure By Default Container Platform

Docker for AWS and Docker for Azure are much more than a simple way to set up Docker in the cloud. They provision secure-by-default infrastructure, giving you a secure platform to build, ship and run Docker apps in the cloud. Available for free in Community Edition and as a subscription with support and integrated management in Enterprise Edition, Docker for AWS and Docker for Azure let you leverage pre-configured security features for your apps today, without having to be a cloud infrastructure expert.
You don’t have to take our word for it: in February 2017, we engaged NCC Group, an independent security firm, to conduct a security assessment of Docker for AWS and Docker for Azure. The assessment covered the Community Edition and the Enterprise Edition Basic tier of both products, and took place from February 6-17. NCC Group was tasked with assessing whether these Docker Editions not only provisioned secure infrastructure with sensible defaults, but also leveraged and integrated the best security features of each cloud. We’d like to openly share their findings with you today.
NCC Group validated our security model and defaults, including:

Cloud-specific access control with IAM roles in AWS and Service Principals in Azure to run enterprise workloads in a least-privileged manner
Network configuration settings, including newly provisioned load balancers that are dynamically updated as applications are created and updated
Underlying host network configuration review to provide minimal network exposure
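To make “least-privileged” concrete, an instance role in AWS is scoped by attaching a policy that allows only the specific actions the swarm nodes need, rather than broad administrative rights. A hypothetical illustrative sketch (these actions are examples, not Docker for AWS’s actual policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SwarmNodeMinimum",
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "elasticloadbalancing:DescribeLoadBalancers"
      ],
      "Resource": "*"
    }
  ]
}
```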

We encourage you to review their full reports for Docker for AWS and Docker for Azure.
NCC Group does note some limitations of Docker for AWS and Azure. For example, access is managed with a single SSH key, which makes sharing access impractical for larger teams of developers and operators. Docker has additional products that address this:

Fleet Management from Docker Cloud to let you share access to a Docker Community Edition (CE) swarm mode cluster using Docker ID, including integration with Docker for Mac and Windows
Docker Enterprise Edition Standard and Advanced tiers (formerly known as Docker Datacenter) for AWS and Azure provide a full Container-as-a-Service environment with integrated user management and granular RBAC

Additionally, NCC Group has previously covered the Docker Engine’s security features in their whitepaper on hardening Linux Containers. This included validating runtime protections such as syscall filtering with seccomp and dropping Linux capabilities by default.
We’ve also worked with NCC Group to validate the cryptography and system security for Notary, our signing and verification framework that ensures Docker images have not been tampered with and are always up to date. Read the full report.
Docker is continuing to improve Docker for AWS and Azure (and GCP) to give users an easy-to-use way to configure secure container setups in the cloud. Click here to get started with Docker for AWS and Docker for Azure today.


The post Docker for AWS and Azure: Secure By Default Container Platform appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Online meetup recap: Introduction to LinuxKit

At DockerCon 2017 we introduced LinuxKit: A toolkit for building secure, lean and portable Linux subsystems. For this Online Meetup, Docker Technical Staff member Rolf Neugebauer gave an introduction to LinuxKit, explained the rationale behind its development and gave a demo on how to get started using it.

Watch the recording and slides

Additional Q&A
You said the ONBOOT containers are run sequentially, does it wait for one to finish before it starts the next?
Yes, the next ONBOOT container is only started once the previous one has finished.
How do you build your own kernel to use?
See ./docs/kernels.md
How would you install other software that is not a container per se, e.g. sshd?
Everything apart from the init process and runc/containerd runs in a container. There is an example in ./examples/sshd.yml showing how to run an SSH server.
Can I load kernel modules – iptables/conntrack for example?
Yes. You can compile modules and add them to the image as described in ./docs/kernels.md. There is an open issue to allow compilation of kernel modules at run time.
Does it have to be Alpine linux – can it be say minimal Debian?
We mainly use Alpine for packages. The base root filesystem is essentially busybox with a minimal init system, which we are planning to replace with a custom init program. You can create packages with Debian if you like.
How do you make data persistent, like Docker volumes, outside of the LinuxKit image?
There are examples of how to format, mount and use persistent disks, e.g. ./examples/docker.yml, which uses a persistent disk to store Docker images.
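Tying these answers together, a LinuxKit image is described by a single yml file with kernel, init, onboot and services sections. Here is a minimal sketch; the image names and tags below are illustrative placeholders, not pinned releases, so use the files under ./examples as the authoritative starting point:

```yaml
kernel:
  image: linuxkit/kernel:4.9.x      # custom kernels go here (see ./docs/kernels.md)
  cmdline: "console=tty0"
init:
  - linuxkit/init:latest            # minimal busybox-based init layer
  - linuxkit/runc:latest
  - linuxkit/containerd:latest
onboot:                             # run sequentially; each waits for the previous
  - name: dhcpcd
    image: linuxkit/dhcpcd:latest
    command: ["/sbin/dhcpcd", "--nobackground", "-1"]
services:                           # long-running containers, e.g. an SSH server
  - name: sshd
    image: linuxkit/sshd:latest
```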
Bonus Talk: LinuxKit Security SIG


Learn more about LinuxKit and other components of the Moby Project

Attend the Moby Summit on 6/19 in San Francisco
Read more about LinuxKit
Stay up to date! Weekly LinuxKit Status Reports
More questions about LinuxKit? Join the Docker Community Slack: #linuxkit channel

Source: https://blog.docker.com/feed/

Announcing the Docker Student Developer Kit & Campus Ambassador Program!

For quite some time now we have been receiving daily requests from students all over the world, asking for our help learning Docker, using Docker and teaching their peers how to use Docker. We love their enthusiasm, so we decided it was time to reach out to the student community and give them the helping hand they need!

Understanding how to use Docker is now a must-have skill for students. Here are 5 reasons why:

Understanding how to use Docker is one of the most important skills to learn if you want to advance in a career in tech, according to Business Insider.
You can just start coding instead of spending time setting up your environment.
You can collaborate easily with your peers and enable seamless group work: Docker eliminates any ‘works on my machine’ issues.
Docker allows you to easily build applications with a modern microservices architecture.
Using Docker will greatly enhance the security of your applications.

Getting Started with Docker
Are you a student who is excited about the prospect of using Docker but still don’t know exactly what Docker is or where to start learning? Now that your finals are over and you have all this free time on your hands, it’s the perfect time for you to get started! Here are a couple of resources to get you up to speed in time for the fall semester:

Get Started with Docker (official documentation)
Introduction to Docker Presentation (slides)
Docker Beginner labs (Self paced training)

The Docker Student Developer Kit
We know that many college students are eager to learn and use Docker but don’t have the money to build, ship and run their apps with Docker on their favorite cloud. We really understand all the associated costs of being a student (tuition, textbooks, lab materials, video games, etc.), so we decided students deserved our assistance. That is why we are giving every student 5 private repos from Docker Cloud, free for one year! What is Docker Cloud? Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying services created from images (read more here).
The Docker Student Developer Kit will also contain access to many free images from Docker Store publishers (hundreds of Enterprise-grade images and thousands of Community images)! As if all that wasn’t enough, cloud providers Azure, AWS and DigitalOcean are also feeling generous and are offering substantial cloud credits to the first 150 students who apply for the kit! Get your Student Developer Kit by applying here! Just make sure to provide your school-issued email address, upload your student card, and specify your preferred cloud provider!
Docker in Higher Education Community Directory
In redeeming the Docker Student Developer Kit, students will also have access to the Docker in Higher Education Community Directory and the Docker Community Slack team, including the #docker-students channel. The benefits of this are twofold: students will receive updates on Docker community events, activities, and programs, including exclusive invitations and promo codes to DockerCon and other community events; and they will be able to network easily with like-minded students and teachers from across the globe. The directory allows searching for other members by location or interest, private messaging, and group discussions on the Slack channel without sharing email addresses. The possibilities for collaboration are limitless!
The Docker Campus Ambassador Program

For those students who are already using Docker and want to initiate and foster a Docker community on their college campus, we have created the Docker Campus Ambassador program. This program is for students of any discipline who already have an intermediate to advanced knowledge of Docker and want to run events (workshops, talks, show-and-tells, etc.) to help their peers learn Docker. Students who are accepted into this program will receive exclusive training from Docker to learn both technical and professional skills, privileged access to the latest Docker editions, admission to all Beta programs, discounted and free tickets to community events like DockerCon and of course… lots of swag! Students who apply to this program should be leaders on campus and have a knack for organizing and catalyzing groups of people. If this is you, please read the guidelines and apply to the Campus Ambassadors program here.


Source: https://blog.docker.com/feed/

Draft: Kubernetes container development made easy

Today’s post is by Brendan Burns, Director of Engineering at Microsoft Azure and Kubernetes co-founder.

About a month ago Microsoft announced the acquisition of Deis to expand our expertise in containers and Kubernetes. Today, I’m excited to announce a new open source project derived from this newly expanded Azure team: Draft. While the strengths of Kubernetes for deploying and managing applications at scale are by now well understood, the process of developing a new application for Kubernetes is still too hard. It’s harder still if you are new to containers, Kubernetes, or developing cloud applications.

Draft fills this role. As its name implies, it is a tool that helps you begin that first draft of a containerized application running in Kubernetes. When you first run the draft tool, it automatically discovers the code that you are working on and builds out the scaffolding to support containerizing your application. Using heuristics and a variety of pre-defined project templates, draft will create an initial Dockerfile to containerize your application, as well as a Helm Chart to enable your application to be deployed and maintained in a Kubernetes cluster. Teams can even bring their own draft project templates to customize the scaffolding that is built by the tool.

But the value of draft extends beyond simply scaffolding in some files to help you create your application. Draft also deploys a server into your existing Kubernetes cluster that is automatically kept in sync with the code on your laptop. Whenever you make changes to your application, the draft daemon on your laptop synchronizes that code with the draft server in Kubernetes, and a new container is built and deployed automatically without any user action required.
Draft enables the “inner loop” development experience for the cloud. Of course, as is the expectation with all infrastructure software today, Draft is available as an open source project, and it itself is in “draft” form :) We eagerly invite the community to come and play around with draft today; we think it’s pretty awesome, even in this early form. But we’re especially excited to see how we can develop a community around draft to make it even more powerful for all developers of containerized applications on Kubernetes.

To give you a sense for what Draft can do, here is an example drawn from the Getting Started page in the GitHub repository. There are multiple example applications included within the examples directory. For this walkthrough, we’ll be using the python example application, which uses Flask to provide a very simple Hello World webserver.

$ cd examples/python

Draft Create

We need some “scaffolding” to deploy our app into a Kubernetes cluster. Draft can create a Helm chart, a Dockerfile and a draft.toml with draft create:

$ draft create
--> Python app detected
--> Ready to sail
$ ls
Dockerfile  app.py  chart/  draft.toml  requirements.txt

The chart/ and Dockerfile assets created by Draft default to a basic Python configuration. This Dockerfile harnesses the python:onbuild image, which will install the dependencies in requirements.txt and copy the current directory into /usr/src/app. And to align with the service values in chart/values.yaml, this Dockerfile exposes port 80 from the container.

The draft.toml file contains basic configuration about the application, like the name, which namespace it will be deployed to, and whether to deploy the app automatically when local files change.

$ cat draft.toml
[environments]
  [environments.development]
    name = "tufted-lamb"
    namespace = "default"
    watch = true
    watch_delay = 2

Draft Up

Now we’re ready to deploy app.py to a Kubernetes cluster. Draft handles these tasks with one draft up command:

reads configuration from draft.toml
compresses the chart/ directory and the application directory as two separate tarballs
uploads the tarballs to draftd, the server-side component
draftd then builds the docker image and pushes the image to a registry
draftd instructs helm to install the Helm chart, referencing the Docker registry image just built

With the watch option set to true, we can let this run in the background while we make changes later on…

$ draft up
--> Building Dockerfile
Step 1 : FROM python:onbuild
onbuild: Pulling from library/python
…
Successfully built 38f35b50162c
--> Pushing docker.io/microsoft/tufted-lamb:5a3c633ae76c9bdb81b55f5d4a783398bf00658e
The push refers to a repository [docker.io/microsoft/tufted-lamb]
…
5a3c633ae76c9bdb81b55f5d4a783398bf00658e: digest: sha256:9d9e9fdb8ee3139dd77a110fa2d2b87573c3ff5ec9c045db6009009d1c9ebf5b size: 16384
--> Deploying to Kubernetes
    Release "tufted-lamb" does not exist. Installing it now.
--> Status: DEPLOYED
--> Notes:
    1. Get the application URL by running these commands:
    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
    You can watch the status by running ‘kubectl get svc -w tufted-lamb-tufted-lamb’
  export SERVICE_IP=$(kubectl get svc --namespace default tufted-lamb-tufted-lamb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:80

Watching local files for changes…

Interact with the Deployed App

Using the handy output that follows successful deployment, we can now contact our app. Note that it may take a few minutes before the load balancer is provisioned by Kubernetes. Be patient!

$ export SERVICE_IP=$(kubectl get svc --namespace default tufted-lamb-tufted-lamb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl http://$SERVICE_IP

When we curl our app, we see our app in action! A beautiful “Hello World!” greets us.

Update the App

Now, let’s change the “Hello, World!” output in app.py to output “Hello, Draft!” instead:

$ cat <<EOF > app.py
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, Draft!\n"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8080)
EOF

Draft Up(grade)

Now if we watch the terminal that we initially called draft up with, Draft will notice that there were changes made locally and call draft up again. Draft then determines that the Helm release already exists and will perform a helm upgrade rather than attempting another helm install:

--> Building Dockerfile
Step 1 : FROM python:onbuild
…
Successfully built 9c90b0445146
--> Pushing docker.io/microsoft/tufted-lamb:f031eb675112e2c942369a10815850a0b8bf190e
The push refers to a repository [docker.io/microsoft/tufted-lamb]
…
--> Deploying to Kubernetes
--> Status: DEPLOYED
--> Notes:
    1. Get the application URL by running these commands:
    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
    You can watch the status by running ‘kubectl get svc -w tufted-lamb-tufted-lamb’
  export SERVICE_IP=$(kubectl get svc --namespace default tufted-lamb-tufted-lamb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:80

Now when we run curl http://$SERVICE_IP, our first app has been deployed and updated to our Kubernetes cluster via Draft!

We hope this gives you a sense for everything that Draft can do to streamline development for Kubernetes. Happy drafting!

–Brendan Burns, Director of Engineering, Microsoft Azure

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Follow us on Twitter @Kubernetesio for latest updates
Connect with the community on Slack
Get involved with the Kubernetes project on GitHub
Source: kubernetes

Managing microservices with the Istio service mesh

Today’s post is by the Istio team, showing how you can get visibility, resiliency, security and control for your microservices in Kubernetes.

Services are at the core of modern software architecture. Deploying a series of modular, small (micro-)services rather than big monoliths gives developers the flexibility to work in different languages, technologies and release cadences across the system, resulting in higher productivity and velocity, especially for larger teams.

With the adoption of microservices, however, new problems emerge due to the sheer number of services that exist in a larger system. Problems that had to be solved once for a monolith, like security, load balancing, monitoring, and rate limiting, need to be handled for each service.

Kubernetes and Services

Kubernetes supports a microservices architecture through the Service construct. It allows developers to abstract away the functionality of a set of Pods and expose it to other developers through a well-defined API. It allows adding a name to this level of abstraction and performing rudimentary L4 load balancing. But it doesn’t help with higher-level problems, such as L7 metrics, traffic splitting, rate limiting, circuit breaking, etc.

Istio, announced last week at GlueCon 2017, addresses these problems in a fundamental way through a service mesh framework. With Istio, developers can implement the core logic for the microservices, and let the framework take care of the rest: traffic management, discovery, service identity and security, and policy enforcement. Better yet, this can also be done for existing microservices without rewriting or recompiling any of their parts.
Istio uses Envoy as its runtime proxy component and provides an extensible intermediation layer which allows global cross-cutting policy enforcement and telemetry collection. The current release of Istio is targeted at Kubernetes users and is packaged in a way that you can install in a few lines and get visibility, resiliency, security and control for your microservices in Kubernetes out of the box.

In a series of blog posts, we’ll look at a simple application that is composed of 4 separate microservices. We’ll start by looking at how the application can be deployed using plain Kubernetes. We’ll then deploy the exact same services into an Istio-enabled cluster without changing any of the application code, and see how we can observe metrics. In subsequent posts, we’ll focus on more advanced capabilities such as HTTP request routing, policy, identity and security management.

Example Application: BookInfo

We will use a simple application called BookInfo, which displays information, reviews and ratings for books in a store. The application is composed of four microservices written in different languages. Since the container images for these microservices can all be found in Docker Hub, all we need to deploy this application in Kubernetes are the yaml configurations.

It’s worth noting that these services have no dependencies on Kubernetes and Istio, but they make an interesting case study. Particularly, the multitude of services, languages and versions for the reviews service makes it an interesting service mesh example. More information about this example can be found here.

Running the Bookinfo Application in Kubernetes

In this post we’ll focus on the v1 version of the app. Deploying it with Kubernetes is straightforward, no different than deploying any other services.
The Service and Deployment resources for the productpage microservice look like this:

apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  type: NodePort
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: productpage-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: productpage
        track: stable
    spec:
      containers:
      - name: productpage
        image: istio/examples-bookinfo-productpage-v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080

The other two services that we will need to deploy if we want to run the app are details and reviews-v1. We don’t need to deploy the ratings service at this time because v1 of the reviews service doesn’t use it. The remaining services follow essentially the same pattern as productpage. The yaml files for all services can be found here.

To run the services as an ordinary Kubernetes app:

kubectl apply -f bookinfo-v1.yaml

To access the application from outside the cluster we’ll need the NodePort address of the productpage service:

export BOOKINFO_URL=$(kubectl get po -l app=productpage -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc productpage -o jsonpath={.spec.ports[0].nodePort})

We can now point the browser to http://$BOOKINFO_URL/productpage and see the application.

Running the Bookinfo Application with Istio

Now that we’ve seen the app, we’ll adjust our deployment slightly to make it work with Istio. We first need to install Istio in our cluster. To see all of the metrics and tracing features in action, we also install the optional Prometheus, Grafana, and Zipkin addons.
We can now delete the previous app and start the Bookinfo app again using the exact same yaml file, this time with Istio:

kubectl delete -f bookinfo-v1.yaml
kubectl apply -f <(istioctl kube-inject -f bookinfo-v1.yaml)

Notice that this time we use the istioctl kube-inject command to modify bookinfo-v1.yaml before creating the deployments. It injects the Envoy sidecar into the Kubernetes pods as documented here. Consequently, all of the microservices are packaged with an Envoy sidecar that manages incoming and outgoing traffic for the service.

In the Istio service mesh we will not want to access the application productpage directly, as we did in plain Kubernetes. Instead, we want an Envoy sidecar in the request path so that we can use Istio’s management features (version routing, circuit breakers, policies, etc.) to control external calls to productpage, just like we can for internal requests. Istio’s Ingress controller is used for this purpose.

To use the Istio Ingress controller, we need to create a Kubernetes Ingress resource for the app, annotated with kubernetes.io/ingress.class: "istio", like this:

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bookinfo
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /productpage
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /login
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /logout
        backend:
          serviceName: productpage
          servicePort: 9080
EOF

This gives us the resulting deployment with Istio and the v1 version of the bookinfo app. This time we will access the app using the NodePort address of the Istio Ingress controller:

export BOOKINFO_URL=$(kubectl get po -l istio=ingress -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc istio-ingress -o jsonpath={.spec.ports[0].nodePort})

We can now load the page at http://$BOOKINFO_URL/productpage and once again see the running app; there should be no difference from the previous deployment without Istio for the user. However, now that the application is running in the Istio service mesh, we can immediately start to see some benefits.

Metrics Collection

The first thing we get from Istio out of the box is the collection of metrics in Prometheus. These metrics are generated by the Istio filter in Envoy, collected according to default rules (which can be customized), and then sent to Prometheus. The metrics can be visualized in the Istio dashboard in Grafana. Note that while Prometheus is the out-of-the-box default metrics backend, Istio allows you to plug in others, as we’ll demonstrate in future blog posts.

To demonstrate, we’ll start by running the following command to generate some load on the application:

wrk -t1 -c1 -d20s http://$BOOKINFO_URL/productpage

We obtain Grafana’s NodePort URL:

export GRAFANA_URL=$(kubectl get po -l app=grafana -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc grafana -o jsonpath={.spec.ports[0].nodePort})

We can now open a browser at http://$GRAFANA_URL/dashboard/db/istio-dashboard and examine the various performance metrics for each of the Bookinfo services.

Distributed Tracing

The next thing we get from Istio is call tracing with Zipkin. We obtain its NodePort URL:

export ZIPKIN_URL=$(kubectl get po -l app=zipkin -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc zipkin -o jsonpath={.spec.ports[0].nodePort})

We can now point a browser at http://$ZIPKIN_URL/ to see request trace spans through the Bookinfo services. Although the Envoy proxies send trace spans to Zipkin out of the box, to leverage its full potential, applications need to be Zipkin aware and forward some headers to tie the individual spans together. See zipkin-tracing for details.

Holistic View of the Entire Fleet

The metrics that Istio provides are much more than just a convenience.
They provide a consistent view of the service mesh by generating uniform metrics throughout. We don’t have to worry about reconciling different types of metrics emitted by various runtime agents, or adding arbitrary agents to gather metrics for legacy uninstrumented apps. We also no longer have to rely on the development process to properly instrument the application to generate metrics. The service mesh sees all the traffic, even into and out of legacy “black box” services, and generates metrics for all of it.

Summary

The demo above showed how, in a few steps, we can launch Istio-backed services and observe L7 metrics on them. Over the next weeks we’ll follow on with demonstrations of more Istio capabilities like policy management and HTTP request routing.

Google, IBM and Lyft joined forces to create Istio based on our common experiences building and operating large and complex microservice deployments for internal and enterprise customers. Istio is an industry-wide community effort. We’ve been thrilled to see the enthusiasm from the industry partners and the insights they brought. As we take the next step and release Istio to the wild, we cannot wait to see what the broader community of contributors will bring to it. If you’re using or considering using a microservices architecture on Kubernetes, we encourage you to give Istio a try, learn more about it at istio.io, let us know what you think, or better yet, join the developer community to help shape its future!

–On behalf of the Istio team: Frank Budinsky, Software Engineer at IBM; Andra Cismaru, Software Engineer at Google; and Israel Shalom, Product Manager at Google.

Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
Source: kubernetes

Docker Security at PyCon: Threat Modeling & State Machines

The Docker Security Team was out in force at PyCon 2017 in Portland, OR, giving two talks focused on helping the Python community achieve better security. First up were David Lawrence and Ying Li with their “Introduction to Threat Modelling” talk.

Threat Modelling is a structured process that aids an engineer in uncovering security vulnerabilities in an application design or implemented software. The great majority of software grows organically, gaining new features as some critical mass of users requests them. These features are often implemented without full consideration of how they may impact every facet of the system they are augmenting.
Threat modelling aims to increase awareness of how a system operates, and in doing so, identify potential vulnerabilities. The process is broken up into three steps: data collection, analysis, and remediation. An effective way to run the process is to have a security engineer sit with the engineers responsible for design or implementation and guide a structured discussion through the three steps.
For the purpose of this article, we’re going to consider how we would threat model a house, as the process applies to real-world scenarios as well as software.

Data Collection
Five categories of data must be collected in a threat model:

External Dependencies – services that elements of the model will interact with, but that will not be decomposed during the course of the current threat model. Our house has external dependencies on an alarm monitoring service and various utilities: power, water, etc.
Entry Points – the ways in which your system can receive input and provide output. A completely closed system is secure by design, but often not very useful. Our house has three intentional entry points: the front and back doors, and a garage door. It also has a number of unintentional, but usable, entry points: the windows! For the purposes of this model, we’ll keep things simple and assume, as paranoid security wonks, we’ve nailed our windows shut.
Assets – anything we care about protecting in our system. These are both things an attacker can carry away, like sensitive user data, and resources an attacker might consume. In our house, we care about valuables, important papers, and irreplaceable data like family photos. We also care about our utility bills and want to ensure we’re not paying for somebody else’s car wash. Not everything is an asset, though; we don’t care about toilet paper, as long as there’s at least one roll left.
Trust Levels – the tiers of access within the system. We have four trust levels concerning our house:

The residents – people who live in the house and have the highest levels of access.
Guests – friends and family invited to stay overnight
Visitors – people invited into the home but restricted to common areas like the living room, kitchen, and backyard.
Passers by – strangers who pass by the house but will not be invited inside.

Data Flows – how data moves around the system. The primary data flow of our house is shown below. What we see is that there are Trust Boundaries at our entry points, indicating a change in the trust associated with the data that made it across the boundary. In this case, the boundary itself is the lock on the doors, and the possession of a garage door opener. Once somebody has crossed one of these boundaries though, they have full access to the house, garage, and associated storage.
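Once collected, the five categories lend themselves to being written down as plain data that the analysis step can query mechanically. A minimal Python sketch of the house model (the structure and field names are our own illustration, not part of any standard tool):

```python
# A threat model for the house, captured as plain data. The structure and
# field names here are illustrative, not part of any standard tool.
house_model = {
    "external_dependencies": ["alarm monitoring service", "power", "water"],
    "entry_points": ["front door", "back door", "garage door"],
    "assets": ["valuables", "important papers", "family photos", "utilities"],
    "trust_levels": ["residents", "guests", "visitors", "passers-by"],
    # Trust boundaries sit on the flows that cross an entry point.
    "data_flows": [
        {"from": "outside", "to": "house", "boundary": "door lock"},
        {"from": "outside", "to": "garage", "boundary": "garage door opener"},
        {"from": "house", "to": "garage", "boundary": None},  # the anomaly
    ],
}

# Flows with no trust boundary are the first candidates for analysis.
missing = [f for f in house_model["data_flows"] if f["boundary"] is None]
print(len(missing))  # 1
```

Writing the model down this way makes the anomaly discussed below (the missing boundary between house and garage) something a script can flag rather than something a reviewer has to spot by eye.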

Analysis
From the data collected, and a deep understanding of how the system works, we can begin to look for and inspect anomalies. For example, if a data flow indicates there is no trust boundary between two processes, this should be carefully analyzed. In our data flow diagram, we see there is no trust boundary between the house and the garage. This is probably undesirable but let us further analyze the data to objectively establish why and how we’ll fix it.
While there are many ways to analyze and score vulnerabilities, we have found the STRIDE classification system, and DREAD scoring system to be effective and straightforward. STRIDE is an acronym denoting 6 categories of vulnerability:

Spoofing – an entity pretending to be something it’s not, generally by capturing a legitimate user’s credentials
Tampering – the modification of data persisted within the system
Repudiation – the ability to perform operations that cannot be tracked, or for which the attacker can actively cover their tracks
Information Disclosure – the acquisition of data by a trust level that should not have access to it
Denial of Service – preventing legitimate users from accessing the service
Elevation of Privilege – an attack aimed at allowing an entity of lower trust level to perform actions restricted to a higher trust level

One takes each category and looks for behaviour permitted by the system that creates vulnerabilities within that category. It is common to find a single vulnerability that spans multiple STRIDE categories. Some example vulnerabilities for our house might be:

Spoofing: When a plumber knocks on our door, if we didn’t schedule them directly (maybe they claim a housemate called them out), we don’t necessarily know they are legitimate. They could just be trying to gain entry to steal our valuables while we’re not looking.
Tampering: One of our housemates likes to smoke but doesn’t like going outside in inclement weather, so they disable the smoke alarm in their bedroom.
Repudiation: Some neighbourhood kid kicked a ball through the front window but we have no way to prove who it was.
Information Disclosure: We only just moved in and haven’t gotten around to installing curtains yet. Anybody walking by can see who is in the house!
Denial of Service: A local vandal thought it would be funny to troll the new neighbours by squirting glue in our locks… now we can't get the doors open and we're stuck outside.
Elevation of Privilege: It hasn’t happened yet, but we’ve heard garage doors are pretty insecure. Somebody that can get our garage door open can immediately get in to the rest of the house.

Having defined as many vulnerabilities as we can, we score each one. The DREAD system defines five metrics on which each vulnerability must be scored. Each metric is generally scored on a consistent scale, often 1 to 10, with 1 being the least severe and 10 the most severe. The sum of the scores then allows us to prioritize our vulnerabilities relative to each other.
The 5 DREAD metrics are:

Damage: how bad would the financial and reputation damage be to your organization and its users.
Reproducibility: how easy is it to trigger the vulnerability. Most vulnerabilities will score a "10" here, but those that, for example, involve timing attacks would generally receive lower scores as they may not be triggered 100% of the time.
Exploitability: a measure of what resources are required to use the attack. The lowest score of 1 would generally be reserved for nation states, while a score of 10 might indicate the attack could be done through something as simple as URL manipulation in a browser.
Affected Users: a measure of how many users are affected by the attack. For example, maybe it only affects a specific class of user.
Discoverability: how easy it is to uncover the vulnerability. A score of 10 would indicate it’s easily findable through standard web scraping tools and open source pentest tools. At the other end of the scale, a vulnerability requiring intimate knowledge of a system’s internals would likely score a 1.

Let's score our Information Disclosure vulnerability against our Elevation of Privilege vulnerability to see how they compare. Each metric below gives the Information Disclosure (ID) and Elevation of Privilege (EoP) scores, followed by the explanation.

Damage – ID: 1, EoP: 10. Knowing who is in our house is very low damage, and this could also be observed from who enters and leaves. Gaining access to our house, however, is severe.

Reproducibility – ID: 10, EoP: 10. Both vulnerabilities can be reproduced 100% of the time. There are no timing elements involved.

Exploitability – ID: 10, EoP: 5. While it's easy to look in the windows, it's not as easy to get hands on a garage door opener to effect the initial compromise of the garage. We're relying on our car being somewhat secure, and on none of our residents, guests, or visitors leaving the garage open.

Affected Users – ID: 10, EoP: 10. All residents, guests, and visitors are affected by both vulnerabilities.

Discoverability – ID: 10, EoP: 7. It's obvious to everyone that there are no window coverings. It is less obvious to an external observer that there is no boundary between the garage and house. It might be observable from outside under the right conditions, so we're estimating this to be easy to discover, but not entirely trivial.

Summing the scores gives Information Disclosure a total of 41 and Elevation of Privilege a total of 42, so the Elevation of Privilege vulnerability takes priority, if only narrowly.
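The prioritization step is simple arithmetic. A minimal Python sketch (the data structures are illustrative, not from any threat-modeling library) that totals the DREAD scores above and ranks the vulnerabilities:

```python
# DREAD scoring sketch: sum the five metric scores per vulnerability and
# sort so the highest total is addressed first. Scores are taken from the
# comparison above.
dread_scores = {
    "Information Disclosure": {
        "damage": 1, "reproducibility": 10, "exploitability": 10,
        "affected_users": 10, "discoverability": 10,
    },
    "Elevation of Privilege": {
        "damage": 10, "reproducibility": 10, "exploitability": 5,
        "affected_users": 10, "discoverability": 7,
    },
}

def prioritize(scores):
    """Return (vulnerability, total) pairs, highest total first."""
    totals = {name: sum(metrics.values()) for name, metrics in scores.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

for name, total in prioritize(dread_scores):
    print(f"{name}: {total}")
# Elevation of Privilege totals 42, narrowly outranking Information Disclosure at 41.
```

In a real threat model with dozens of vulnerabilities, keeping the scores in a structure like this makes re-prioritization trivial as scores are revised.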

Remediation
For each of our categories in STRIDE there is an associated class of security control used to mitigate it. Exactly how the control is implemented will depend on the system being modelled.

Spoofing – Authentication; the ability to confirm the validity of the request. We would agree with our housemates to always use a plumber from a specific service, and be able to call the head office to confirm the credentials of the plumber.
Tampering – Integrity; we would regularly, and randomly, audit the smoke alarms in the house to ensure nobody has disabled them.
Repudiation – Non-Repudiation; we’re going to install some security cameras to ensure we capture images of the next kid to kick a ball through the window.
Information Disclosure – Confidentiality; we’ll install some thick curtains that can be closed when we don’t want the inside of the house to be outwardly visible.
Denial of Service – Availability; we’re going to install a lock that has both traditional keys and a digital code. This gives us multiple ways to unlock the door, should one be broken or otherwise fail.
Elevation of Privilege – Authorization; we're going to install a single cylinder lock between the house and the garage, requiring a key on the garage side, but no key on the house side. This prevents garage access being pivoted into house access, but still makes it easy to move from the highly privileged house to the relatively less privileged garage.

Having completed all these steps, it’s time to go and implement the actual fixes!
Look out for our next Security Team blog post on Ashwini Oruganti’s talk “Designing Secure APIs with State Machines”.

Introduction to Threat Modeling & State Machines by @endophage @cyli @docker Security team

The post Docker Security at PyCon: Threat Modeling & State Machines appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Get involved with the Moby Project by attending upcoming Moby Summits!

Last month at DockerCon, Solomon introduced the Moby Project: a new open-source project to advance the software containerization movement. The idea behind the project is to help the ecosystem take containers mainstream by providing a library of components, a framework for assembling them into custom container-based systems and a place for all container enthusiasts to experiment and exchange ideas. Going forward, Docker will be assembled using Moby; see Moby and Docker or the diagram below for more details.

Moby Summit at DockerCon 2017
Knowing that a good number of maintainers, contributors and advanced Docker users would be attending DockerCon, we decided to organize the first Moby Summit in collaboration with the Cloud Native Computing Foundation (CNCF). The summit was a small collaborative event for container hackers who are actively maintaining, contributing or otherwise involved or interested in the design and development of components of the Moby project library, in particular: LinuxKit, containerd, InfraKit, SwarmKit, libnetwork and Notary.

Here’s what we covered during the first part of the summit:

0:05 – Opening words by Patrick Chanezon
9:05 – Moby Project Q&A with Solomon Hykes and Justin Cormack
60:14 – Quick update on containerd by Michael Crosby

Here’s what we covered during the 2nd part of the summit:

0:18 – SwarmKit update by Andrea Luzzardi
7:52 – Libnetwork update by Madhu Venugopal
16:27 – Notary update by David Lawrence
19:59 – InfraKit update by David Chung
36:39 – Infinit update by Julien Quintard
49:09 – MirageOS by Mindy Preston

Moby Summit at Docker HQ on 6/19

The feedback we received after the first Moby Summit at DockerCon was very positive, so we decided to organize more Moby Summits over the next few months. The next one will be taking place at Docker HQ on June 19th, 2017.
Here is the link to register for the event, including a high-level agenda. All revenue from ticket sales will be donated to a non-profit organization promoting diversity in the tech industry.
Looking forward to seeing you there! Can’t attend this June Moby Summit in San Francisco? Join us for our first European Moby Summit as part of DockerCon Europe in Copenhagen.


Learn more about Moby Project:

Visit the Moby Project Website
See the latest updates on the Moby Project Forum and Blog
Join the conversation on Slack

The post Get involved with the Moby Project by attending upcoming Moby Summits! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Spring Boot Development with Docker

The AtSea Shop is an example storefront application that can be deployed on different operating systems and can be customized to both your enterprise development and operational environments. In my last post, I discussed the architecture of the app. In this post, I will cover how to setup your development environment to debug the Java REST backend that runs in a container.
Building the REST Application
I used the Spring Boot framework to rapidly develop the REST backend that manages the products, customers and orders tables used in the AtSea Shop. The application takes advantage of Spring Boot's built-in application server, support for REST interfaces and ability to define multiple data sources. Because it was written in Java, it is agnostic to the base operating system and runs in either Windows or Linux containers. This allows developers to build against a heterogeneous architecture.
Project setup
The AtSea project uses multi-stage builds, a new Docker feature, which allows me to use multiple images to build a single Docker image that includes all the components needed for the application. The multi-stage build uses a Maven container to build the application jar file. The jar file is then copied to a Java Development Kit image. This makes for a more compact and efficient image because Maven is not included with the application. Similarly, the React storefront client is built in a Node image and the compiled application is also added to the final application image.
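A multi-stage Dockerfile of this shape might look like the following. This is a simplified sketch: the stage name, base image tags and source paths are illustrative, not the project's actual Dockerfile; only the jar name is taken from the ENTRYPOINT shown later in this post.

```dockerfile
# Stage 1: build the jar inside a Maven image (illustrative paths and tags)
FROM maven:3.5-jdk-8 AS build
WORKDIR /usr/src/atsea
COPY pom.xml .
COPY src ./src
RUN mvn -B package -DskipTests

# Stage 2: copy only the jar into a JDK image; Maven stays behind in stage 1
FROM openjdk:8-jdk
WORKDIR /app
COPY --from=build /usr/src/atsea/target/AtSea-0.0.1-SNAPSHOT.jar ./AtSea-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]
```

The final image contains only the JDK and the jar, which is what makes the multi-stage approach smaller than shipping the whole Maven toolchain.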
I used Eclipse to write the AtSea app. If you want info on configuring IntelliJ or NetBeans for remote debugging, you can check out the Docker Labs repository. You can also check out the code in the AtSea app GitHub repository.
I built the application by cloning the repository and importing the project into Eclipse, setting the Root Directory to the project and clicking Finish:
    File > Import > Maven > Existing Maven Projects 
Since I used Spring Boot, I took advantage of spring-boot-devtools to do remote debugging in the application. I had to add the spring-boot-devtools dependency to the pom.xml file.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
</dependency>
Note that developer tools are automatically disabled when the application is fully packaged as a jar. To ensure that devtools are available during development, I set the <excludeDevtools> configuration to false in the spring-boot-maven build plugin:
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <excludeDevtools>false</excludeDevtools>
            </configuration>
        </plugin>
    </plugins>
</build>
This example uses a Docker Compose file that creates a simplified build of the containers specifically needed for development and debugging.
version: "3.1"

services:
  database:
    build:
      context: ./database
    image: atsea_db
    environment:
      POSTGRES_USER: gordonuser
      POSTGRES_DB: atsea
    ports:
      - "5432:5432"
    networks:
      - back-tier
    secrets:
      - postgres_password

  appserver:
    build:
      context: .
      dockerfile: app/Dockerfile-dev
    image: atsea_app
    ports:
      - "8080:8080"
      - "5005:5005"
    networks:
      - front-tier
      - back-tier
    secrets:
      - postgres_password

secrets:
  postgres_password:
    file: ./devsecrets/postgres_password

networks:
  front-tier:
  back-tier:
  payment:
    driver: overlay
The Compose file uses secrets to provision passwords and other sensitive information such as certificates, without relying on environment variables. Although the example uses PostgreSQL, the application can use secrets to connect to any database defined as a Spring Boot datasource. From JpaConfiguration.java:
public DataSourceProperties dataSourceProperties() {
    DataSourceProperties dataSourceProperties = new DataSourceProperties();

    // Set password to connect to database using Docker secrets.
    try (BufferedReader br = new BufferedReader(new FileReader("/run/secrets/postgres_password"))) {
        StringBuilder sb = new StringBuilder();
        String line = br.readLine();
        while (line != null) {
            sb.append(line);
            sb.append(System.lineSeparator());
            line = br.readLine();
        }
        dataSourceProperties.setDataPassword(sb.toString());
    } catch (IOException e) {
        System.err.println("Could not successfully load DB password file");
    }
    return dataSourceProperties;
}
Also note that the appserver opens port 5005 for remote debugging, and that the build uses the Dockerfile-dev file to build a container that has remote debugging turned on. This is set in the ENTRYPOINT, which specifies the transport and address for the debugger.
ENTRYPOINT ["java", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]
Remote Debugging
To start remote debugging on the application, run compose using the docker-compose-dev.yml file.
docker-compose -f docker-compose-dev.yml up --build
Docker will build the images and start the AtSea Shop database and appserver containers. However, the application will not fully load until Eclipse's remote debugger attaches to the application. To start remote debugging, click Run > Debug Configurations…
Select Remote Java Application, then press the New button to create a configuration. In the Debug Configurations panel, give the configuration a name, select the AtSea project, and set the connection properties for the host and the port to 5005. Click Apply > Debug.

The appserver will start up.
appserver_1 | 2017-05-09 03:22:23.095  INFO 1 --- [main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
appserver_1 | 2017-05-09 03:22:23.118  INFO 1 --- [main] com.docker.atsea.AtSeaApp                : Started AtSeaApp in 38.923 seconds (JVM running for 109.984)
To test remote debugging, set a breakpoint in ProductController.java where it returns a list of products.

You can test it using curl or your preferred tool for making HTTP requests:
curl -H "Content-Type: application/json" -X GET http://localhost:8080/api/product/
Eclipse will switch to the debug perspective where you can step through the code.

The AtSea Shop example shows how easy it is to use containers as part of your normal development environment using tools that you and your team are familiar with. Download the application to try out developing with containers, or use it as a basis for your own Spring Boot REST application.
Interested in more? Check out these developer resources and videos from DockerCon 2017.

AtSea Shop demo
Docker Reference Architecture: Development Pipeline Best Practices Using Docker EE
Docker Labs

Developer Tools
Java development using Docker

DockerCon videos

Docker for Java Developers
The Rise of Cloud Development with Docker & Eclipse Che
All the New Goodness of Docker Compose
Docker for Devs

Developing the AtSea app with #Docker and #SpringBoot by @spara

The post Spring Boot Development with Docker appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Kubernetes: a monitoring guide

Today's post is by Jean-Mathieu Saponaro, Research & Analytics Engineer at Datadog, discussing what Kubernetes changes for monitoring, and how you can prepare to properly monitor a containerized infrastructure orchestrated by Kubernetes.

Container technologies are taking the infrastructure world by storm. While containers solve or simplify infrastructure management processes, they also introduce significant complexity in terms of orchestration. That's where Kubernetes comes to our rescue. Just like a conductor directs an orchestra, Kubernetes oversees our ensemble of containers, starting, stopping, creating, and destroying them automatically to keep our applications humming along.

Kubernetes makes managing a containerized infrastructure much easier by creating levels of abstraction such as pods and services. We no longer have to worry about where applications are running or if they have enough resources to work properly. But that doesn't change the fact that, in order to ensure good performance, we need to monitor our applications, the containers running them, and Kubernetes itself.

Rethinking monitoring for the Kubernetes era
Just as containers have completely transformed how we think about running services on virtual machines, Kubernetes has changed the way we interact with containers. The good news is that with proper monitoring, the abstraction levels inherent to Kubernetes provide a comprehensive view of your infrastructure, even if the containers and applications are constantly moving. But Kubernetes monitoring requires us to rethink and reorient our strategies, since it differs from monitoring traditional hosts such as VMs or physical machines in several ways.

Tags and labels become essential
With containers and their orchestration completely managed by Kubernetes, labels are now the only way we have to interact with pods and containers. That's why they are absolutely crucial for monitoring, since all metrics and events will be sliced and diced using labels across the different layers of your infrastructure. Defining your labels with a logical and easy-to-understand schema is essential so your metrics will be as useful as possible.

There are now more components to monitor
In traditional, host-centric infrastructure, we were used to monitoring only two layers: applications and the hosts running them. Now, with containers in the middle and Kubernetes itself needing to be monitored, there are four different components to monitor and collect metrics from.

Applications are constantly moving
Kubernetes schedules applications dynamically based on scheduling policy, so you don't always know where applications are running. But they still need to be monitored. That's why using a monitoring system or tool with service discovery is a must. It will automatically adapt metric collection to moving containers so applications can be continuously monitored without interruption.

Be prepared for distributed clusters
Kubernetes has the ability to distribute containerized applications across multiple data centers and potentially different cloud providers. That means metrics must be collected and aggregated among all these different sources. For more details about all these new monitoring challenges inherent to Kubernetes and how to overcome them, we recently published an in-depth Kubernetes monitoring guide. Part 1 of the series covers how to adapt your monitoring strategies to the Kubernetes era.

Metrics to monitor
Whether you use Heapster data or a monitoring tool integrating with Kubernetes and its different APIs, there are several key types of metrics that need to be closely tracked:

Running pods and their deployments
Usual resource metrics such as CPU, memory usage, and disk I/O
Container-native metrics
Application metrics, for which a service discovery feature in your monitoring tool is essential

All these metrics should be aggregated using Kubernetes labels and correlated with events from Kubernetes and container technologies. Part 2 of our series on Kubernetes monitoring guides you through all the data that needs to be collected and tracked.

Collecting these metrics
Whether you want to track these key performance metrics by combining Heapster, a storage backend, and a graphing tool, or by integrating a monitoring tool with the different components of your infrastructure, Part 3, about Kubernetes metric collection, has you covered.

Anchors aweigh!
Using Kubernetes drastically simplifies container management. But it requires us to rethink our monitoring strategies on several fronts, and to make sure all the key metrics from the different components are properly collected, aggregated, and tracked. We hope our monitoring guide will help you to effectively monitor your Kubernetes clusters. Feedback and suggestions are more than welcome.

–Jean-Mathieu Saponaro, Research & Analytics Engineer, Datadog

Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
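To illustrate why a logical label schema matters, here is a small Python sketch of label-based aggregation. The sample data is entirely hypothetical (not output from any real monitoring API); it only demonstrates the "slice and dice by label" idea described above.

```python
from collections import defaultdict

# Hypothetical per-container CPU samples, each tagged with Kubernetes labels.
samples = [
    {"labels": {"app": "frontend", "env": "prod"}, "cpu_millicores": 120},
    {"labels": {"app": "frontend", "env": "prod"}, "cpu_millicores": 80},
    {"labels": {"app": "api", "env": "prod"}, "cpu_millicores": 250},
    {"labels": {"app": "api", "env": "staging"}, "cpu_millicores": 40},
]

def aggregate_by_label(samples, label):
    """Sum CPU usage across containers sharing the same value for `label`."""
    totals = defaultdict(int)
    for sample in samples:
        totals[sample["labels"].get(label, "unlabeled")] += sample["cpu_millicores"]
    return dict(totals)

print(aggregate_by_label(samples, "app"))  # {'frontend': 200, 'api': 290}
print(aggregate_by_label(samples, "env"))  # {'prod': 450, 'staging': 40}
```

Because containers come and go, the per-container identities are ephemeral; the label values are the stable dimension along which metrics remain meaningful.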
Source: kubernetes

Docker Federal Summit Recap and videos

On May 2nd, Docker returned to the Newseum to host the second annual Docker Federal Summit. This one-day event is designed to bring government agency developers, IT ops, program leaders and the ecosystem together to share and learn about the trends driving change in IT, from containers and cloud to DevOps. We expanded the agenda this year to two tracks, with presentations from Docker, ecosystem partners, agency and community leaders to drive discussions, technology deep dives and hands-on tutorials.
View the general session replay here:

General session table of contents and slides

13:05 Iain Gray, SVP Customer Success discusses how Docker delivers a unique secure supply chain for all applications and infrastructure
33:35 Nathan McCauley, Director Security Engineering discusses the principles of least privilege design on which Docker is built
55:30 Modernize Traditional Apps to gain portability, security and efficiency without changing source code
59:13 Banjot Chanana, Senior Director Products delivers an overview and demo of Docker Enterprise Edition

In addition, the following breakout sessions dove deeper into pragmatic advice, security, development, cloud and compliance.

Lessons Learned from Deploying Containers in Production featuring a panel discussion with Booz Allen Hamilton, JIDO, GSA and USCIS
Scaling and Securing Applications on Your Terms featuring Doug Gebert, HPE Deputy CTO for DISA and DoD
Supercharge Modern App Development with Azure Government and Docker featuring Eddie Villalba and Steve Michelotti
Federal Compliance Panel Discussion featuring Andrew Weiss, Susie Adams, James Scott and Greg Elin
Docker Secure Substrate for Container Apps featuring Riyaz Faizullabhoy and Andy Clemenko

For hands-on training, the Federal Summit offered tutorials on Modernizing .NET apps, Docker Orchestration and Deploying Apps with Docker Enterprise Edition. These hands-on labs are now available publicly for anyone interested in learning more about Docker.
Last but not least, thank you to our event sponsors.

Continue your Docker journey with these helpful links

Try Docker Enterprise Edition for free
Learn more about Docker in Government
Register for an upcoming Docker webinar

 


The post Docker Federal Summit Recap and videos appeared first on Docker Blog.
Source: https://blog.docker.com/feed/