OK, I give up. Is Docker now Moby? And what is LinuxKit?

The post OK, I give up. Is Docker now Moby? And what is LinuxKit? appeared first on Mirantis | Pure Play Open Cloud.
This week at DockerCon, Docker made several announcements, but one in particular caused massive confusion as users thought that "Docker" was becoming "Moby." Well… OK, but which Docker? The Register probably put it best when it said, "Docker (the company) decided to differentiate Docker (the commercial software products Docker CE and Docker EE) from Docker (the open source project)." Tack on a second project about building core operating systems, and there's a lot to unpack.
Let's start with Moby.
What is Moby?
Docker, being the foundation of many people's understanding of containers, unsurprisingly isn't a single monolithic application. Instead, it's made up of components such as runc, containerd, InfraKit, and so on. The community works on those components (along with Docker, of course) and when it's time for a release, Docker packages them all up and out they go. With all of those pieces, as you might imagine, it's not a simple task.
And what happens if you want your own custom version of Docker? After all, Docker is built on the philosophy of "batteries included but swappable." How easy is it to swap something out?
In his blog post introducing the Moby Project, Solomon Hykes explained that the idea is to simplify the process of combining components into something usable. "We needed our teams to collaborate not only on components, but also on assemblies of components, borrowing an idea from the car industry where assemblies of components are reused to build completely different cars."
Hykes explained that from now on, Docker releases would be built using Moby and its components.  At the moment there are 80+ components that can be combined into assemblies.  He further explained that:
"Moby is comprised of:

A library of containerized backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit, …)
A framework for assembling the components into a standalone container platform, and tooling to build, test and deploy artifacts for these assemblies.
A reference assembly, called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects."

Who needs to know about Moby?
The first group that needs to know about Moby is Docker developers, as in the people building the actual Docker software, and not people building applications using Docker containers, or even people building Docker containers. (Here's hoping that eventually this nomenclature gets cleared up.) Docker developers should just continue on as usual, and Docker pull requests will be routed to the Moby project.
So everyone else is off the hook, right?
Well, um, no.
If all you do is pull together containers from pre-existing components and software you write yourself, then you're good; you don't need to worry about Moby. Unless, that is, you aren't happy with your available Linux distributions.
Enter LinuxKit.
What is LinuxKit?
While many think that Docker invented the container, in actuality Linux containers had been around for some time, and Docker containers are based on them. Which is really convenient, if you're using Linux. If, on the other hand, you are using a system that doesn't include Linux, such as a Mac, a Windows PC, or that Raspberry Pi you want to turn into an automatic goat feeder, you've got a problem.
Docker requires Linux containers, which is a problem if you have no Linux.
Enter LinuxKit.  
The idea behind LinuxKit is that you start with a minimal Linux kernel (the base distro is only 35MB) and add literally only what you need. Once you have that, you can build your application on it and run it wherever you need to. Stephen Foskett tweeted a picture of an example from the announcement:

More about LinuxKit DockerCon pic.twitter.com/TfRJ47yBdB
— Stephen Foskett (@SFoskett) April 18, 2017

The end result is that you can build containers that run on desktops, mainframes, bare metal, IoT, and VMs.
The project will be managed by the Linux Foundation, which is only fitting.
So what about Alpine, the minimal Linux that's at the heart of Docker? Docker's security director, Nathan McCauley, said that "LinuxKit's roots are in Alpine." The company will continue to use it for Docker.

Today we launch LinuxKit -- a Linux subsystem focussed on security. pic.twitter.com/Q0YJsX67ZT
— Nathan McCauley (@nathanmccauley) April 18, 2017

So what does this have to do with Moby?
What LinuxKit has to do with Moby
If you're salivating at the idea of building your own Linux distribution, take a deep breath. LinuxKit is an assembly within Moby.
So if you want to use LinuxKit, you need to download and install Moby, then use it to build your LinuxKit pieces.
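To make that concrete, here's a rough sketch of what a LinuxKit-style assembly file and build step might look like. The component images, tags, and exact CLI invocation below are illustrative assumptions rather than details from the announcement:
# minimal-web.yml: a tiny Linux image whose only long-running service is nginx
kernel:
  image: "linuxkit/kernel:4.9.x"     # kernel component from the Moby library (tag is a placeholder)
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:latest             # minimal init and runtime pieces
  - linuxkit/runc:latest
onboot:
  - name: dhcpcd                     # one-shot network setup at boot
    image: "linuxkit/dhcpcd:latest"
services:
  - name: webserver                  # the only service this image runs
    image: "nginx:alpine"
You would then hand that file to the Moby tooling (something along the lines of moby build minimal-web.yml) to produce a bootable image for your target platform.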
So there you have it. You now have the ability to build your own Linux system, and your own containerization system. But it's definitely not for the faint of heart.
Resources

Wait – we can explain, says Moby, er, Docker amid rebrand meltdown • The Register
Moby, LinuxKit Kick Off New Docker Collaboration Phase | Software | LinuxInsider
Why Docker created the Moby Project | CIO
GitHub - linuxkit/linuxkit: A toolkit for building secure, portable and lean operating systems for containers
Docker LinuxKit: Secure Linux containers for Windows, macOS, and clouds | ZDNet
Announcing LinuxKit: A Toolkit for building Secure, Lean and Portable Linux Subsystems - Docker Blog
Stephen Foskett on Twitter: "More about LinuxKit DockerCon https://t.co/TfRJ47yBdB"
Introducing Moby Project: a new open-source project to advance the software containerization movement - Docker Blog
DockerCon 2017: Moby's Cool Hack sessions - Docker Blog

The post OK, I give up. Is Docker now Moby? And what is LinuxKit? appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

DockerCon 2017: Moby’s Cool Hack sessions

Every year at DockerCon, we expand the bounds of what Docker can do with new features and products. And every day, we see great new apps that are built on top of Docker. And yet, there's always a few that stand out not just for being cool apps, but for pushing the bounds of what you can do with Docker.
This year we had two great apps that we featured in the Docker Cool Hacks closing keynote. Both hacks came from members of our Docker Captains program, a group of people from the Docker community who are recognized by Docker as very knowledgeable about Docker, and contribute quite a bit to the community.
Play with Docker
The first Cool Hack was Play with Docker by Marcos Nils and Jonathan Leibiusky. Marcos and Jonathan actually were featured in the Cool Hacks session at DockerCon EU in 2015 for their work on a Container Migration Tool.
Play with Docker is a Docker playground that you can run in your browser.

Play with Docker’s architecture is a Swarm of Swarms, running Docker in Docker instances.
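If you want a feel for the Docker-in-Docker building block on its own (this is just the underlying idea, not Play with Docker's actual deployment), the official docker:dind image lets you try it locally; depending on your Docker version you may need additional flags:
# Start a nested Docker engine inside a privileged container
$ docker run --privileged --name inner-docker -d docker:dind
# Use the CLI that ships in the same image to talk to the nested engine
$ docker exec inner-docker docker ps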

Running on pretty beefy hosts (r3.4xlarge on AWS), Play with Docker is able to run about 3,500 containers per host, only running containers as needed for a session. Play with Docker is completely open source, so you can run it on your own infrastructure, and they welcome contributions on their GitHub repo.
FaaS (Function as a Service)
The second Cool Hack was Functions as a Service (FaaS) by Alex Ellis. FaaS is a framework for building serverless functions on Docker Swarm with first-class support for metrics. Any UNIX process can be packaged as a function, enabling you to consume a range of web events without repetitive boilerplate coding. Each function runs as a container that lives only as long as it takes to run the function.

FaaS also comes with a convenient gateway tester that allows you to try out each of your functions directly in the browser.

FaaS is actively seeking contributions, so feel free to send issues and PRs on the GitHub repo.
Check out the video recording of the cool hack sessions below:


Learn more about our DockerCon 2017 cool hacks:

Check out Play with Docker
Check out and contribute to FaaS
Contribute to Play with Docker

The post DockerCon 2017: Moby’s Cool Hack sessions appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Bring In the “New” Infrastructure Stack

The post Bring In the “New” Infrastructure Stack appeared first on Mirantis | Pure Play Open Cloud.
Today, Mirantis has announced Mirantis Cloud Platform 1.0, which heralds an operations-centric approach to open cloud. But what does that mean in terms of cloud services today and into the future? I think it may change your perspective when considering or deploying cloud infrastructure.
When our Co-Founder Boris Renski declared Infrastructure Software Is Dead, he was not talking about the validity or usefulness of infrastructure software; he was talking about the delivery and operations model for infrastructure software. Historically, infrastructure software has been complicated, as well as notoriously challenging in terms of lifecycle management. The typical approach was a very slow release model comprising very large, integrated releases that arrived on the order of years for major releases (1.x, 2.x, 3.x…) and many quarters for minor releases (3.2, 3.3, 3.4…). Moving from one to the other was an extremely taxing process for IT organizations, and combined with a typical hardware refresh cycle, this usually resulted in the mega-project mentality in our industry:

Architect and deploy service(s) on a top-to-bottom stack
Once it is working, don't touch it (keep it running)
Defer consumption of new features and innovation until next update
Define a mega-project plan (typically along a 3 year HW refresh)
Execute plan by starting at 1 again

While virtualization and cloud technologies provided a separation of hardware from applications, it didn’t necessarily solve this problem.  Even OpenStack by itself did not solve this problem.  As infrastructure software, it was still released and consumed in slow, integrated cycles.
Meanwhile, many interesting developments occurred in the application space.  Microservices, agile development methodologies, CI/CD, containers, DevOps — all focused on the ability to rapidly innovate and rapidly consume software in very small increments comprising a larger whole as opposed to one large integrated release.  This approach has been successful at the application level and has allowed an arms race to develop in the software economy: who can develop new services to drive revenue for their business faster than their competition?
Ironically, this movement has been happening with applications running on the older infrastructure methodology.  Why not leverage these innovations at the infrastructure level as well?
Enter Mirantis Cloud Platform (MCP)…
MCP was designed with the operations-centric approach in mind, to be able to consume and manage cloud infrastructure in the same way modern microservices are delivered at the application level.  The vision for MCP is that of a Continuously Delivered Cloud:

With a single platform for virtual machines, containers and bare metal
Delivered by a CI/CD pipeline
With continuous monitoring

Our rich OpenStack platform has been extended with a full Kubernetes distribution which together enables the deployment and orchestration of VMs, containers and bare metal together, all on the same cloud.  As containers become increasingly important as a means of microservices development and deployment, they can be managed within the same open cloud infrastructure.
Mirantis will update MCP on a continuous basis with a lifecycle determined in weeks, not years.  This allows for the rapid release and consumption of updates to the infrastructure in small increments as opposed to the large integrated releases necessitating the mega-project.  Your consumption is based on DriveTrain, the lifecycle management tool connecting your cloud to the innovation coming from Mirantis.  With DriveTrain you consume the technology at your desired pace, pushed through a CI/CD pipeline and tested in staging, then promoted into production deployment.  In the future, this will include new features and full upgrades performed non-disruptively in an automated fashion.  You will be able to take advantage of the latest innovations quickly, as opposed to waiting for the next infrastructure mega-project.
Operations Support Systems have always been paramount to successful IT delivery, and even more so in a distributed system based on a continuous lifecycle paradigm. StackLight is the OSS that is purpose-built for MCP and provides continuous monitoring to enable automated alerts with a goal of SLA compliance.  This is the same OSS used when your cloud is managed by Mirantis with our Mirantis Managed OpenStack (MMO) offering where we can deliver up to 99.99% SLA guarantees, or if you are managing MCP in-house with your own IT operations.  As part of our Build-Operate-Transfer model, we focus on operational training with StackLight such that post-transfer you are able to use the same in-place StackLight and same in-place standard operating procedures.
Finally!  Infrastructure software that can be consumed and managed in a modern approach just like microservices are consumed and managed at the application level.  Long live the new infrastructure!
To learn more about MCP, please sign up for our webinar on April 26. See you there!
The post Bring In the “New” Infrastructure Stack appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Let’s Meet At OpenStack Summit In Boston!

The post Let's Meet At OpenStack Summit In Boston! appeared first on Mirantis | Pure Play Open Cloud.

 
The citizens of Cloud City are suffering — Mirantis is here to help!
 
We're planning to have a super time at summit, and hope that you can join us in the fight against vendor lock-in. Come to booth C1 to power up on the latest technology and our revolutionary Mirantis Cloud Platform.

If you'd like to talk with our team at the summit, simply contact us and we'll schedule a meeting.

REQUEST A MEETING

 
Free Mirantis Training @ Summit
Take advantage of our special training offers to power up your skills while you're at the Summit! Mirantis Training will be offering an Accelerated Bootcamp session before the big event. Our courses will be conveniently held within walking distance of the Hynes Convention Center.

Additionally, we're offering a discounted Professional-level Certification exam and a free Kubernetes training, both held during the Summit.

 
Mirantis Presentations
Here's where you can find us during the summit…
 
MONDAY MAY 8

Monday, 12:05pm-12:15pm
Level: Intermediate
Turbo Charged VNFs at 40 gbit/s. Approaches to deliver fast, low latency networking using OpenStack.
(Gregory Elkinbard, Mirantis; Nuage)

Monday, 3:40pm-4:20pm
Level: Intermediate
Project Update - Documentation
(Olga Gusarenko, Mirantis)

Monday, 4:40pm-5:20pm
Level: Intermediate
Cinder Stands Alone
(Ivan Kolodyazhny, Mirantis)

Monday, 5:30pm-6:10pm
Level: Intermediate
m1.Boaty.McBoatface: The joys of flavor planning by popular vote
(Craig Anderson, Mirantis)

 

TUESDAY MAY 9

Tuesday, 2:00pm-2:40pm
Level: Intermediate
Proactive support and Customer care
(Anton Tarasov, Mirantis)

Tuesday, 2:30pm-2:40pm
Level: Advanced
OpenStack, Kubernetes and SaltStack for complete deployment automation
(Aleš Komárek and Thomas Lichtenstein, Mirantis)

Tuesday, 2:50pm-3:30pm
Level: Intermediate
OpenStack Journey: from containers to functions
(Ihor Dvoretskyi, Mirantis; Iron.io, BlueBox)

Tuesday, 4:40pm-5:20pm
Level: Advanced
Point and Click ->CI/CD: Real world look at better OpenStack deployment, sustainability, upgrades!
(Bruce Mathews and Ryan Day, Mirantis; AT&T)

Tuesday, 5:05pm-5:45pm
Level: Intermediate
Workload Onboarding and Lifecycle Management with Heat
(Florin Stingaciu and Lance Haig, Mirantis)

 

WEDNESDAY MAY 10

Wednesday, 9:50am-10:30am
Level: Intermediate
Project Update - Neutron
(Kevin Benton, Mirantis)

Wednesday, 11:00am-11:40am
Level: Intermediate
Project Update - Nova
(Jay Pipes, Mirantis)

Wednesday, 1:50pm-2:30pm
Level: Intermediate
Kuryr-Kubernetes: The seamless path to adding Pods to your datacenter networking
(Ilya Chukhnakov, Mirantis)

Wednesday, 1:50pm-2:30pm
Level: Intermediate
OpenStack: pushing to 5000 nodes and beyond
(Dina Belova and Georgy Okrokvertskhov, Mirantis)

Wednesday, 4:30pm-5:10pm
Level: Intermediate
Project Update - Rally
(Andrey Kurilin, Mirantis)

 

THURSDAY MAY 11

Thursday, 9:50am-10:30am
Level: Intermediate
OSprofiler: evaluating OpenStack
(Dina Belova, Mirantis; VMware)

Thursday, 11:00am-11:40am
Level: Intermediate
Scheduler Wars: A New Hope
(Jay Pipes, Mirantis)

Thursday, 11:30am-11:40am
Level: Beginner
Saving one cloud at a time with tenant care
(Bryan Langston, Mirantis; Comcast)

Thursday, 3:10pm-3:50pm
Level: Advanced
Behind the Scenes with Placement and Resource Tracking in Nova
(Jay Pipes, Mirantis)

Thursday, 5:00pm-5:40pm
Level: Intermediate
Terraforming OpenStack Landscape
(Mykyta Gubenko, Mirantis)

 

Notable Presentations By The Community
 
TUESDAY MAY 9

Tuesday, 11:15am-11:55am
Level: Intermediate
AT&T Container Strategy and OpenStack's role in it
(AT&T)

Tuesday, 11:45am-11:55am
Level: Intermediate
AT&T Cloud Evolution: Virtual to Container based (CI/CD)^2
(AT&T)

WEDNESDAY MAY 10

Wednesday, 1:50pm-2:30pm
Level: Intermediate
Event Correlation & Life Cycle Management – How will they coexist in the NFV world?
(Cox Communications)

Wednesday, 5:20pm-6:00pm
Level: Intermediate
Nova Scheduler: Optimizing, Configuring and Deploying NFV VNFs on OpenStack
(Wind River)

THURSDAY MAY 11

Thursday, 9:00am-9:40am
Level: Intermediate
ChatOpsing Your Production Openstack Cloud
(Adobe)

Thursday, 11:00am-11:10am
Level: Intermediate
OpenDaylight Network Virtualization solution (NetVirt) with FD.io VPP data plane
(Ericsson)

Thursday, 1:30pm-2:10pm
Level: Beginner
Participating in translation makes you an internationalized OpenStacker & developer
(Deutsche Telekom AG)

Thursday, 5:00pm-5:40pm
Level: Beginner
Future of Cloud Networking and Policy Automation
(Cox Communications)

The post Let's Meet At OpenStack Summit In Boston! appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Thoughts and Perspectives from OpenShift Commons Berlin

Last week the cloud native, containers and Kubernetes communities converged on Berlin, Germany for OpenShift Commons Gathering, CloudNativeCon and KubeCon. Berlin was the perfect location for this intersection of events because it is historically defined by its transition from the past to the present, and culturally by its diversity of activities and fields of knowledge. Berlin sits at […]
Source: OpenShift

Deploying 2048 OpenShift nodes on the CNCF Cluster

By Jeremy Eder, Senior Principal Software Engineer, Red Hat. Overview: The Cloud Native community has been incredibly busy since our last set of scaling tests on the CNCF cluster back in August. In particular, the Kubernetes (and by extension, OpenShift) communities have been hard at work pushing scalability to entirely new levels. As a significant […]
Source: OpenShift

Scaling with Kubernetes DaemonSets

The post Scaling with Kubernetes DaemonSets appeared first on Mirantis | Pure Play Open Cloud.
We're used to thinking about scaling from the point of view of a deployment; we want it to scale up under different conditions, so it looks for appropriate nodes, and puts pods on them. DaemonSets, on the other hand, take a different tack: any time you have a node that belongs to the set, it runs the pods you specify. For example, you might create a DaemonSet to tell Kubernetes that any time you create a node with the label app=webserver you want it to run Nginx. Let's take a look at how that works.
Creating a DaemonSet
Let's start by looking at a sample YAML file to define a Daemon Set:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
 name: frontend
spec:
 template:
   metadata:
     labels:
       app: frontend-webserver
   spec:
     nodeSelector:
       app: frontend-node
     containers:
        - name: webserver
          image: nginx
          ports:
          - containerPort: 80
Here we're creating a DaemonSet called frontend. As with a ReplicationController, pods launched by the DaemonSet are given the label specified in the spec.template.metadata.labels property, in this case app=frontend-webserver.
The template.spec itself has two important parts: the nodeSelector and the containers.  The containers are fairly self-evident (see our discussion of ReplicationControllers if you need a refresher) but the interesting part here is the nodeSelector.
The nodeSelector tells Kubernetes which nodes are part of the set and should run the specified containers. In other words, these pods are deployed automatically; there's no input at all from the scheduler, so the schedulability of a node isn't taken into account. On the other hand, Daemon Sets are a great way to deploy pods that need to be running before other objects.
Let's go ahead and create the Daemon Set. Create a file called ds.yaml with the definition in it and run the command:
$ kubectl create -f ds.yaml
daemonset "frontend" created
Now let's see the Daemon Set in action.
Scaling capacity using a DaemonSet
If we check to see if the pods have been deployed, we'll see that they haven't:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
That's because we don't yet have any nodes that are part of our DaemonSet. If we look at the nodes we do have…
$ kubectl get nodes
NAME        STATUS    AGE
10.0.10.5   Ready     75d
10.0.10.7   Ready     75d
We can go ahead and add at least one of them by adding the app=frontend-node label:
$ kubectl label node 10.0.10.5 app=frontend-node
node "10.0.10.5" labeled
Now if we get a list of pods again…
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          19s
We can see that the pod was started without us taking any additional action.  
Now we have a single webserver running.  If we wanted to scale up, we could simply add our second node to the Daemon Set:
$ kubectl label node 10.0.10.7 app=frontend-node
node "10.0.10.7" labeled
If we check the list of pods again, we can see that a new one was automatically started:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          1m
frontend-rp9bu              1/1       Running   0          35s
If we remove a node from the DaemonSet, any related pods are automatically terminated:
$ kubectl label node 10.0.10.5 --overwrite app=backend
node "10.0.10.5" labeled

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-rp9bu              1/1       Running   0          1m
Updating Daemon Sets, and improvements in Kubernetes 1.6
OK, so how do we update a running DaemonSet? Well, as of Kubernetes 1.5, the answer is "you don't." Currently, it's possible to change the template of a DaemonSet, but it won't affect the pods that are already running.
Starting in Kubernetes 1.6, however, you will be able to do rolling updates with Kubernetes DaemonSets. You'll have to set the updateStrategy, as in:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
 name: frontend
spec:
 updateStrategy:
   type: RollingUpdate
   rollingUpdate:
     maxUnavailable: 1
 minReadySeconds: 0
 template:
   metadata:
     labels:
       app: frontend-webserver
   spec:
     nodeSelector:
       app: frontend-node
     containers:
        - name: webserver
          image: nginx
          ports:
          - containerPort: 80
Once you've done that, you can make changes and they'll propagate to the running pods. For example, you can change the image on which the containers are based:
$ kubectl set image ds/frontend webserver=httpd
If you want to make more substantive changes, you can edit or patch the Daemon Set:
kubectl edit ds/frontend
or
kubectl patch ds/frontend -p "$(cat ds-changes.yaml)"
(Obviously you would use your own DaemonSet names and files!)
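If you'd rather patch inline than from a file, here's a quick sketch; the patch content is only an illustration, and because containers are merged by name it changes nothing but the webserver container's image:
$ kubectl patch ds/frontend -p '{"spec":{"template":{"spec":{"containers":[{"name":"webserver","image":"httpd"}]}}}}'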
So that's the basics of working with DaemonSets. What else would you like to learn about them? Let us know in the comments below.
The post Scaling with Kubernetes DaemonSets appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Meet the winners of the Holberton School and Docker hackathon

The last weekend in February, Holberton School and Docker held a joint Docker Hackathon where current students spent 24 hours making cool Docker hacks. Students were joined by Docker mentors who helped them along the way in addition to serving as judges for the final products. 

Here are some highlights from the hackathon.
Third place goes to… Julien, a personal assistant built with Docker and Alexa by Bobby and Larry
In their own words:

After discussing a few ideas, we settled on the idea of doing a Docker/Alexa integration that would abstract away repetitive command line interactions, allowing the user/developer to check the state of her Docker containers, and easily deploy them to production, only using voice commands. Hands free, we would prompt Alexa to interact with our Docker images and containers in various ways (ex1: “spin up image file x on server y”, “list all running containers on server z”, “deploy image a from server x to server y”) and Alexa would do it.
The main technical hurdle of the project was securely communicating between Alexa and our running VMs. To do this we used the Java JSch library. This class gave us the ability to programmatically shell into our virtual machines, run commands and receive the output remotely from the VM. Here is a basic diagram of the data flow: voice command → Alexa intent interpreter (running on AWS Lambda) selects a bash script executing Docker commands with variables passed in → JSch opens an ssh session into the selected VM and runs the script → the script's output returns via JSch → Alexa interprets the returned output and gives an audio message declaring success or other output as appropriate.
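As a rough illustration of the "bash script executing Docker commands with variables passed in" step, a script like the one below could sit on the VM; the name, arguments, and commands here are hypothetical, not taken from the team's code:
#!/bin/sh
# deploy.sh: invoked over SSH by the Alexa skill's Lambda handler
# Usage: deploy.sh <image> <container-name>
IMAGE="$1"
NAME="$2"
# Replace any existing container with the same name, then start the new one
docker rm -f "$NAME" 2>/dev/null
docker run -d --name "$NAME" "$IMAGE"
# Print running containers so the skill can read the result back to the user
docker ps --format '{{.Names}}: {{.Status}}'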

Second place goes to… Call Me Moby — An SMS Container Management App by Corbin Coleman and Jennie Chu
Call Me Moby works in the following ways:
1. Your docker command text is received by our web server as an HTTP POST request
2. The Twilio API interprets this request and reads your message as text
3. This text is then parsed and used to call the Docker Engine API to perform your operation (see the sketch after this list)
4. We then send back our response to the web server, often including a text message reply with the necessary return statements
5. Finally our message will be sent and received by our phone.
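For a sense of what step 3 involves under the hood, this is roughly what the Docker Engine API call behind a command like docker ps looks like against the local Unix socket (a sketch, not code from the app):
# List running containers via the Docker Engine API
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json
# Add all=1 to include stopped containers as well
$ curl --unix-socket /var/run/docker.sock "http://localhost/containers/json?all=1"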
How Can You Use it? Grab the Call Me Moby image from Docker Hub.
In their own words:
Our app.py file contains the bulk of the application, handling incoming HTTP requests, maintaining our web server, and utilizing both the Docker Engine and Twilio APIs. Try running python3 app.py and open localhost in your favorite web browser!
Unfortunately, the current app only runs in the local environment, and in order for our server to receive the HTTP request, we had to use the ngrok tunneling service. Ngrok provides a localhost tunnel so that outside services can get access to our local development environment. After installation, run ngrok locally using ./ngrok http 5000 to create your forwarding address. You can also copy and paste your forwarding address into your web browser and see that now any machine can have access to our local environment. Assuming you have a Twilio account and phone number, just copy and paste your forwarding address into your Twilio phone number management console. From there, run your app.py and start texting and managing!

And the winner is… (drum roll please): HMS (Honeypot Management System) by Holden Grissett and Tim Britton
In their own words:

HMS (Honeypot Management System, also a great naval pun) is a honeypot server custom-tailored to make use of the modularity of containers for extensibility and security. We adapted the honeypot server for use in swarm mode to demonstrate the use of container-based honeypots at scale in swarm mode. This system allows us to easily scale up data collection for security research.

HMS currently includes a server to mimic an insecure telnet service, made for the hackathon. Upon connection to the server, a container is spun up for each client. The client's input is parsed and can either be sent directly to the container and the response sent directly back to the client (to give the illusion that they're directly inside the container), or commands can have pre-scripted responses, or be blocked entirely for security. It's currently set up to mimic a Busybox installation, but with minor tweaking could easily emulate any image on Docker Store! At present it easily passes tests made by the Mirai and Hajime botnets. When these bots seemingly successfully download their malware and exit the server, the container is checked for differences and any downloaded or created files are tar'd and saved for logging purposes.
Going forward, we are extending our functionality to make deploying honeypot images in swarm mode faster and easier. We would also like to extend functionality to existing honeypots and create more of our own container-based honeypots.
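The "checked for differences" and archiving steps described above map onto standard Docker commands; as a rough sketch (the container name here is hypothetical):
# Spin up a throwaway container for one client session
$ docker run -d --name client-42 busybox sleep 3600
# ... the client interacts with the emulated shell ...
# List filesystem changes the client made inside the container
$ docker diff client-42
# Archive the container's filesystem for later analysis, then clean up
$ docker export client-42 | gzip > client-42.tar.gz
$ docker rm -f client-42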

Get involved with the Docker Community:

Join the Docker Community Directory and Slack
Join your local Docker Meetup Group
Join our Docker Online meetup

The post Meet the winners of the Holberton School and Docker hackathon appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

OpenShift Debugging 101

I've been working with the radanalytics.io big data examples for OpenShift recently and every once in a while I would be on a slow network and plagued with inconsistency in deploys to get the entire example running. I finally reached out for help and got some great debugging advice so I wanted to share some […]
Source: OpenShift

Docker Turns 4: Mentorship, Pi, Moby Mingle and Moar

In case you missed it, this week we're celebrating Docker's 4th Birthday with meetup celebrations all over the world (check out #dockerbday on Twitter). This feels like the right time to look back at the past 4 years and reflect on what makes the Docker Community so unique and vibrant: people, values, mentorship and learning opportunities. You can read our own Jérôme Petazzoni's blog post for a more technical retrospective.
Managing an open source project at that scale and preserving a healthy community doesn’t come without challenges. Last year, Arnaud Porterie wrote a very interesting 3-part series blog post on open source at Docker covering the different challenges associated with the People, the Process and the Tooling and Automation. The most important aspect of all being the people.
Respect, fairness and openness are essential values required to create a welcoming environment for professionals and hobbyists alike. In that spirit, we’ve launched a scholarship program and partnerships in an attempt to improve opportunities for underrepresented groups in the tech industry while helping the Docker Community become more diverse. If you’re interested in this topic, we’re fortunate enough to have Austin area high school student Kate Hirschfeld presenting at DockerCon on Diversity in the face of adversity.
But what really makes the Docker community so special is all of the passionate contributors who work tremendously hard to submit pull requests, file GitHub issues, organize meetups, give talks at conferences, write blog posts or record Docker tips videos.
Leadership, mentorship, contribution and collaboration play a massive role in the development of the Docker Community and container ecosystem. Through the organization of the Docker Mentor Week last year or a Docker Mentor Summit at DockerCon 2017, we’re always trying to emulate the community and encourage more advanced users to share their knowledge with newcomers.
A great example of leadership and mentorship in the Docker Community is Docker Captain Alex Ellis. We could not write a blog post on Pi Day without mentioning Alex and the awesome work he does around Docker and Raspberry Pi. In addition to sharing his knowledge through blog posts and videos, Alex is actively inspiring and mentoring younger folks such as Finnian Anderson. Alex’s support and advocacy got Finnian invited to DockerCon 2017 to give a demo of a Raspberry Pi-driven hardware gauge to monitor a Docker Swarm in real time.

If you're pumped about all the things you learn and all the people you meet at Docker events, you're going to love what we have planned for you at this year's DockerCon! We're giving everyone at DockerCon access to a tool called Moby Mingle to connect with people who share the same Docker use cases, topics of interest or hack ideas, or even your favorite TV shows. So no matter where you're traveling from or how many people you know before the conference, we will make sure you end up feeling at home!

Register for DockerCon 2017 
   

  


The post Docker Turns 4: Mentorship, Pi, Moby Mingle and Moar appeared first on Docker Blog.
Source: https://blog.docker.com/feed/