Apply for a Docker Scholarship and learn how to code!

Today, Docker is proud to announce the launch of the Docker Scholarship Program in partnership with Reactor Core to improve opportunities for underrepresented groups in the tech industry! With the help of the community, we surpassed our goal for the DockerCon 2016 Bump Up Challenge, unlocking $50,000 to fund three full-tuition scholarships.

The Docker Scholarship Program is part of our continued work to improve opportunities for women and underrepresented groups throughout the global Docker ecosystem and encourage inclusivity in the larger tech community.
Docker’s Goal
The goal of the Docker scholarship program is to strengthen the broader tech community by making it more diverse and inclusive to traditionally underrepresented groups. We aim to achieve that goal by providing financial support and mentorship to three students at Reactor Core’s partner schools, Hack Reactor and Telegraph Academy.
Our Partnership with Hack Reactor and Telegraph Academy
Docker believes in the power of innovation and pushing our current technological boundaries. As a driver of innovation, we embrace our role in  advancing opportunities for underrepresented groups in the tech industry. Hack Reactor and Telegraph Academy share in our vision of empowering people and creating more opportunities for every member of our community. We are inspired by their commitment to improving the status quo for underrepresented groups in the tech industry.
Available Scholarships:
2016 Scholarships:
Telegraph Academy November Cohort
Complete the Docker Scholarship application and apply to Telegraph Academy’s bootcamp. Applications will be reviewed and applicants who are accepted into the Telegraph Academy program and meet Docker’s criteria will be invited to Docker HQ for a panel interview with Docker team members. Scholarships will be awarded based on acceptance to Telegraph Academy program, demonstration of personal financial need and quality of the responses to the Docker Scholarship application.
Apply here
Hack Reactor October Cohort
Complete the Docker Scholarship application and apply to Hack Reactor’s bootcamp. Applications will be reviewed and applicants who are accepted into the Hack Reactor program and meet Docker’s criteria will be invited to Docker HQ for a panel interview with Docker team members. Scholarships will be awarded based on acceptance to Hack Reactor program, demonstration of personal financial need and quality of the responses to the Docker Scholarship application.
As women are traditionally underrepresented in the tech industry, we have a strong preference to award this scholarship to a self-identified woman. However, we encourage all to apply, as there may be additional opportunities available.
Apply here
 
2017 Scholarships:
Telegraph Academy February Cohort
Stay tuned for Telegraph Academy’s February 2017 cohort application.
Visit the Docker Scholarship page to learn more about each scholarship and the partner programs.
 
Want to help Docker with these initiatives?
We’re always happy to connect with other folks or companies who want to improve opportunities for women and underrepresented groups throughout the global Docker ecosystem and promote diversity in the larger tech community.
If you or your organization are interested in getting more involved, please contact us at events@docker.com. With your help, we are excited to take these initiatives to the next level!
The post Apply for a Docker Scholarship and learn how to code! appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Kubernetes Namespaces: use cases and insights

“Who’s on first, What’s on second, I Don’t Know’s on third” (Who’s on First? by Abbott and Costello)

Introduction
Kubernetes is a system with several concepts. Many of these concepts get manifested as “objects” in the RESTful API (often called “resources” or “kinds”). One of these concepts is Namespaces. In Kubernetes, Namespaces are the way to partition a single Kubernetes cluster into multiple virtual clusters. In this post we’ll highlight examples of how our customers are using Namespaces.

But first, a metaphor: Namespaces are like human family names. A family name, e.g. Wong, identifies a family unit. Within the Wong family, one of its members, e.g. Sam Wong, is readily identified as just “Sam” by the family. Outside of the family, and to avoid “Which Sam?” problems, Sam would usually be referred to as “Sam Wong”, perhaps even “Sam Wong from San Francisco”.

Namespaces are a logical partitioning capability that enables one Kubernetes cluster to be used by multiple users, teams of users, or a single user with multiple applications, without concern for undesired interaction. Each user, team of users, or application may exist within its Namespace, isolated from every other user of the cluster and operating as if it were the sole user of the cluster. (Furthermore, Resource Quotas provide the ability to allocate a subset of a Kubernetes cluster’s resources to a Namespace.)

For all but the most trivial uses of Kubernetes, you will benefit by using Namespaces. In this post, we’ll cover the most common ways that we’ve seen Kubernetes users on Google Cloud Platform use Namespaces, but our list is not exhaustive and we’d be interested to learn other examples from you.

Use cases covered:
- Roles and responsibilities in an enterprise for namespaces
- Partitioning landscapes: dev vs. test vs. prod
- Customer partitioning for non-multi-tenant scenarios
- When not to use namespaces

Use case: Roles and Responsibilities in an Enterprise
A typical enterprise contains multiple business/technology entities that operate independently of each other, with some form of overarching layer of controls managed by the enterprise itself. Operating Kubernetes clusters in such an environment can be done effectively when roles and responsibilities pertaining to Kubernetes are defined. Below are a few recommended roles and their responsibilities that can make managing Kubernetes clusters in a large organization easier.

Designer/Architect role: This role defines the overall namespace strategy, taking into account product/location/team/cost-center and determining how best to map these to Kubernetes Namespaces. Investing in such a role prevents namespace proliferation and “snowflake” Namespaces.

Admin role: This role has admin access to all Kubernetes clusters. Admins can create/delete clusters and add/remove nodes to scale the clusters. This role is responsible for patching, securing and maintaining the clusters, as well as implementing quotas between the different entities in the organization. The Kubernetes Admin is responsible for implementing the namespace strategy defined by the Designer/Architect.

These two roles and the actual developers using the clusters will also receive support and feedback from the enterprise security and network teams on issues such as security isolation requirements and how namespaces fit this model, or assistance with networking subnets and load-balancer setup.

Anti-patterns:
- Isolated Kubernetes usage “islands” without centralized control: Without the initial investment in establishing a centralized control structure around Kubernetes management, there is a risk of ending up with a “mushroom farm” topology, i.e. no defined size/shape/structure of clusters within the org. The result is a cluster estate that is difficult to manage, with higher risk and elevated cost due to underutilization of resources.
- Old-world IT controls choking usage and innovation: A common tendency is to try to transpose existing on-premises controls/procedures onto new dynamic frameworks. This weighs down the agile nature of these frameworks and nullifies the benefits of rapid dynamic deployments.
- Omni-cluster: Delaying the effort of creating the structure/mechanism for namespace management can result in one large omni-cluster that is hard to peel back into smaller usage groups.

Use case: Using Namespaces to partition development landscapes
Software development teams customarily partition their development pipelines into discrete units. These units take various forms and use various labels but will tend to result in a discrete dev environment, a testing/QA environment, possibly a staging environment and finally a production environment. The resulting layouts are ideally suited to Kubernetes Namespaces: each environment or stage in the pipeline becomes a unique namespace.

The above works well as each namespace can be templated and mirrored to the next subsequent environment in the dev cycle, e.g. dev -> qa -> prod. The fact that each namespace is logically discrete allows the development teams to work within an isolated “development” namespace. DevOps (the closest role at Google is called Site Reliability Engineering, “SRE”) will be responsible for migrating code through the pipelines and ensuring that appropriate teams are assigned to each environment. Ultimately, DevOps is solely responsible for the final, production environment where the solution is delivered to the end-users.

A major benefit of applying namespaces to the development cycle is that the naming of software components (e.g. micro-services/endpoints) can be maintained without collision across the different environments. This is due to the isolation of the Kubernetes namespaces: e.g. serviceX in dev would be referred to as such across all the other namespaces; but, if necessary, it could be uniquely referenced using its fully qualified name serviceX.development.mycluster.com in the development namespace of mycluster.com.

Anti-patterns:
- Abusing the namespace benefit, resulting in unnecessary environments in the development pipeline. So, if you don’t do staging deployments, don’t create a “staging” namespace.
- Overcrowding namespaces, e.g. having all your development projects in one huge “development” namespace. Since namespaces attempt to partition, use them to partition by your projects as well. Since Namespaces are flat, you may wish something similar to projectA-dev and projectA-prod as projectA’s namespaces.

Use case: Partitioning of your Customers
If you are, for example, a consulting company that wishes to manage separate applications for each of your customers, the partitioning provided by Namespaces aligns well. You could create a separate Namespace for each customer, customer project or customer business unit to keep these distinct while not needing to worry about reusing the same names for resources across projects.

An important consideration here is that Kubernetes does not currently provide a mechanism to enforce access controls across namespaces, so we recommend that you do not expose applications developed using this approach externally.

Anti-patterns:
- Multi-tenant applications don’t need the additional complexity of Kubernetes namespaces, since the application is already enforcing this partitioning.
- Inconsistent mapping of customers to namespaces. For example, if you win business at a global corporation, you may initially consider one namespace for the enterprise, not taking into account that this customer may prefer further partitioning, e.g. BigCorp Accounting and BigCorp Engineering. In this case, the customer’s departments may each warrant a namespace.

When Not to use Namespaces
In some circumstances Kubernetes Namespaces will not provide the isolation that you need. This may be due to geographical, billing or security factors. For all the benefits of the logical partitioning of namespaces, there is currently no ability to enforce the partitioning. Any user or resource in a Kubernetes cluster may access any other resource in the cluster regardless of namespace. So, if you need to protect or isolate resources, the ultimate namespace is a separate Kubernetes cluster against which you may apply your regular security/ACL controls.

Another time you may consider not using namespaces is when you wish to reflect a geographically distributed deployment. If you wish to deploy close to US, EU and Asia customers, a Kubernetes cluster deployed locally in each region is recommended.

When fine-grained billing is required, perhaps to charge back by cost-center or by customer, the recommendation is to leave the billing to your infrastructure provider. For example, on Google Cloud Platform (GCP), you could use a separate GCP Project or Billing Account and deploy a Kubernetes cluster to a specific customer’s project(s).

In situations where confidentiality or compliance require complete opaqueness between customers, a Kubernetes cluster per customer/workload will provide the desired level of isolation. Once again, you should delegate the partitioning of resources to your provider.

Work is underway to provide (a) ACLs on Kubernetes Namespaces to be able to enforce security and (b) Kubernetes Cluster Federation. Both mechanisms will address the reasons for the separate Kubernetes clusters in these anti-patterns.

An easy-to-grasp anti-pattern for Kubernetes namespaces is versioning. You should not use Namespaces as a way to disambiguate versions of your Kubernetes resources. Support for versioning is present in containers and container registries as well as in the Kubernetes Deployment resource. Multiple versions should coexist by utilizing the Kubernetes container model, which also provides for automatic migration between versions with Deployments. Furthermore, version-scoped namespaces would cause massive proliferation of namespaces within a cluster, making it hard to manage.

Caveat Gubernator
You may wish to, but you cannot, create a hierarchy of namespaces. Namespaces cannot be nested within one another. You can’t, for example, create my-team.my-org as a namespace, but could perhaps have team-org.

Namespaces are easy to create and use, but it’s also easy to deploy code inadvertently into the wrong namespace. Good DevOps hygiene suggests documenting and automating processes where possible, and this will help. The other way to avoid using the wrong namespace is to set a kubectl context.

As mentioned previously, Kubernetes does not (currently) provide a mechanism to enforce security across Namespaces. You should only use Namespaces within trusted domains (e.g. internal use), and not use Namespaces when you need to be able to provide guarantees that a user of the Kubernetes cluster or one of its resources is unable to access any of the other Namespaces’ resources. This enhanced security functionality is being discussed in the Kubernetes Special Interest Group for Authentication and Authorization; get involved at SIG-Auth.

–Mike Altarace & Daz Wilkin, Strategic Customer Engineers, Google Cloud Platform

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
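As a concrete illustration of the per-environment partitioning and Resource Quotas discussed above, a namespace and a quota can be declared with manifests like the following. The names and limits here are illustrative assumptions, not from the post (note that namespace names must be lowercase DNS labels, so the post's "projectA-dev" becomes "project-a-dev"):

apiVersion: v1
kind: Namespace
metadata:
  name: project-a-dev
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-a-dev-quota
  namespace: project-a-dev
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi

After applying these with kubectl create -f, setting a kubectl context pinned to the namespace (e.g. kubectl config set-context dev --namespace=project-a-dev) is one way to avoid the "deployed into the wrong namespace" mistake the Caveat Gubernator section warns about.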
Quelle: kubernetes

5 Minutes with the Docker Captains

Captain is a distinction that Docker awards select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others. Captains are Docker ambassadors (not Docker employees) and their genuine love of all things Docker has a huge impact on the Docker community. Whether they are blogging, writing books, speaking, running workshops, creating tutorials and classes, offering support in forums, or organizing and contributing to local events, they make Docker’s mission of democratizing technology possible. Whether you are new to Docker or have been a part of the community for a while, please don’t hesitate to reach out to Docker Captains with your challenges, questions, speaking requests and more.

This week we are highlighting 3 of our outstanding Captains who made August one filled with Docker learnings and events. Read on to learn more about how they got started, what they love most about Docker, and why Docker.
While Docker does not accept applications for the Captains program, we are always on the lookout for additional leaders who inspire and educate the Docker community. If you are interested in becoming a Docker Captain, we need to know how you are giving back. Sign up for community.docker.com, share your Docker activities on social media, get involved in a local meetup as a speaker or organizer, and continue to share your knowledge of Docker in your community.
 
Brian Christner
 
Brian Christner is a Cloud Advocate for Swisscom, a Switzerland-based telecom, where they are busy deploying a large Docker infrastructure. Brian is passionate about Linux, Docker, or anything with a .IO domain name, and regularly contributes Docker articles and GitHub projects.
 
How has Docker impacted what you do on a daily basis?
3 years ago Docker was still a relatively new concept to my coworkers and customers. Today, I would say that over 50% of the meetings I attend are about Docker, containers or technologies surrounding the Docker ecosystem. We recently integrated Docker image support into our Application Cloud which was a huge success. Docker continues to power our Services platform for Application Cloud where we are busy adding more services all the time like MongoDB, Redis, RabbitMQ and ELK as a service.
As a Docker Captain, how do you share your learnings with the community?
I keep quite busy building new Docker projects, researching, presenting at meetups and publishing articles to https://www.brianchristner.io. I’m also one of the maintainers of the Awesome Docker List which is a collection of Docker resources and projects.  If you have a good project or resource, please submit it so the community can benefit. I also contribute regularly to https://www.reddit.com/r/docker
Are you working on any fun projects?
Currently I’m building a Docker Swarm version of https://github.com/vegasbrianc/prometheus
Who are you when you’re not online?
When I’m not online you can find me in the Swiss Alps Hiking, mountain biking or skiing with my wife and son.
 
Viktor Farcic
 
Viktor Farcic is a Senior Consultant at CloudBees. His big passions are Microservices, Continuous Integration, Delivery and Deployment (CI/CD) and Test-Driven Development (TDD). He wrote The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices and the Test-Driven Java Development books. His random thoughts and tutorials can be found on his blog TechnologyConversations.com.
 
 
How has Docker impacted what you do on a daily basis?
Almost everything I do today involves Docker one way or another. The code I wrote is compiled through containers (since I bought my last laptop, I do not even have most of my build tools installed). Tests I run are inside containers. Services and applications are packaged and deployed as containers. Servers I used for development and testing are substituted with containers running on my laptop. The list can go on and on. In my case, Docker is everywhere.
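A minimal sketch of the container-based build workflow Viktor describes, compiling code without any local toolchain. The image tag, paths, and helper name are illustrative assumptions, not from the interview; the DRY_RUN switch just prints the docker command so the helper can be exercised without Docker installed:

```shell
# Hypothetical helper: build a Go project inside a container so no local
# build tools are needed. With DRY_RUN=1, print the command instead of running it.
containerized_build() {
  src_dir="$1"
  cmd="docker run --rm -v ${src_dir}:/src -w /src golang:1.7 go build ./..."
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}

# Show what would run for the current directory.
DRY_RUN=1 containerized_build "$PWD"
```

The same pattern works for tests or packaging: swap the image and the trailing command, and the host only ever needs Docker.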
What is a common technology question you’re asked and the high-level explanation?
How do we put things into containers without changing anything else? My answer is always the same. Docker is not only a tool but a new way to approach many different software development aspects. If we are to leverage Docker’s full potential, many things need to change. Architecture, team structure, processes, and so on.
Share a random story with us.
When I was young, I almost became an archaeologist. Being in the same profession as Indiana Jones was a much better way to attract girls than being a geek. Eventually, my geeky side won and I went back to computers.
If you could switch your job with anyone else, whose job would you want?
It would be Jérôme Petazzoni. He looks like someone who truly enjoys his work (apart from being great at it).
 
Chanwit Kaewkasi
 
Chanwit is an Asst. Professor at  Suranaree University of Technology and a Docker Swarm Maintainer. Chanwit ported Swarm to Windows and developed a number of Swarm features in the early (v0.1) days. He serves as a Technical Cloud Adviser to many companies in Thailand, where they have been setting up Swarm clusters for their production environments.
 
How has Docker impacted what you do on a daily basis?
I’m teaching and co-running a research laboratory at Suranaree University of Technology (SUT) in Thailand. Basically, Docker is the major part of our Large-Scale Software Engineering research ecosystem there. We use Docker as the infrastructure layer of every system we build, ranging from low-power storage clusters and bare-metal computing clouds to upgradable IoT devices at scale.
To make the research progress, we need to understand how Docker and its clustering system work. This resulted in the recent 2000-node crowd-sourced Docker cluster project, SwarmZilla (formerly known as Swarm2K), in July.
As a Docker Captain, how do you share that learning with the community?
Together with members of the Docker community, we did scaling tests on the July Swarm2K cluster and provided feedback to the Docker Engineering team so they could use the data collected from the experiments to improve Docker Engine. I blogged about Docker and the Swarm2K project and other things at http://medium.com/@chanwit.
 
The post 5 Minutes with the Docker Captains appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

SIG Apps: build apps for and operate them in Kubernetes

Editor’s note: This post is by the Kubernetes SIG-Apps team, sharing how they focus on the developer and devops experience of running applications in Kubernetes.

Kubernetes is an incredible manager for containerized applications. Because of this, numerous companies have started to run their applications in Kubernetes.

Kubernetes Special Interest Groups (SIGs) have been around to support the community of developers and operators since around the 1.0 release. People organized around networking, storage, scaling and other operational areas.

As Kubernetes took off, so did the need for tools, best practices, and discussions around building and operating cloud native applications. To fill that need the Kubernetes SIG Apps came into existence.

SIG Apps is a place where companies and individuals can:
- see and share demos of the tools being built to enable app operators
- learn about and discuss the needs of app operators
- organize around efforts to improve the experience

Since the inception of SIG Apps we’ve had demos of projects like KubeFuse, KPM, and StackSmith. We’ve also executed a survey of those operating apps in Kubernetes. From the survey results we’ve learned a number of things, including:
- 81% of respondents want some form of autoscaling.
- To store secret information, 47% of respondents use built-in Secrets. At rest these are not currently encrypted. (If you want to help add encryption, there is an issue for that.)
- The questions with the most responses had to do with 3rd-party tools and debugging.
- For 3rd-party tools to manage applications there were no clear winners; there is a wide variety of practices.
- There was an overall complaint about a lack of useful documentation. (Help contribute to the docs here.)

There’s a lot of data. Many of the responses were optional, so we were surprised that 93.5% of all questions across all candidates were filled in. If you want to look at the data yourself, it’s available online.

When it comes to application operation there’s still a lot to be figured out and shared. If you’ve got opinions about running apps, tooling to make the experience better, or just want to lurk and learn about what’s going on, please come join us:
- Chat with us on the SIG-Apps Slack channel
- Email us at the SIG-Apps mailing list
- Join our open meetings: weekly at 9AM PT on Wednesdays, full details here.

–Matt Farina, Principal Engineer, Hewlett Packard Enterprise
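For context on the survey's "built-in Secrets" answer above, a minimal Secret manifest looks like the following (the name and values are illustrative). Note that the data fields are base64-encoded, which is an encoding, not encryption; this is exactly why the unencrypted-at-rest caveat matters:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=      # base64("admin")
  password: cGFzc3dvcmQ=  # base64("password")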
Quelle: kubernetes

Create a Couchbase cluster using Kubernetes

Editor’s note: today’s guest post is by Arun Gupta, Vice President, Developer Relations at Couchbase, showing how to set up a Couchbase cluster with Kubernetes.

Couchbase Server is an open source, distributed NoSQL document-oriented database. It exposes a fast key-value store with managed cache for sub-millisecond data operations, purpose-built indexers for fast queries and a query engine for executing SQL queries. For mobile and Internet of Things (IoT) environments, Couchbase Lite runs native on-device and manages sync to Couchbase Server.

Couchbase Server 4.5 was recently announced, bringing many new features, including production-certified support for Docker. Couchbase is supported on a wide variety of orchestration frameworks for Docker containers, such as Kubernetes, Docker Swarm and Mesos; for full details visit this page.

This blog post will explain how to create a Couchbase cluster using Kubernetes. This setup is tested using Kubernetes 1.3.3, Amazon Web Services, and Couchbase 4.5 Enterprise Edition.

Like all good things, this post is standing on the shoulders of giants. The design pattern used in this blog was defined in a Friday afternoon hack with @saturnism. A working version of the configuration files was contributed by @r_schmiddy.

Couchbase Cluster
A cluster of Couchbase Servers is typically deployed on commodity servers. Couchbase Server has a peer-to-peer topology where all the nodes are equal and communicate with each other on demand. There is no concept of master nodes, slave nodes, config nodes, name nodes, head nodes, etc., and all the software loaded on each node is identical. This allows nodes to be added or removed without considering their “type”. This model works particularly well with cloud infrastructure in general.
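Because all nodes are identical, growing the cluster boils down to "add node, rebalance". That flow can be sketched with Couchbase's couchbase-cli tool; in this sketch the commands are only echoed, not executed, and the host names and credentials are placeholders (the exact flags may vary by Couchbase version):

```shell
# Sketch of the manual add-node/rebalance flow; commands are echoed so the
# sketch can run without couchbase-cli or live nodes.
MASTER="master-node:8091"
CREDS="-u Administrator -p password"

add_node() {
  # Add a worker to the cluster via the master node
  echo "couchbase-cli server-add -c $MASTER $CREDS --server-add=$1:8091"
}

rebalance() {
  # Redistribute data across all nodes in the cluster
  echo "couchbase-cli rebalance -c $MASTER $CREDS"
}

for w in worker-1 worker-2; do
  add_node "$w"
done
rebalance
```

The Kubernetes setup below bakes this same logic into the container image's configuration script, driven by environment variables.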
For Kubernetes, this means that we can use the exact same container image for all Couchbase nodes.

A typical Couchbase cluster creation process looks like:
1. Start Couchbase: Start n Couchbase servers
2. Create cluster: Pick any server, and add all other servers to it to create the cluster
3. Rebalance cluster: Rebalance the cluster so that data is distributed across the cluster

In order to automate this with Kubernetes, the cluster creation is split into a “master” and a “worker” Replication Controller (RC). The master RC has only one replica and is also published as a Service. This provides a single reference point to start the cluster creation. By default services are visible only from inside the cluster, so this service is also exposed as a load balancer. This allows the Couchbase Web Console to be accessible from outside the cluster.

The worker RC uses the exact same image as the master RC. This keeps the cluster homogeneous, which allows the cluster to scale easily.

Configuration files used in this blog are available here. Let’s create the Kubernetes resources to create the Couchbase cluster.

Create Couchbase “master” Replication Controller
The Couchbase master RC can be created using the following configuration file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-master-rc
spec:
  replicas: 1
  selector:
    app: couchbase-master-pod
  template:
    metadata:
      labels:
        app: couchbase-master-pod
    spec:
      containers:
      - name: couchbase-master
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: MASTER
        ports:
        - containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: couchbase-master-service
  labels:
    app: couchbase-master-service
spec:
  ports:
  - port: 8091
  selector:
    app: couchbase-master-pod
  type: LoadBalancer

This configuration file creates a couchbase-master-rc Replication Controller. This RC has one replica of the pod created using the arungupta/couchbase:k8s image.
This image is created using the Dockerfile here. This Dockerfile uses a configuration script to configure the base Couchbase Docker image. First, it uses the Couchbase REST API to set up the memory quota; set up index, data and query services; set security credentials; and load a sample data bucket. Then, it invokes the appropriate Couchbase CLI commands to add the Couchbase node to the cluster, or to add the node and rebalance the cluster. This is based upon three environment variables:

TYPE: Defines whether the joining pod is worker or master
AUTO_REBALANCE: Defines whether the cluster needs to be rebalanced
COUCHBASE_MASTER: Name of the master service

For this first configuration file, the TYPE environment variable is set to MASTER and so no additional configuration is done on the Couchbase image.

Let’s create and verify the artifacts.

Create the Couchbase master RC:

kubectl create -f cluster-master.yml
replicationcontroller "couchbase-master-rc" created
service "couchbase-master-service" created

List all the services:

kubectl get svc
NAME                       CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
couchbase-master-service   10.0.57.201                 8091/TCP   30s
kubernetes                 10.0.0.1      <none>        443/TCP    5h

The output shows that couchbase-master-service is created.

Get all the pods:

kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
couchbase-master-rc-97mu5   1/1       Running   0          1m

A pod is created using the Docker image specified in the configuration file.

Check the RC:

kubectl get rc
NAME                  DESIRED   CURRENT   AGE
couchbase-master-rc   1         1         1m

It shows that the desired and current number of pods in the RC are matching.

Describe the service:

kubectl describe svc couchbase-master-service
Name:                  couchbase-master-service
Namespace:             default
Labels:                app=couchbase-master-service
Selector:              app=couchbase-master-pod
Type:                  LoadBalancer
IP:                    10.0.57.201
LoadBalancer Ingress:  a94f1f286590c11e68e100283628cd6c-1110696566.us-west-2.elb.amazonaws.com
Port:                  <unset> 8091/TCP
NodePort:              <unset> 30019/TCP
Endpoints:             10.244.2.3:8091
Session Affinity:      None
Events:
  FirstSeen  LastSeen  Count  From                  Type    Reason                Message
  2m         2m        1      {service-controller}  Normal  CreatingLoadBalancer  Creating load balancer
  2m         2m        1      {service-controller}  Normal  CreatedLoadBalancer   Created load balancer

Among other details, the address shown next to LoadBalancer Ingress is relevant for us. This address is used to access the Couchbase Web Console.

Wait ~3 minutes for the load balancer to be ready to receive requests. The Couchbase Web Console is accessible at <ip>:8091.

The image used in the configuration file is configured with the Administrator username and password password. Enter the credentials to see the console.

Click on Server Nodes to see how many Couchbase nodes are part of the cluster. As expected, it shows only one node.

Click on Data Buckets to see a sample bucket that was created as part of the image. This shows the travel-sample bucket is created and has 31,591 JSON documents.

Create Couchbase “worker” Replication Controller
Now, let’s create a worker Replication Controller. It can be created using the configuration file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-worker-rc
spec:
  replicas: 1
  selector:
    app: couchbase-worker-pod
  template:
    metadata:
      labels:
        app: couchbase-worker-pod
    spec:
      containers:
      - name: couchbase-worker
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: "WORKER"
        - name: COUCHBASE_MASTER
          value: "couchbase-master-service"
        - name: AUTO_REBALANCE
          value: "false"
        ports:
        - containerPort: 8091

This RC also creates a single replica of Couchbase using the same arungupta/couchbase:k8s image. The key differences here are:

The TYPE environment variable is set to WORKER.
This adds a worker Couchbase node to the cluster.

The COUCHBASE_MASTER environment variable is passed the value couchbase-master-service. This uses the service discovery mechanism built into Kubernetes so that the pods in the worker and the master can communicate.

The AUTO_REBALANCE environment variable is set to false. This ensures that the node is only added to the cluster but the cluster itself is not rebalanced. Rebalancing is required to redistribute data across the multiple nodes of the cluster. This is the recommended way, as multiple nodes can be added first, and then the cluster can be manually rebalanced using the Web Console.

Let’s create a worker:

kubectl create -f cluster-worker.yml
replicationcontroller "couchbase-worker-rc" created

Check the RC:

kubectl get rc
NAME                  DESIRED   CURRENT   AGE
couchbase-master-rc   1         1         6m
couchbase-worker-rc   1         1         22s

A new couchbase-worker-rc is created where the desired and the current number of instances are matching.

Get all pods:

kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
couchbase-master-rc-97mu5   1/1       Running   0          6m
couchbase-worker-rc-4ik02   1/1       Running   0          46s

An additional pod is now created. Each pod’s name is prefixed with the corresponding RC’s name. For example, a worker pod is prefixed with couchbase-worker-rc.

The Couchbase Web Console gets updated to show that a new Couchbase node is added.
This is evident from the red circle with the number 1 on the Pending Rebalance tab. Clicking on the tab shows the IP address of the node that needs to be rebalanced:
Scale Couchbase Cluster
Now, let’s scale the Couchbase cluster by scaling the replicas for the worker RC:
$ kubectl scale rc couchbase-worker-rc --replicas=3
replicationcontroller "couchbase-worker-rc" scaled
The updated state of the RC shows that 3 worker pods have been created:
$ kubectl get rc
NAME                  DESIRED   CURRENT   AGE
couchbase-master-rc   1         1         8m
couchbase-worker-rc   3         3         2m
This can be verified again by getting the list of pods:
$ kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
couchbase-master-rc-97mu5   1/1       Running   0          8m
couchbase-worker-rc-4ik02   1/1       Running   0          2m
couchbase-worker-rc-jfykx   1/1       Running   0          53s
couchbase-worker-rc-v8vdw   1/1       Running   0          53s
The Pending Rebalance tab of the Couchbase Web Console shows that 3 servers have now been added to the cluster and need to be rebalanced.
Rebalance Couchbase Cluster
Finally, click on the Rebalance button to rebalance the cluster. A message window showing the current state of the rebalance is displayed:
Once all the nodes are rebalanced, the Couchbase cluster is ready to serve your requests:
In addition to creating a cluster, Couchbase Server supports a range of high availability and disaster recovery (HA/DR) strategies. Most HA/DR strategies rely on a multi-pronged approach of maximizing availability, increasing redundancy within and across data centers, and performing regular backups.
Now that your Couchbase cluster is ready, you can run your first sample application.
For further information, check out the Couchbase Developer Portal and Forums, or see questions on Stack Overflow.
–Arun Gupta, Vice President, Developer Relations at Couchbase

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow @Kubernetesio on Twitter for the latest updates
Source: kubernetes

Docker Weekly | Roundup

This week, we’re taking a look at how to quickly create a swarm cluster, set up a mail forwarder on Docker, and better understand the new Docker 1.12.0 load-balancing feature. As we begin a new week, let’s recap our top 5 most-read stories for the week of August 7, 2016:

1. Docker Cheat Sheet: a quick reference guide on how to initialize swarm mode, build an image from the Dockerfile, and pull an image from a registry.
2. cURL with HTTP2 Support: build a Dockerfile to create a minimal, Alpine Linux-based image with support for HTTP2. Emphasis on keeping the generated image small and customizing curl by Nathan LeClaire.
3. Distributed Application Bundles: tutorial on how to create a demo swarm cluster composed of Docker machines and deploy a service using a dab file by Viktor Farcic.
4. Setting up Mail Forwarder: create email addresses for your domain, provide an address for the forwarded mail, and pass information to the Docker container via environment variables by Brian Christner.
5. Load-Balancing Feature: in-depth overview of what’s new in the Docker 1.12.0 load-balancing feature by Ajeet Singh Raina.


Source: https://blog.docker.com/feed/

Docker Hub Hits 5 Billion Pulls

Last week, the total number of image pulls from the Docker Hub Repository Service reached 5 billion. That’s an increase of 150% since just February. It’s pretty amazing for a three-year-old project. Docker Hub has become a part of the daily life of developers because it

Is a central, reliable, and secure service to host your own repositories and get access to high-quality images
Also serves Docker Cloud repos, working with your existing content and subscriptions

That means the over 650,000 registered users are pulling images over 13,000 times a minute. That’s almost twice as fast as in February, when we had 7,000 pulls a minute.
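As a rough sanity check on those figures (a back-of-the-envelope sketch; assumes a flat pull rate):

```shell
# Rough arithmetic behind the quoted pull rates.
now=13000   # pulls per minute today
feb=7000    # pulls per minute in February
per_day=$((now * 60 * 24))
echo "pulls per day now: $per_day"
# Current rate relative to February, times 100 (integer math):
echo "ratio x100: $(( now * 100 / feb ))"
```

At 13,000 pulls a minute that is roughly 18.7 million pulls a day, and the ratio of about 1.85x matches the "almost twice as fast" claim.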
Equally interesting, the total number of pulls of official images exceeded 1 billion at the same time. The Docker Official Repositories are a curated set of image repositories that are promoted on Docker Hub. Official images are scanned by Docker Security Scanning, making them the most secure base images you can use. In fact, you can see how secure each of them is by using Security Scanning. And you can use the same scanning service on your own private repositories during the free trial period.
If you haven’t gotten in on the fun yet, it’s really easy to get started. Create your Docker ID today and get one free private repo.

You can create a repository right there

Or you can just push from the command line
$ docker login
Login with your Docker ID to push and pull images from Docker. If you don’t have a Docker ID, head over to https://cloud.docker.com to create one.
Username (manomarks): manomarks
Password:
Login Succeeded
$ docker build -t manomarks/visualizer .
$ docker push manomarks/visualizer
And we’re building even more functionality with the new private beta of the Docker Store, which will provide a scalable self-service system for ISVs to publish and distribute trusted and enterprise-ready content. Head over to store.docker.com to give it a look.


Source: https://blog.docker.com/feed/

Docker Online Meetup #41: Docker Captains Share their Tips and Tricks for Built-In Docker Orchestration

It’s been nearly two weeks since Docker released Docker 1.12 as generally available for production environments, introducing a number of new features and concepts to the Docker project. Our team has already started to dig in and share their learnings with the community via blog posts, talks and peer-to-peer help. Docker Captains are technology experts awarded that distinction in part because of their passion for sharing their Docker knowledge with others. So, we’ve invited three of our Docker Captains to speak at the next Docker Online Meetup on August 31st and share their tips and tricks for using Docker 1.12.

Ajeet Singh Raina is currently working as a Technical Lead Engineer in the Enterprise Solution Group at Dell India R&D and has a solid understanding of a diverse range of IT infrastructure, system management, system integration engineering and quality assurance. Ajeet has a great passion for upcoming trends and technologies. He loves contributing to the open source space through writing and blogging @ http://www.collabnix.com.
Ajeet has shared a number of fantastic articles on Service Discovery, including Demonstrating Docker 1.12 Service Discovery with Docker Compose and How Service Discovery works under Docker Engine 1.12. In the meetup, Ajeet will quickly share his key takeaways and the best ways to use Docker 1.12 Service Discovery.

Viktor Farcic is a Senior Consultant at CloudBees. He has coded in a plethora of languages and can often be found speaking about Docker around the world. His big passions are microservices, Continuous Integration, Delivery and Deployment (CI/CD) and Test-Driven Development (TDD). He wrote The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices and the Test-Driven Java Development books. His random thoughts and tutorials can be found on his blog TechnologyConversations.com.
If you haven’t had the chance to check out Viktor’s Docker Swarm Introduction and Integrating Proxy With Docker Swarm (Tour Around Docker 1.12 Series), you are going to want to make some time to dive in. Viktor covers the basics of how Swarm in Docker v1.12 works and then deep-dives into some of the more complicated aspects. In the meetup, Viktor will share best practices for setting a Swarm cluster and integrating it with HAProxy.

For 20 years, Bret Fisher has designed, built, and operated distributed systems from 4 to 4000. He currently focuses on DevOps-style activities in the public cloud and enterprise. Bret works on immutable infrastructure, automation, containers, CI/CD, and cloud monitoring, and is an occasional JavaScript developer. He spends his free time in Virginia’s local, thriving tech scene, helping lead local Code for America and Docker meetup groups. He basically spends his days helping people. Bret lives at the beach. Dogs over cats.
Bret frequents Docker forums to lend a helping hand when others get stuck, so he knows a thing or two about how to optimize getting started with 1.12. Bret will share his favorite Docker 1.12 command options and aliases that will make your life easier including cli aliases for quick container management; the shortest path to secure production-ready swarm; how to use cli filters for easier management of larger swarms; and docker remote cli security setup.
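Bret’s actual alias list isn’t reproduced here; to give a flavor of the idea, aliases like these (my own illustrative examples, not his) shorten the container-management commands he describes:

```shell
# Hypothetical convenience aliases for common Docker 1.12 commands
# (illustrative examples only; not Bret's actual list).
alias dps='docker ps --format "table {{.Names}}\t{{.Status}}"'
alias dsls='docker service ls'
# Filter service tasks down to the ones actually running:
alias dpsr='docker service ps -f "desired-state=running"'
# The alias builtin with a name prints what that alias expands to:
alias dps
```

Defining short aliases like these in your shell profile makes the long `--format` and `-f` flags reusable without retyping them.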
Register now to attend the meetup and learn tips and tricks for using Docker 1.12!


Source: https://blog.docker.com/feed/

Your Docker Agenda in August

From webinars to workshops, to conference talks, check out our list of events that are coming up in August!

North America | South America | Europe | Oceania | Asia | Africa | Official Docker Training Courses
 


Official Docker Training Courses
View the full schedule of instructor-led training courses here! Descriptions of the courses are below.

Docker Datacenter Training Series
Introduction to Docker
Docker Administration and Operations
Advanced Docker Operations
Managing Container Services with Universal Control Plane
Deploying Docker Datacenter
User Management and Troubleshooting UCP

North America
 
Aug 3rd: Docker Meetup at Docker HQ – San Francisco, CA
Come and join us at Docker HQ on Wednesday for our 47th meetup! Ben Bonnefoy, a member of the Docker technical staff, will give an insight into Docker for Mac and Docker for Windows, and then Nishant Totla, a software engineer on the core open source team, will give some updates on Docker 1.12. This will be followed by a talk by Neil Gehani, a Sr. Product Manager at HPE, on in-cluster testing. It will be a fun evening of learning, exchanging ideas and networking with pizza, beer and plenty of Docker stickers for everyone.
RSVP
Aug 3rd: Docker Meetup at Meltmedia – Tempe, AZ
This meetup will focus on Docker for AWS, specifically running distributed apps from localhost to AWS.
RSVP
Aug 4th: Docker Meetup at Rackspace – Austin, TX
A discussion about Docker tips and tricks.
RSVP
Aug 9th: Docker Meetup at CA Technologies – Denver, CO
A talk about moving from SaaS to on-premise with Docker: in particular, how Docker made it possible to deploy a SaaS web application into firewalled networks, and a journey of orchestrating a microservice architecture from raw bash scripts to Replicated.
RSVP
Aug 11th: Docker Meetup at Full Sail Campus – Orlando, FL
Docker Ecosystem and Use Case talks, followed by networking.
RSVP
Aug 11th: Docker Meetup at Braintree – Chicago, IL
Ken Sipe will take the group through a look at the anatomy of a container, including control groups (cgroups) and namespaces. Then there will be a discussion about Java’s memory management and GC characteristics and how JRE characteristics change based on core count.
RSVP
Aug 16th: Docker Meetup at AEEC Innovation Lab – Alexandria, VA
Docker Captain Phil Estes will present.
RSVP
Aug 16th: Docker Meetup at Datastax – Santa Clara, CA
Databases, Image Management, In-cluster and Chaos Testing talks by Baruch Sadogursky, Ben Bromhead and Neil Gehani.
RSVP
Aug 16th: Docker Meetup at Impact Hub – Santa Barbara, CA
This meetup will be about leveraging Docker + Compose for a real-world dev environment. James Brown from Invoca will discuss how the move to Docker has benefited their development process.
RSVP
Aug 18th: Docker Meetup at CirrusMio – Lexington, KY
Come and learn how others are using Docker! There will be two demos/talks scheduled for this meetup. The first will be about using Jenkins to build containers and the second will be about Docker in production.
RSVP
Aug 18th: Docker Meetup in Minneapolis – Minneapolis, MN
The Container Summit City Series comes to Minneapolis on August 18th to continue the conversation surrounding containers in production! Bryan Cantrill, CTO of Joyent, will be joined by other expert speakers from companies that have been running containers in production for years and have experience with what solution stacks work best and what pitfalls to avoid.
RSVP
Aug 22nd: Docker Meetup at Issuetrak – Virginia Beach, VA
Bret Fisher will tell all about DockerCon 2016 and what’s in store for Docker in 1.12.
Aug 22nd – 24th: LinuxCon/ContainerCon – Toronto, Canada
There’s plenty of us at LinuxCon/ContainerCon this year! Come see us at our booth to meet the Docker speakers and pick up your swag.
Aug 23rd: Docker and NATS Cloud Native Meetup during LinuxCon – Toronto, Canada
The Docker Toronto meetup group and the Toronto NATS Cloud Native and IoT meetup group are joining forces to bring you a mega-meetup during LinuxCon! Riyaz Faizullabhoy from Docker will present on ‘The Update Framework’, Diogo Monteiro will discuss implementing microservices with NATS, and Raffi Der Haroutiounian will give an overview of NATS, Docker and microservices.
Aug 23rd: Docker Meetup at the Iron Yard – Houston, TX
Join us for our next meetup event!
RSVP
Aug 24th: Docker Meetup at CodeGuard – Atlanta, GA
Talk by Eldon Stegall entitled ‘Abusing The Bridge: Booting a baremetal cluster from a Docker container.’
RSVP
Aug 28th – 31st: VMworld 2016 US – Las Vegas, NV
Docker returns to VMworld this year, and in Las Vegas! We’re launching our newest and biggest booth yet, so be sure to catch us there. Yes, there will be swag given away.
Aug 31st: Docker Meetup in Salt Lake City – Salt Lake City, UT
Come for a tutorial on new Docker 1.12 features and a review of DockerCon 2016 by Ryan Walls.
RSVP

South America
 
Aug 4th: Docker Meetup at Globant – Córdoba, Argentina
Come for a talk on Docker for AWS. Talks by Florencia Caro, Ruben Dopazo, Carlos Santiago Moreno and Luis Barrueco.
RSVP
Aug 6th: Docker Meetup at Universidad Interamericana de Panamá – Panamá, Panama
An introduction to Docker and Docker Cluster.
RSVP
Aug 9th: Docker Meetup at VivaReal – Sao Paulo, Brazil
RSVP
Aug 13th: Docker Meetup at Microsoft Peru – Lima, Peru
Join us for a DockerCon recap.
RSVP
Aug 20th: Docker Meetup at Auditório-Unijorge Campus Comércio – Salvador, Brazil
This is the beginning of the Docker Tour: the Docker Salvador meetup group’s initiative to spread Docker technology among IT students in Salvador. This event will have two lectures for beginners, where they can install the tool and learn Docker at ease in a friendly environment.
RSVP
Aug 23rd: Docker Meetup at Auditório Tecnopuc – Porto Alegre, Brazil
A meetup to discuss PHP and Docker.
RSVP

Europe
 
Aug 3rd: Docker HandsOn – Meet-Repeat C#+1 – Hamburg, Germany
Aug 4th: Docker Meetup at Skyscanner Glasgow – Glasgow, United Kingdom
What’s new in Docker Land (@rawkode and @GJTempleton). Guy and I will be walking you through all the latest developments in Docker Land, including Docker Engine 1.12, Docker Compose 1.8, and Docker for Mac and Windows. As well as these Docker updates, we’ll be providing a quick review of DockerCon 2016 and highlighting some of the best talks for you to watch in your own time.
RSVP
Aug 8th: Docker Talk at the Golang UK Conference – UK
Speaker: Docker Captain Tiffany Jernigan
Aug 9th: IoT RpiCar and ASP.NET Core + Docker – Bucharest, Romania
Aug 10th: Docker Meetup at KWORKS – Istanbul, Turkey
Dockerizing a Complex Application Stack [w/ Istanbul DevOps]
Aug 24th: Docker Meetup at Pipedrive – Tallinn, Estonia
Let’s share and discuss our experience with the Docker ecosystem. More details on the content coming up!
RSVP
Aug 24th: Docker Meetup at Elastx – Stockholm, Sweden
Continuously Deploying Containers to a Docker Swarm Cluster. Speaker: Viktor Farcic (Docker Captain), Senior Consultant, CloudBees. Abstract: Many of us have already experimented with Docker – for example, running one of the pre-built images from Docker Hub. It is possible that your team might have recognized the benefits that Docker, in conjunction with experimentation, provides in building microservices and the advantages the technology could bring to development, testing, integration, and, ultimately, production.
RSVP
Aug 25th: Day of Containers – Stockholm, Sweden
Andrey Devyatkin and Viktor Farcic (Docker Captain) will give the talk ‘Docker 101.’ If you are new to Docker, this session is for you! In this session you will learn all the basics of Docker and its main components. We will go through the concept of containers, writing your own Dockerfiles, connecting data volumes, and basic orchestration with Compose and Swarm. Bring your laptops!
Aug 28th: Docker Meetup at Praqma – Copenhagen, Denmark
Continuously Deploying Containers to a Docker Swarm Cluster. Speaker: Viktor Farcic, Docker Captain & Senior Consultant, CloudBees. Abstract: Many of us have already experimented with Docker – for example, running one of the pre-built images from Docker Hub. It is possible that your team might have recognized the benefits that Docker, in conjunction with experimentation, provides in building microservices and the advantages the technology could bring to development, testing, integration, and, ultimately, production.
RSVP
Aug 28th: Docker Talk at Agile Peterborough – Peterborough, UK
Speaker: Docker Captain Alex Ellis
Aug 28th: Docker Pre-Conference Meetup – Praqma, Copenhagen
Speaker: Docker Captain Viktor Farcic
Aug 29th: Docker Meetup at Praqma – Copenhagen, Denmark
Laura Frank (Docker Captain) – ‘Stop being lazy and test your software.’ Testing software is necessary, no matter the size or status of your company. Introducing Docker to your development workflow can help you write and run your testing frameworks more efficiently, so that you can always deliver your best product to your customers, and there are no excuses for not writing tests anymore. Jan Krag – ‘Docker 101.’ If you are new to Docker, this session is for you! In this session you will learn all the basics of Docker and its main components.
Viktor Farcic (Docker Captain)

Aug 31st: Docker Meetup at INCUBA – Aarhus, Denmark
Rohde & Schwarz will give a talk about how they use Docker for development and test. HLTV.org will give a talk about how they use Docker to easily deploy microservices as part of their web platform.
RSVP
Aug 31st – Sep 2nd: Software Circus – Amsterdam, Netherlands
In Amsterdam for Software Circus? So is Docker! Speaking from Docker: Ben Firshman

Asia
 
Aug 20th: Docker Meetup at Red Hat India Pvt. Ltd – Bangalore, India
Docker for AWS and Azure – Neependra Khare (Docker Captain), CloudYuga. Service Discovery and Load Balancing with Docker Swarm – Ajeet S. Raina (Docker Captain), Dell. Docker Application Bundle Overview – Thomas Chacko. Logging as a Service using Docker – Manoj Goyal, Cisco. SDN-Like App Delivery Controller using Docker Swarm – Prasad Rao, Avi Networks.
RSVP

Oceania 
Aug 1st: Docker Meetup in Auckland – Auckland, New Zealand
Learn about all the new Docker features and offerings announced at DockerCon16 in Seattle!
RSVP
Aug 8th: Docker Meetup at Commbank – Sydney, Australia
The Big Debate: AWS vs Azure vs Google Cloud vs EMC Hybrid Cloud. One of the questions will help bring to light each platform’s integration with the Docker ecosystem.
RSVP

Africa
Aug 6th: Docker Meetup at LakeHub – Kisumu, Kenya
Please join us to learn about all the exciting announcements from DockerCon! Talk 1: What’s New in Docker 1.12, by William Ondenge. In this presentation, William will describe Docker 1.12’s new features and help you get your hands on the latest builds of Docker to try them on your own.
RSVP
Source: https://blog.docker.com/feed/

Continuous Integration Testing on Docker Cloud. It’s Dead Simple

This is a guest post by Stephen Pope & Kevin Kaland from Project Ricochet
Docker Cloud is a SaaS solution hosted by Docker that gives teams the ability to easily manage, deploy, and scale their Dockerized applications.

The Docker Cloud service features some awesome continuous integration capabilities, especially its testing features. Once you understand the basics, I’ve found they are remarkably easy to use. The fact is, continuous integration covers a wide range of items — like automated builds, build testing, and automated deployment. The Docker Cloud service makes features like automated builds and deployment quite obvious, but the testing features can be a little harder to find, even though they are in plain sight!
In this piece, my aim is to walk you through the Docker Cloud service’s testing capabilities in a straightforward manner. By the end, I hope you’ll agree that it’s really dead simple!
So, let’s begin with the first task. Before we can test our builds, we need to automate them. We’ll use GitHub to set this up here, but note that it works the same way in Bitbucket.
Set Up an Automated Build

1. Log into Docker Cloud using your Docker ID.
2. On the landing page (or in the left-hand menu), click on Repositories.
3. If you don’t already have a repository, you’ll need to click the Create button on the Repository page.
4. Click the Builds tab on the Repository page. If this is your first autobuild, you should see this screen:

To connect your GitHub account, click the Learn more link.
5. Once on the Cloud Settings page, look for the Source Providers section. Click on the plug  icon to connect your GitHub account. Authorize the connection on the screen that follows.

6. When your GitHub account is connected, go back to the Repository page and click Configure Automated Builds. Now we are in business!
7. Select the GitHub source repository you want to build from.

8. In the Build Location section, choose the option to Build on Docker Cloud’s infrastructure and select a builder size to run the build process on. Accept the default Autotest option for now (we’ll describe the Autotest options in detail in a moment).

Make sure you are satisfied with the Tag Mappings; these map your Docker image build tags (e.g. latest, test, production, etc.) to your GitHub branches. Ensure that Autobuild is enabled. If your Dockerfile needs any Environment Variables at build time, you can add them here. (Ours doesn’t.) Once you’ve set everything up, click Save.
The specified tag will now be built when you push to the associated branch:

Set Up Automated Deployment
After the build images are created, you can enable automated deployment.
If you are inclined to build images automatically, you may also want to automate the deployment of updated images once they are built. Docker Cloud makes this easy:
1. To get started, you will need a service to deploy (a service is a collection of running containers of a particular Docker image). A good example of a service might be our production node app, running 7 containers with a set of environment variables set up for that specific instance of the app. You might also have an equivalent service for development and testing (where you can test code before production). Here is a good read on starting your first service.
2. Edit the service that is using the Docker image.
3. In the General Settings section, ensure that Autoredeploy is enabled:

4. Save changes and you should be set.
Autotest Builds before Deployment
Remember when I said testing your builds was dead simple? Well, check this out. All you need to do is enable Autotests.
On the Repository page, navigate to the Builds tab and then click Configure Automated Builds. Within the Autotest section, three options are available:

Off will test commits only to branches that are using Autobuild to build and push images.
Source repository will test commits to all branches of the source code repository, regardless of their Autobuild setting.
Source repository and external pull requests will test commits to all branches of the source code repository, including any pull requests opened against it.

Before you turn that on, you’ll need to set up a few assets in your repository to define the tests and how they should be run. You can find examples of this in our Production Meteor using Docker Git repo.
This boils down to a single basic file — plus some optional ones in case you need them.
Our docker-compose.test.yml will serve as the main entry point for testing. It lets you define a “sut” service. This enables you to run the main tests and various other services that may be needed to test your build. In our example, you may notice that it simply outputs “test passed” — but that line is where the magic happens. If your test returns 0, your test has passed. If it returns non-zero, it hasn’t. Essentially, you perform a simple call from the YAML file or, for more complex tests, from a more robust bash script.
Let’s review a YAML compose file example from a blog on automated testing that uses a bash script and some additional features:
sut:
 build: .
 dockerfile: Dockerfile.test
 links:
    - web
web:
 build: .
 dockerfile: Dockerfile
 links:
    - redis
redis:
 image: redis
Here, we define a sut service, along with some build instructions and an additional dockerfile for the tests. With this, you should be able to build a separate image for testing, instead of using the image for your build. That enables you to have different packages and files for testing that won’t be included in your application build.
Dockerfile.test
FROM ubuntu:trusty
RUN apt-get update && apt-get install -yq curl && apt-get clean
WORKDIR /app
ADD test.sh /app/test.sh
CMD ["bash", "test.sh"]
Here you’ll notice the final CMD is a test.sh bash script. This script will execute and return a 0 or 1 based on the test results.
Let’s take a quick look at the test.sh script:
test.sh
sleep 5
if curl web | grep -q '<b>Visits:</b> '; then
 echo "Tests passed!"
 exit 0
else
 echo "Tests failed!"
 exit 1
fi
You’ll see the script is doing a simple curl call against the test application to see if some text appears on the page. If it does, the test passed. If not, the test will fail.
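The pass/fail contract is nothing more than the process exit code. Stripped of curl and the web service, the same pattern looks like this (a runnable stand-in where a canned string replaces the HTTP response):

```shell
# Same pass/fail logic as test.sh, with a canned response instead of curl.
response='<b>Visits:</b> 3'
if echo "$response" | grep -q '<b>Visits:</b> '; then
  echo "Tests passed!"
  status=0
else
  echo "Tests failed!"
  status=1
fi
# Docker Cloud reads this value as the sut container's exit code:
# 0 means the build passes, anything else fails it.
```

Swap the canned string back for `curl web` and exit with `$status`, and you have the original script.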
Remember how easy I said this was to implement on Docker Cloud? That’s all there is to it! Additionally, once you’ve mastered the basics, more advanced integrations can be done with build hooks.
Of course, building the tests for a complete application will be a much larger task than described here, but the point is you’ll be able to focus on the tests, not how to squeeze them into your CI workflow. Docker Cloud makes the setup and implementation super easy. Once you understand these basic components, you should be able to set up our test Meteor service in a matter of minutes.
Alright, that’s it for now. I hope this piece helped guide you through the process fairly easily and, more importantly, showcases the cool testing CI workflow Docker Cloud has to offer. If you have additional questions or comments, make sure to head over to the Docker Cloud Forum, where Docker technical staff will be glad to help. Here are some related posts that should prove helpful on your journey. Enjoy!
Get Docker Cloud for Free – https://cloud.docker.com/

Docker Cloud Automated Repository Testing
Basic Voting Webapp (used at DockerCon for various examples)
An in-depth post on automated testing on DigitalOcean
Meteor Docker Example with Test (used in this Blog)


Source: https://blog.docker.com/feed/