Q&A: 15 Questions AWS Users Ask About DDC For AWS

Docker is deployed across all major cloud service providers, including AWS. So when we announced Docker Datacenter for AWS (which makes it even easier to deploy DDC on AWS) and showed live demos of the solution at AWS re:Invent 2016, it was no surprise that we received a ton of interest in the solution. Docker Datacenter for AWS, as you can guess from its name, is now the easiest way to install and stand up the Docker Datacenter (DDC) stack on an AWS EC2 cluster. If you are an AWS user looking for an enterprise container management platform, this blog will help answer the questions you have about using DDC on AWS.
In last week’s webinar, Harish Jayakumar, a Solutions Engineer at Docker, provided a solution overview and a demo to showcase how the tool works and some of the cool features within it. You can watch the recording of the webinar below:

We also hosted a live Q&A session at the end, where we opened up the floor to the audience and did our best to get through as many questions as we could. Below are fifteen of the questions we received from the audience. We selected these because we believe they do a great job of representing the overall set of inquiries we received during the presentation. Big shout out to Harish for tag-teaming the answers with me.
Q 1: How many VPCs are required to create a full cluster of UCP, DTR and the workers?
A: The DDC template creates one new VPC, along with its subnets and security groups. More details here: https://ucp-2-1-dtr-2-2.netlify.com/datacenter/install/aws/
However, if you want to use DDC with your existing VPC, you can always deploy DDC directly without using the CloudFormation template.
Q 2: Is the $150/month cost per instance? Is this for an EC2 instance?
A: Yes, the $150/month cost is per EC2 instance. This is our monthly subscription model and it is purchasable directly on Docker Store. We also offer annual subscriptions, currently priced at $1,500 per node/per year or $3,000 per node/per year. You can view all pricing here.
Q 3: Would you be able to go over how to view logs for each container? And what’s the type of log output that UCP shows in the UI?
A: Within the UCP UI, you can click on the “Resources” tab and then go to “Containers.” Once you have selected “Containers,” you can click on each individual container and see its logs within the UI.
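For those who prefer the command line, the same log stream is available from any client pointed at the cluster. A minimal sketch (the container name is a placeholder):

```
# Show the logs of one container; "my-container" is a placeholder name.
docker logs my-container

# Follow the stream and prefix each line with a timestamp:
docker logs --follow --timestamps my-container
```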

Q 4: How does the resource allocation work? Can we over allocate CPU or RAM?
A: Yes. By default, each container’s access to the host machine’s CPU cycles is unlimited, but you can set various constraints to limit a given container’s access to the host machine’s CPU cycles. For RAM, Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory. Alternatively, Docker can set soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. You can find more details here: https://docs.docker.com/engine/admin/resource_constraints/
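As a quick illustration of those knobs (the image and values are placeholders, not from the webinar):

```
# --memory sets the hard limit, --memory-reservation the soft limit that kicks
# in under host memory contention, and --cpu-shares a relative CPU weight
# (the default weight is 1024).
docker run -d --memory=512m --memory-reservation=256m --cpu-shares=512 nginx
```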
Q 5: Can access to the console via UCP be restricted via RBAC constraints?
A: Yes. Here is a blog explaining access controls in detail:

https://blog.docker.com/2016/03/role-based-access-control-docker-ucp-tutorial/

Q 6: Can we configure alerting from Docker Datacenter based on user-definable criteria (e.g., resource utilization of services)?
A: Yes, but with a little tweaking. Everything with Docker is event-driven, so you can configure alerts to trigger for each event and take the necessary action. Within the UI, you can see all of the resource usage listed, and you have the ability to set how you want to see the notifications associated with it.
Q 7: Is there a single endpoint in front of the three managers?
A: Within UCP, we suggest teams deploy three managers to ensure high availability of the cluster. As for the single endpoint, you can configure one if you would like. For example, you can configure an ELB in AWS to sit in front of those three (3) managers; clients can then reach that one load balancer instead of accessing an individual manager by its IP.
Q 8: Do you have to use DTR or can you use alternative registries such as AWS ECR, Artifactory, etc.?
A: With the CloudFormation template, it is only DTR. Docker Datacenter is the end-to-end enterprise container management solution, and DTR and UCP are integrated, meaning they share several components. They also have SSO enabled between the components, so the same LDAP/AD group can be used. The solution also ensures a secure software supply chain, including image signing and scanning, and that chain is only made possible when using the full solution. Images are signed and scanned by DTR, and because of the integration you can simply configure UCP not to run containers based on images that haven’t been signed. We call this policy enforcement.
Q 9: So there is a single endpoint in front of the managers (like a load balancer) that I can configure my Docker CLI to point to?
A: Yes, that is correct.
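In practice, the usual way to point the Docker CLI at that endpoint is a UCP client bundle. A hedged sketch, where the bundle filename and the load balancer DNS name behind it are placeholders:

```
# Download a client bundle for your user from the UCP UI, then:
unzip ucp-bundle-admin.zip -d ucp-bundle && cd ucp-bundle
source env.sh   # exports DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY
docker info     # the CLI now talks to the UCP endpoint, not the local engine
```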
Q 10: How many resources on the VMs or physical machines are needed to run Docker Datacenter on-prem? Let’s say for three UCP manager nodes and three worker nodes.
A: The CloudFormation template does it all for you. However, if you plan to install DDC outside of the CloudFormation template, here are the infrastructure requirements you should consider:

https://docs.docker.com/ucp/installation/system-requirements/

(installed on the Commercially Supported Engine: https://docs.docker.com/cs-engine/install/)
Q 11: How does this demo of DDC for AWS compare to https://aws.amazon.com/quickstart/architecture/docker-ddc/
A: It is the same. But stay tuned, as we will be providing an updated version in the coming weeks.
Q 12: If you don’t use a routing mesh, would you need to route to each specific container? How do you know their individual IPs? Is it possible to have a single-tenant type of architecture where each user has his own container running?
A: The routing mesh is available as part of the engine. It’s turned on by default, and it routes to containers cluster-wide. Before the routing mesh (prior to Docker 1.12), you had to route to a specific container and its port, though it did not have to be the IP specifically: you can route host names to specific services from within the UCP UI. We also introduced the concept of aliases, where you can address a container by its name and the engine’s built-in DNS handles the routing for you. However, I would encourage looking at the routing mesh, which is available in Docker 1.12 and above.
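To make the routing-mesh behavior concrete, here is a minimal sketch (the service name and image are illustrative):

```
# Publishing port 8080 makes the service reachable on port 8080 of every node
# in the cluster, no matter which nodes the two replicas actually land on.
docker service create --name web --replicas 2 --publish 8080:80 nginx
```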
Q 13: Are you using Consul as a K/V store for the overlay network ?
A: No, we are not using Consul as the K/V store, nor does Docker require an external K/V store. The state is stored using a distributed database on the manager nodes called the Raft store. Manager nodes are part of a Raft consensus group, which enables them to share information and elect a leader. The leader is the central authority maintaining the state, which includes lists of nodes, services and tasks across the swarm, in addition to making scheduling decisions.
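You can see the consensus group from any manager node; one manager reports Leader under MANAGER STATUS and the others report Reachable:

```
# List the nodes in the swarm, including each manager's Raft role.
docker node ls
```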
Q 14: How do you work with node draining in the context of Auto Scaling Groups (ASG)?
A: A node drain removes all the workloads from a node and prevents the node from receiving new tasks from the manager. The manager stops the tasks running on the node and launches replica tasks on nodes with ACTIVE availability. The node does remain in the ASG group.
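For example, before an ASG scale-in event you could drain the node yourself (the node ID is a placeholder):

```
# Move all tasks off the node and stop scheduling new ones onto it.
docker node update --availability drain node-3

# The node stays in the swarm (and in the ASG); return it to service with:
docker node update --availability active node-3
```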
Q 15: Is DDC for AWS dependent on AWS EBS?
A: We use EBS volumes for the instances, but we aren’t using them for persistent storage; they act more as a local disk cache. Data there will go away if the instance goes away.
To get started with Docker Datacenter for AWS, sign up for a free 30-day trial at www.docker.com/trial.
Enjoy!
 


The post Q&A: 15 Questions AWS Users Ask About DDC For AWS appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

A dash of Salt(Stack): Using Salt for better OpenStack, Kubernetes, and Cloud — Q&A

On January 16, Ales Komarek presented an introduction to Salt. We covered the following topics:

The model-driven architectures behind how Salt stores topologies and workflows

How Salt provides solution adaptability for any custom workloads

Infrastructure as Code: How Salt provides not only configuration management, but entire life-cycle management

How Continuous Delivery/Integration/Management fits into the puzzle

How Salt manages and scales parallel cloud deployments that include OpenStack, Kubernetes and others

What we didn’t do, however, is get to all of the questions from the audience, so here’s a written version of the Q&A, including those we didn’t have time for.
Q: Why Salt?
A: It’s Python, it has a huge and growing base of imperative modules and declarative states, and it has a good message bus.
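For a taste of what that message bus buys you, here are a few illustrative commands run from the Salt master (the grain value is a placeholder):

```
salt '*' test.ping                    # fan-out liveness check across all minions
salt '*' state.apply                  # declaratively apply each minion's states
salt -G 'os:Ubuntu' pkg.install vim   # imperative module call on a targeted subset
```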
Q: What tools are used to initially provision Salt across an infrastructure? Cobbler, Puppet, MAAS?
A: To create a new deployment, we rely on a single node, where we bootstrap the Salt master and Metal-as-a-Service (formerly based on Foreman, now Ironic). Then we control the MaaS service to deploy the physical bare-metal nodes.
Q: How broad a range of services do you already have recipes for, and how easy is it to write and drop in new ones if you need one that isn&8217;t already available?
A: The ecosystem is pretty vast. You can look at either https://github.com/tcpcloud or the formula ecosystem overview at http://openstack-salt.tcpcloud.eu/develop/extending-ecosystem.html. There are also guidelines for creating new formulas, which is a very straightforward process. A new service can be created in a matter of hours, or even minutes.
Q: Can you convert your existing Puppet/Ansible scripts to Salt, and what would I search to find information about that?
A: Yes, we have reverse-engineered automation for some of these services in the past. For example, we were deeply inspired by the Ansible module for Gerrit resource management. You can find some information on creating Salt formulas at https://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html, and we will be adding tutorial material here on this blog in the near future.
Q: Is there a NodeJS binding available?
A: If you meant the NodeJS formula to set up a NodeJS environment, yes, there is such a formula. If you mean bindings to the system, you can use the Salt API to integrate NodeJS with Salt.
Q: Have you ever faced performance issues when storing a lot of data in pillars?
A: We have not faced performance issues with pillars that are delivered by the reclass ENC. It has been tested up to a few thousand nodes.
Q: What front-end GUI is typically used with Salt monitoring (e.g., Kibana, Grafana, etc.)?
A: Salt monitoring uses Sensu or StackLight for the actual functional monitoring checks. It uses Kibana to display events stored in Elasticsearch and Grafana to visualize metrics coming from time-series databases such as Graphite or Influx.
Q: What is the name of the salt PKI manager? (Or what would I search for to learn more about using salt for infrastructure-wide PKI management?)
A: The PKI feature is well documented in the Salt docs, and is available at https://docs.saltstack.com/en/latest/ref/states/all/salt.states.x509.html.
Q: Can I practice installing and deploying SaltStack on my laptop? Can you recommend a link?
A: I’d recommend you have a look at http://openstack-salt.tcpcloud.eu/develop/quickstart-vagrant.html where you can find a nice tutorial on how to set up a simple infrastructure.
Q: Thanks for the presentation! Within Heat, I’ve only ever seen Salt used in terms of software deployments. What we’ve seen today, however, goes clear through to service, resource, and even infrastructure deployment! In this way, does Salt become a viable alternative to Heat? (I’m trying to understand where the demarcation is between the two now.)
A: Think of Heat as the part of the solution responsible for spinning up hardware resources such as networks, routers and servers, in a way that is similar to MaaS, Ironic or Foreman. Salt’s part begins where Heat’s part ends: after the resources are started, Salt takes over and finishes the installation/configuration process.
Q: When you mention Orchestration, how does salt differentiate from Heat, or is Salt making Heat calls?
A: Heat is more for hardware resource orchestration. It has some capability to do software configuration, but it is rather limited. We have created Heat resources that help to classify resources on the fly, and we also have Salt Heat modules capable of running a Heat stack.
Q: Will you be showing any parts of SaltStack Enterprise, or only FREE Salt Open Source? Do you use Salt in Multi-Master deployment?
A: We are using the open source version of SaltStack; the enterprise version offers little gain given the pricing model. In some deployments, we use Salt master HA setups.
Q: What HA engine is typically used for the Salt master?
A: We use two separate masters with shared storage provided by GlusterFS, on which the master’s and minions’ keys are stored.
Q: Is there a GUI ?
A: The creation of a GUI is currently under discussion.
Q: How do you enforce Role Based Administration in the Salt Master? Can you segregate users to specific job roles and limit which jobs they can execute in Salt?
A: We use the ACLs of the Salt master to limit each user’s options. This also applies to the Jenkins-powered pipelines, which we also manage with Salt, both on the job and the user side.
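As a hedged illustration of what such an ACL can look like (the user name and module list are placeholders, not our production policy), the Salt master config supports entries like this:

```
# Hypothetical excerpt appended to /etc/salt/master; "deploy_user" may only
# run the two functions listed below.
cat <<'EOF' >> /etc/salt/master
publisher_acl:
  deploy_user:
    - test.ping
    - state.apply
EOF
```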
Q: Can you show the Salt files (.sls, pillar, …)?
A: You can look at the GitHub organization for existing formulas at https://github.com/tcpcloud, and a good example of pillars can be found at https://github.com/Mirantis/mk-lab-salt-model/.
Q: Is there a link for deploying Salt for Kubernetes? Any best practices guide?
A: The best place to look is the https://github.com/openstack/salt-formula-kubernetes README.
Q: Is SaltStack the same as what’s on saltstack.com, or is it a different project?
A: These are the same project. Saltstack.com is the company behind the Salt technology and provides support and enterprise versions.
Q: So far this looks like what Chef can do. Can you make a comparison, or focus on the ‘value add’ from Salt that Chef or Puppet don’t give you?
A: The replaceability/reusability of the individual components is very easy, as all formulas are ‘aware’ of the rest and share a common form and a single dependency tree. This is a problem with community-based formulas in either of the other tools, as they are not very compatible with each other.
Q: In terms of purpose, is there any difference between SaltStack vs Openstack?
A: Apart from the fact that SaltStack can install OpenStack, Salt can also provide virtualization capabilities of its own. However, Salt’s options there are very limited, while OpenStack supports complex production-level scenarios.
Q: Great webinar guys. Ansible seems to have a lot of traction as means of deploying OpenStack. Could you compare/contrast with SaltStack in this context?
A: With Salt, the OpenStack services are just part of a wider ecosystem; the main advantage comes from the consistency across all services/formulas and from the support metadata that provides documentation and monitoring features.
Q: How is Salt better than Ansible/Puppet/Chef ?
A: The biggest difference is the message bus, which lets you control, and get data from, the infrastructure with great speed and concurrency.
Q: Can you elaborate on Mirantis Fuel vs. SaltStack?
A: Fuel is an open source project that was (and is) designed to deploy OpenStack from a single ISO-based artifact, and to provide various lifecycle management functions once the cluster has been deployed. SaltStack is designed to be more granular, working with individual components or services.
Q: Are there plans to integrate SaltStack in to MOS?
A: The Mirantis Cloud Platform (MCP) will be powered by Salt/Reclass.
Q: Is Fuel obsolete or it will use Salt in the background instead of Puppet?
A: Fuel in its current form will continue to be used for deploying Mirantis OpenStack in the traditional manner (as a single ISO file). We are extending our portfolio of lifecycle management tools to include appropriate technologies for deploying and managing open source software in MCP. For example, Fuel CCP will be used to deploy containerized OpenStack on Kubernetes. Similarly, Decapod will be used to deploy Ceph. All of these lifecycle management technologies are, in a sense, Fuel. Whether a particular tool uses Salt or Puppet will depend on what it’s doing.
Q: MOS 10 release date?
A: We’re still making plans on this.
Thanks for joining us, or if you missed it, please go ahead and view the webinar.
The post A dash of Salt(Stack): Using Salt for better OpenStack, Kubernetes, and Cloud — Q&A appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

Now Open: 2017 Docker Scholarship & Meet the 2016 Recipients!

Last year, we announced our inaugural Docker Scholarship Program in partnership with Hack Reactor. The 2017 scholarship to Hack Reactor’s March cohort is now open and accepting applications.
 
 
The scholarship includes full tuition to Hack Reactor, pending program acceptance, and recipients will be paired with a Docker mentor.
Applications will be reviewed and candidates who are accepted into the Hack Reactor program and meet Docker’s criteria will be invited to Docker HQ for a panel interview with Docker team members. Scholarships will be awarded based on acceptance to the Hack Reactor program, demonstration of personal financial need and quality of application responses. The Docker scholarship is open to anyone who demonstrates a commitment to advancing equality in their community. All gender and gender identities are encouraged to apply. Click here for more information.
 
Apply to the Docker Scholarship
 
We are excited to introduce our 2016 Docker scholarship recipients, Maurice Okumu and Savaughn Jones!
In their own words, learn more about Maurice and Savaughn below:
Maurice Okumu 
 
My name is Maurice Okumu and I was born and raised in Kenya. I came to the USA about three years ago after having lived in Dubai for more than five years where I met my wife while she was working for the military and based in Germany. We have a new baby born on the 24th of October 2016 whom we named Jared Russel.
I started coding more than one year ago, and most of my knowledge I gained online on platforms such as Khan Academy and Code Academy. Then I learned about Telegraph Academy and what they represented, and I was immediately drawn towards it. Telegraph aims to bridge the technology gap for the underrepresented in the field.
I am so excited that soon I will be able to seemingly create stuff out of thin air, and I am particularly excited about the prospect that I will be able to create animations and bring joy and laughter to people through my animations, as I remember growing up and seeing cartoons and how they made my day every time I watched them. Being able to be a small part of a community that will continue spreading laughter and happiness in the world is what really excites me about technology.
I have been attending Hack Reactor for two weeks now and it has been such a joy to learn so much in such a short period of time. The learning pace at Hack Reactor is very fast and very enjoyable at the same time, because every day I go home fulfilled with the thought that I am growing and becoming a better programmer each and every single day.
I would love to work for a medium to large company after graduation and learn even more about coding. I would also love to teach coding to kids and capture their imagination through technology. The support I am getting in my journey to become a software engineer is just amazing and overwhelming and it makes this journey very enjoyable and smoother than most undertakings I have been involved with.

Savaughn Jones
 
How did you hear about the Docker scholarship?
My college friend and Hack Reactor alumni told me about the Docker scholarship. I think he found out about it through a blog post.
Why did you choose Hack Reactor/Telegraph Academy and what excites you about coding?
Two of my college friends completed the Hack Reactor program and their lives improved exponentially. I have always wanted to get into coding and I heard that Hack Reactor was the Harvard of coding bootcamps.
You’ve been in the program a few weeks, describe your experience so far. What have you enjoyed the most?
I am amazed at how much I have learned in two months. I was always skeptical about learning enough to deserve the title of software engineer. The most amazing thing is the ability to learn new things.
What are your goals/plans after graduation?
I have applied for a Hacker in Residence position at Hack Reactor. It would be like a paid internship of sorts. Otherwise, my plan is to get a job ASAP and continue to pick up new skills and technologies. My ultimate goals are to develop for augmented reality platforms and start my own augmented reality based tabletop gaming company.


The post Now Open: 2017 Docker Scholarship & Meet the 2016 Recipients! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Docker & Prometheus Joint Holiday Meetup Recap

Last Wednesday we had our 52nd meetup at HQ, but this time we joined forces with the Prometheus user group to host a mega-meetup! There was a great turnout, and members were excited to see the talks on using Docker with Prometheus, OpenTracing and the new Docker playground, play-with-docker.
First up was Stephen Day, a Senior Software Engineer at Docker, who presented a talk entitled ‘The History of Metrics According to Me’. Stephen believes that metrics and monitoring should be built into every piece of software we create, from the ground up. By solving the hard parts of application metrics in Docker, he thinks it becomes more likely that metrics are a part of your services from the start. See the video of his intriguing talk and slides below.

“The History of Metrics According to Me” by Stephen Day from Docker, Inc.

‘The History of Metrics According to Me’ @stevvooe talking metrics and monitoring at the Docker SF meetup! @prometheusIO @CloudNativeFdn pic.twitter.com/6hk0yAtats
— Docker (@docker) December 15, 2016

Next up was Ben Sigelman, an expert in distributed tracing, whose talk ‘OpenTracing Isn’t Just Tracing: Measure Twice, Instrument Once’ was both informative and humorous. He began by describing OpenTracing and explaining why anyone who monitors microservices should care about it. He then stepped back to examine the historical role of operational logging and metrics in distributed system monitoring and illustrated how the OpenTracing API maps to these tried-and-true abstractions. To find out more and see his demo involving donuts watch the video below and slides.

Last but certainly not least were two of our amazing Docker Captains all the way from Buenos Aires, Marcos Nils and Jonathan Leibiusky! During the Docker Distributed Systems Summit in Berlin last October, they built ‘play-with-docker’. It is a Docker playground which gives you the experience of having a free Alpine Linux virtual machine in the cloud, where you can build and run Docker containers and even create clusters with Docker features like Swarm Mode. Under the hood, DIND (Docker-in-Docker) is used to give the effect of multiple VMs/PCs. Watch the video below to see how they built it and hear all about the new features.

@marcosnils & @xetorthio sharing at the Docker HQ meetup all the way from Buenos Aires! pic.twitter.com/kXqOZgClMz
— Docker (@docker) December 15, 2016

play-with-docker was a hit with the audience, and there was a line of attendees hoping to speak to Marcos and Jonathan after their talk! All in all, it was a great night, thanks to our amazing speakers, Docker meetup members, the Prometheus user group and the CNCF, who sponsored drinks and snacks.


The post Docker & Prometheus Joint Holiday Meetup Recap appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Bailian: From Brick & Mortar to Brick & Click using OpenStack, DevOps

Being an established player in a market can definitely have its advantages. If you’re big enough, there are advantages of scale and barriers to entry that can make it possible to get comfortable in your market.
But what happens when the market flips on its ear?
This was the situation in which Shanghai-based Bailian Group found itself several years ago. China’s largest retailer, Bailian is a chain of more than 6,000 grocery and department stores spread all over the country.
Many of the brick-and-mortar company’s online competitors, such as JD.com, Suning, and Taobao, were introducing new sites and campaigns, and other traditional enterprises were moving to a multi-channel strategy. In 2014, Bailian decided to join them.
Chinese consumers bought close to $600 billion in online goods during 2015, a 33 percent increase from the prior year. The company knew that if it were going to survive, it had to solve several major problems:

Lack of agility: Some applications were not cloud native and took months to update, and waiting for a new server could take weeks, slowing development of new applications to a crawl.
Server underutilization: As much hardware as Bailian was using, there was still a huge amount of unused capacity that represented wasted money. It had to be streamlined and simplified.

The company set out to create the largest offline-to-online commerce platform in the industry, and to do that, they had to replace their existing IT infrastructure.
Choosing a platform
“Our transition from traditional brick and mortar to omni-channel business presented a great opportunity but an equally large challenge,” says Lu Qichuan, Director of IaaS and Cloud Integration Architecture, Bailian Group. “We needed a large scale IT platform that would enable our innovation and growth.” Thinking big, Lu and his team outlined four guiding principles for their new platform — fast development, dynamic scaling, uncompromised availability, and low cost of operations. These guidelines would support aggressive online growth targets through 2020.
And it wasn’t as though Bailian was a stranger to online commerce. The company was already running a Shanghai grocery delivery service on its existing IT platforms. But it knew that its existing applications, which were not yet cloud-ready, weren’t just complex to support; they also required long development cycles. Add to this the desire to not just port legacy applications such as supply chain logistics and data management to the new, more flexible infrastructure, but also to reclaim applications running on public cloud, and the way forward was clear: private cloud was what Bailian needed.
But which? The company had already zeroed in on many of the advantages of OpenStack. In particular, Bailian Group was impressed by the platform’s continuous innovation, with rich new feature sets every six months. The IT team also valued OpenStack’s lower licensing and maintenance cost, flexible architecture, and its complete elimination of vendor lock-in.
Finally, Bailian Group is a state-owned enterprise, so when China’s Ministry of Industry and Information Technology (MIIT) officially declared its support for the OpenStack ecosystem, the decision was straightforward.
Bailian Group then selected the OpenStack managed services of UMCloud, the Shanghai-based joint venture between Mirantis and UCloud, China’s largest independent public cloud provider. UMCloud’s charter to accelerate OpenStack adoption and embrace China’s “Internet Plus” national policy closely matched Bailian Group’s platform strategy. “We found OpenStack to be the most open and flexible cloud technology, and Mirantis and UMCloud to be the best partners to help us launch our new omni-channel commerce platform,” says Lu.
Start small, think big, scale fast
Bailian Group’s IT leaders worked with Mirantis and UMCloud to quickly build a 20-node MVP (minimum viable product) using the latest OpenStack distribution and Fuel software to deploy and manage all cloud components. The architecture included Ceph distributed storage, Neutron and OVS software defined networking, KVM virtualization, F5 load balancers, and the StackLight logging, monitoring and alerting (LMA) toolchain.

With this early success, the team quickly added capacity and will soon reach 300 nodes and 5,000 VMs in this first phase of a three-phase, five-year plan. Already a handful of applications are in production on the new platform, including one that manages offline-to-online store advertisement images using distributed Ceph storage. The team has also added new cloud application development tools and processes that foster a CI/CD and DevOps culture and increase innovation and time-to-market. This development environment includes a PaaS platform powered by the Murano application catalog and Sahara for data analysis.
For phase two, the IT team anticipates expanding the OpenStack platform to 500 nodes across two data centers and more than 10,000 applications by the end of 2018. Phase two will also add a Services Oriented Architecture (SOA), microservices, and dynamic energy savings.
Embracing the strategy of starting small, thinking big, and scaling fast, phase three will extend to 3000 nodes and over 10 million virtual machines and applications by the end of 2020. Phase three will also add an industry cloud and SaaS services that drive prosperity of the retail business and show other retailers the processes and benefits of cloud platform innovation and offline to online digital transformation.
Interested in more information about how Bailian Group is making the most of OpenStack to solve its agility problems? Get the full case study.
The post Bailian: From Brick & Mortar to Brick & Click using OpenStack, DevOps appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis

What’s New in Docker Datacenter with Engine 1.12 – Demo Q&A

Last week we announced the latest release of Docker Datacenter (DDC) with Engine 1.12 integration, which includes Universal Control Plane (UCP) 2.0 and Docker Trusted Registry (DTR) 2.1. Now, IT operations teams can manage and secure their environment more effectively, and developers can self-service select from an even more secure image base. Docker Datacenter with Engine 1.12 boasts improvements in orchestration and operations, end-to-end security (image signing, policy enforcement, mutual TLS encryption for clusters), enables Docker service deployments and includes an enhanced UI. Customers also have backwards compatibility for Swarm 1.x and Compose.

 
To showcase some of these new features, we hosted a webinar where we provided an overview of Docker Datacenter, talked through some of the new features and showed a live demo of the solution. Watch the recording of the webinar below:
 

 
We hosted a Q&A session at the end of the webinar and have included some of the most common audience questions we received.
Audience Q&A
Can I still run and deploy my applications built with a previous Docker Engine version?
Yes. UCP 2.0 automatically sets up and manages a Swarm cluster alongside the native built-in swarm-mode cluster from Engine 1.12 on the same set of nodes. This means that when you use “docker run” commands, they are handled by the Swarm 1.x part of the UCP cluster, which ensures full backwards compatibility with your existing Docker applications. The best part is, no additional product installation or configuration is required by the admin to make this work. In addition to this, previous versions of the Docker Engine (1.10 and 1.11) will still be supported as part of Docker Datacenter.
 
Will Docker Compose continue to work in Docker Datacenter, i.e., deploy containers to multiple hosts in a DDC cluster, as opposed to only on a single host?
In UCP, “docker-compose up” will deploy to multiple hosts on the cluster. This is different from open-source Engine 1.12 swarm-mode, where it will only deploy on a single node, because UCP offers full backwards compatibility (using the parallel Swarm 1.x cluster, as described above). Note that you will have to use the Compose v2 format in order to deploy across multiple hosts, as the Compose v1 format does not support multi-host deployment.
 
For the built-in HTTP routing mesh, which external LBs are supported? Nginx, HAProxy, AWS EC2 Elastic LB? Does this work similarly to what Interlock was doing?
The experimental HTTP routing mesh (HRM) feature is focused on providing correct routing between hostnames and services, so it will work across any of the above load balancers, as long as you configure them appropriately for this purpose.
The HRM and Interlock LB/SD feature sets provide similar capabilities but for different application architectures. HRM is used for swarm-mode based services, while Interlock is used for non-swarm-mode “docker run” containers.
For more information on these features, check out our blog post on DDC networking updates and the updated reference architecture linked within that post.
 
Will the HTTP routing mesh feature be available also in the open source free version of the docker engine?
Docker Engine 1.12 (open-source) contains the TCP-based routing mesh, which allows you to route based on ports. Docker Datacenter also provides the HTTP routing mesh feature which extends the open-source feature to allow you to route based on hostnames.
 
What is “docker service” used for and why?
A Docker service is a construct within swarm-mode that consists of a group of containers (“tasks”) from the same image. Services follow a declarative model that allows you to specify the desired state of your application: you specify how many instances of the container image you want, and swarm-mode ensures that those instances are deployed on the cluster. If any of those instances go down (e.g. because a host is lost), swarm-mode automatically reschedules them elsewhere on the cluster. The service also provides integrated load balancing and service discovery for its container instances.
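A minimal sketch of that declarative flow (the names are placeholders):

```
# Declare a desired state of three replicas; swarm-mode schedules them.
docker service create --name api --replicas 3 nginx

# Change the desired state; the cluster converges to five running tasks.
docker service scale api=5
docker service ps api   # shows where the tasks were placed
```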
 
What type of monitoring of host health is built in?
The new swarm-mode in Docker Engine 1.12 uses a RAFT-based consensus algorithm to determine the health of nodes in the cluster. Each swarm manager sends regular pings to workers (and to other managers) in order to determine their current status. If the pings return an unhealthy response or do not meet the latency minimums for the cluster (configurable in the settings), then that node might be declared unhealthy and containers will be scheduled elsewhere in the cluster. In Universal Control Plane (UCP), the status of nodes is described in detail in the web UI on the dashboard and Nodes pages.
 
What kind of role based access controls (RBAC) are available for networks and load balancing features?
The previous version of UCP (1.1) had the ability to provide granular label-based access control for containers. We’ve since expanded that granular access control to include both services and networks, so you can use labels to define which networks a team of users has access to, and what level of access that team has. The load balancing features make use of both services and networks so will be access controlled through those resources.
 
Is it possible to enforce a policy that allows production to run only containers whose images are signed in DTR?
Yes, you can accomplish this using a combination of features in the new version of Docker Datacenter. DTR 2.1 contains a Notary server (Docker Content Trust), which allows you to provide your users cryptographic keys to sign images. UCP 2.0 has the ability to run only signed images on the cluster. Furthermore, you can use “delegations” to define which teams must sign the image prior to it being deployed; for example, in a low-security cluster you could allow any UCP user to sign, whereas in production, you might require signatures from the Release Management, Security, and Developer teams. Learn more about running images with Docker Content Trust here.
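As a hedged sketch of the signing side (the DTR hostname and repository are placeholders), enabling Docker Content Trust in the client makes pushes signed automatically:

```
export DOCKER_CONTENT_TRUST=1
docker push dtr.example.com/engineering/api:1.0   # the image is signed as it is pushed
```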
 
As a very large enterprise doing various POCs for Docker, one of the big questions is vulnerabilities in the open source code that can be part of the base images. Is there anything that Docker is developing to counter this?
Earlier this year, we announced Docker Security Scanning, which provides a detailed security profile of Docker images for risk management and software compliance purposes. Docker Security Scanning is currently available for private repositories in Docker Cloud and coming soon to Docker Datacenter.
 
Is there any possibility to trace which user is accessing a container?
Yes, you can use audit logging. To provide auditing of your cluster, you can utilize UCP’s Remote Log Server feature. This allows you to send system debug information to a syslog server of your choice, including a full list of all commands run against the UCP cluster. This would include information such as which user attempted to deploy or access a container.
 
What checks does the new DDC have for potential noisy neighbor container scenarios, or for rogue containers that can potentially hog the underlying infrastructure?
One of the ways you can provide a check against noisy neighbor scenarios is through the use of runtime resource constraints. These allow you to set limits on how much system resources (e.g. cpu, memory) that any given container is allowed to use. These are configurable within the UI.
 
Do you have a trial license for Docker Datacenter?
We offer a free 30-day trial of Docker Datacenter. Trial software can be accessed by visiting the Docker Store: www.docker.com/trial
 
For pricing, is a node defined as a host machine or a container?
The subscription is licensed and priced on a per node per year basis. A node is anything with the Docker Commercially Supported (CS) Engine installed on it. It could be a bare metal server, cloud instance or within a virtual machine. More pricing details are available here.
 
More Resources:

Request a demo of the latest Docker Datacenter
See What’s New in Docker Datacenter
Learn more by visiting the Docker Datacenter webpage
Sign up for a free 30 day trial


The post What’s New in Docker Datacenter with Engine 1.12 – Demo Q&A appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

OpenShift Commons Briefing #54: DevSecOps: Security Injection with SecurePaaS on OpenShift

In this briefing, Derrick Sutherland of Shadow-Soft addresses cyber security concerns in a DevOps world and demonstrates how SecurePaaS on OpenShift, automatically and without developer intervention, introspects, federates, and injects identity, authentication, authorization, and auditing (IAAA) into an application’s source code, uniquely protecting IT assets.
Source: OpenShift

IBM wins Frost & Sullivan 2016 Cloud Company of the Year award

Market research firm Frost & Sullivan has conferred its 2016 Cloud Company of the Year award to IBM, citing hybrid integration and affordability as major factors.
Lynda Stadtmueller, Vice President of Cloud Services for Stratecast/Frost & Sullivan, explained that the IBM Cloud platform was chosen because it “supports the concept of ‘hybrid integration’ — that is, a hybrid IT environment in which disparate applications and data are linked via a comprehensive integration platform, allowing the apps to share common management functionality and control.”
The capabilities she noted enable Bluemix users to tap into analytics functionality and Watson.
Stadtmueller continued: “IBM Cloud offers a price-performance advantage over competitors due to its infrastructure configurations and service parameters — including a bare metal server option; single-tenant (private) compute and storage options; granular capacity selections for processing, memory, and network for public cloud units; and all-included technical support.”
IBM VP of Cloud Strategy and Portfolio Management Don Boulia said the award “recognizes the extraordinary range and depth of IBM’s cloud services portfolio.”
Other IBM capabilities Frost & Sullivan cited were its scalable cloud portfolio, extensive connectivity and microservices.
For more, check out Read IT Quik’s full article.
The post IBM wins Frost & Sullivan 2016 Cloud Company of the Year award appeared first on news.
Source: Thoughts on Cloud

Total Cost of Ownership: AWS TCO vs OpenStack TCO Q&A

Last month, Amar Kapadia led a lively discussion about the Total Cost of Ownership of OpenStack clouds versus running infrastructure on Amazon Web Services. Here are some of the questions we got from the audience, along with the answers.
Q: Which AWS cost model do you use? Reserved? As you go?
A: Both. We have a field that specifies what percentage are reserved, and what discount you are getting on reserved instances. For the webinar, we assumed 30% reserved instances at a 32% discount. The rest are pay-as-you-go.
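For intuition, with those defaults the blended price works out to 0.3 × (1 − 0.32) + 0.7 = 0.904 of the on-demand rate, i.e., roughly a 10% overall discount across the fleet.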
Q: How does this comparison look when considering VMware’s newly announced support for OpenStack? Is that OpenStack support with VMware only with regards to supporting OpenStack in a “Hybrid Cloud” model? Please touch on this additional comparison. Thanks.
A: In general, a VMware Integrated OpenStack (VIO) comparison will look very different (and show a much higher cost) because they support only vSphere.
Q: Can Opex be detailed as per the needs of the customer? For example, if they don’t want an IT/Ops team and datacenter fees included, as the customer would provide their own?
A: Yes, please contact us if you would like to customize the calculator for your needs.
Q: Do you have any data on how Opex changes with the scale of the system?
A: It scales linearly. Most of the Opex costs are variable costs that grow with scale.
Q: What parameters were defined for this comparison, and were the results validated by any third party, or just on user/organization experience?
A: Parameters are in the slide. Since there is so much variability in customers’ environments, we don’t think a formal third party validation makes sense. So the validation is really through 5-10 customers.
Q: How realistic is it to estimate IT costs? Size of company, size of deployment, existing IT staff (both firing and hiring), each of these will have an impact on the cost for IT/OPs teams.
A: The calculator assumes a net new IT/Ops team. It’s not linked to the company size, but rather the OpenStack cloud size. We assume a minimum team size of about 3.5 people and linear growth after that as your cloud scales.
Q: Shouldn’t sparing add more to the cost, as you will need more hardware for high availability (HA)?
A: Yes, sparing is included.
Q: AWS recommends targeting 90% utilization, and if you are at 60%, it’s better to downsize the VM to ensure 90% utilization. In the case of provisioning 2500 VMs with autoscaling, this should help.
A: Great point, however, we see a large number of customers who do not do this, or do not even know what percentage of their VMs are underutilized. Some customers even have zombie VMs that are not used at all, but they are still paying for them.
Q: With the hypothesis that all applications can be “containerized”, will the comparison outcomes remain the same?
A: We don’t have this yet, but a private cloud will turn out to have a much better TCO. The reason is that we believe private clouds can run containers on bare metal while public clouds have to run containers in VMs for security reasons. So a private cloud will be a lot more efficient.
Q: This is interesting. Can you please add replication cost? This is what AWS does free of cost within an availability zone. In the case of OpenStack, we need to take care of replication.
A: I assume you mean for storage. Yes, we already include a 3x factor to convert from raw storage to usable storage, to account for 3-way replication.
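As a worked example using the numbers later in this Q&A: the assumed 1080GB of usable block storage corresponds to roughly 3 × 1080GB = 3240GB of raw capacity once 3-way replication is factored in.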
Q: Just wondering, how secure is the solution you mentioned for a credit card company? AWS is PCI DSS certified.
A: Yes, this solution is PCI certified.
Q: Has this TCO calculator been validated against a real customer workload?
A: Yes, 5-10 customers have validated this calculator.
Q: Do you think that these costs apply to other countries, or is this US-based?
A: These calculations are US-based. Both AWS and private cloud costs could go up internationally.
Q: Hi, thank you for your time in this webinar. How many servers (computes, controllers, storage servers) are you using, and which model do you use for your calculations? Thanks.
A: The node count is variable. For this webinar, we assumed 54 compute nodes, 6 controllers, and 1080GB of block storage. We assumed commodity Intel and SuperMicro hardware with a 3-year warranty.
Q: Can we compare different models, such as AWS vs VMware private cloud/public cloud with another vendor (not AWS)?
A: These require customizations. Please contact us.
The post Total Cost of Ownership: AWS TCO vs OpenStack TCO Q&A appeared first on Mirantis | The Pure Play OpenStack Company.
Source: Mirantis