Helping Developers Simplify Apps, Toolchains, and Open Source

It’s been an exciting four months since we announced that Docker is refocusing on developers. We have spent much of that time listening to you, our developer community, in meetups, on GitHub, through social media, with our Docker Captains, and in face-to-face one-on-ones. Your support and feedback on our refocused direction have been helpful and positive, and we’re fired up for the year ahead!

What’s driving our enthusiasm for making developers successful? Quite simply, it’s in recognition of the enormous impact your creativity – manifested in the applications you ship – has on all of our lives. Widespread adoption of smartphones and near-pervasive Internet connectivity only accelerate consumer demand for new applications. And businesses recognize that applications are key to engaging their customers, partnering effectively with their supply chain ecosystem, and empowering their employees.

As a result, the demand for developers has never been higher. The current worldwide population of 18 million developers is growing approximately 20% every year (in contrast to the 0.6% annual growth of the overall US labor force). Yet, despite this torrid growth, demand for developers in 2020 will outstrip supply by an estimated 1 million. Thus, we see tremendous opportunities in helping every developer to be even more creative and productive as quickly as possible.

But how best to super-charge developer creativity and productivity? More than half of our employees at Docker are developers, and they, our Docker Captains, and our developer community collectively say that reducing complexity is key. In particular, there is an opportunity to reduce complexity stemming from three potential sources:

Applications. Developers want to ship their ideas from code to cloud as quickly as possible. But, while cloud-native microservices-based apps offer many compelling benefits, these can come at the cost of complexity. Orders of magnitude more app components, multiple languages, multiple service implementations – Containers? Serverless functions? Cloud-hosted services? – and more risk increasing the cognitive load on development teams.

Toolchains. In shipping code-to-cloud, developers want the freedom to select their own tools for each stage of their app delivery toolchains, and there is a rich breadth and depth of innovative products to select from. But integrating multiple point products across the toolchain stages of source code management, build/CI, deployment, and others can be challenging. Often, it results in custom, one-off scripts that subsequently need to be maintained, lossy hand-off of app state between delivery stages, and subpar developer experiences.

Open Source. No surprise to the Docker community, an increasing number of developers are attracted by the creativity and velocity of innovation in open source technologies. But development teams often struggle with how to integrate and get the most out of open source components in their apps, how to manage the lifecycle of open source updates and patches, and how to navigate open source licensing dos and don’ts.

And for all the complexities above, development teams are seeking code-to-cloud solutions that won’t slow them down or lock them into any specific tool or runtime environment.

At Docker, we view our mission as helping developers bring their ideas to life by conquering the complexities of application development. In conquering these complexities, we believe that developers shouldn’t have to trade off freedom of choice for simplicity, agility, or portability.

We are fortunate that today there are millions of developers already using Docker Desktop and Docker Hub – rated “Second Most Loved Platform” in Stack Overflow’s 2019 survey – to conquer the complexity of building, sharing, and running cloud-native microservices-based applications. In 2020 we will help development teams further reduce complexity so they can ship creative applications even faster. How? Stay tuned for more this week!

What makes a good Operator?

In 2016, CoreOS coined the term Operator. They started a movement toward a whole new type of managed application that achieves automated Day-2 operations with a user experience that feels native to Kubernetes.
Since then, the extension mechanisms that underpin the Operator pattern have evolved significantly. Custom Resource Definitions, an integral part of any Operator, became stable and gained validation as well as a versioning feature that includes conversion. Also, the experience the Kubernetes community has gained writing and running Operators has reached critical mass. If you’ve attended any KubeCon in the past 2 years, you will have noticed the increased coverage and countless sessions focusing on Operators.
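To make the validation part concrete, here is a minimal sketch of what such an API type can look like when written in Go with kubebuilder-style markers. The Widget resource, its fields, and the bounds are hypothetical; in a real project, controller-gen would translate the markers into the OpenAPI validation schema embedded in the generated CRD.

```go
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// WidgetSpec is the desired state of a hypothetical Widget resource.
type WidgetSpec struct {
	// Replicas is validated by the API server once the generated schema
	// is part of the CRD: values outside 1..10 are rejected at admission.
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=10
	Replicas int32 `json:"replicas"`

	// Size may only be one of the listed values.
	// +kubebuilder:validation:Enum=small;medium;large
	Size string `json:"size"`
}

// WidgetStatus is the observed state, written by the Operator.
type WidgetStatus struct {
	ReadyReplicas int32 `json:"readyReplicas,omitempty"`
}

// Widget is the root object served by the CRD.
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
type Widget struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WidgetSpec   `json:"spec,omitempty"`
	Status WidgetStatus `json:"status,omitempty"`
}
```

Because the schema lives in the CRD itself, invalid objects are rejected by the API server at admission time, before the Operator ever sees them.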
The popularity that Operators enjoy is based on the possibility of achieving a cloud-like service experience for almost any workload, available wherever your cluster runs. Thus, Operators strive to be the world’s best providers of their workload as a service.
But what actually makes a good Operator? The user experience is certainly an important pillar, but it is mostly defined by the interaction between the cluster user running kubectl and the Custom Resources defined by the Operator.
This is possible because Operators are extensions of the Kubernetes control plane. As such, they are global entities that run on your cluster for a potentially very long time, often with wide privileges. This has implications that require forethought.
For this kind of application, best practices have evolved to mitigate potential issues and security risks, or simply to make the Operator more maintainable in the future. The Operator Framework Community has published a collection of these practices: https://github.com/operator-framework/community-operators/blob/master/docs/best-practices.md
These cover recommendations concerning the design of an Operator as well as behavioral best practices that come into play at runtime. They reflect the accumulated experience of the Kubernetes community writing Operators for a broad range of use cases, and in particular the observations the Operator Framework community made while developing tooling for writing and lifecycling Operators.
Some highlights include the following development practices:

One Operator per managed application
Multiple operators should be used for complex, multi-tier application stacks
A CRD can only be owned by a single Operator; shared CRDs should be owned by a separate Operator
One controller per custom resource definition (see the sketch after this list)

As well as many others.
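As a rough illustration of the “one Operator per application, one controller per kind” guidelines, the sketch below wires two independent reconcilers into a single controller-runtime manager, i.e. a single Operator process. It assumes a recent controller-runtime (v0.7 or later signatures) and uses the built-in Deployment and Service kinds as stand-ins for the CRD-backed types a real Operator would own.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// DeploymentReconciler handles exactly one kind; another kind gets its own reconciler.
type DeploymentReconciler struct {
	client.Client
}

func (r *DeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var dep appsv1.Deployment
	if err := r.Get(ctx, req.NamespacedName, &dep); err != nil {
		// The object may already be gone; nothing to do in that case.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// Compare observed state against desired state and act here.
	return ctrl.Result{}, nil
}

// ServiceReconciler is kept separate, following the one-controller-per-kind guideline.
type ServiceReconciler struct {
	client.Client
}

func (r *ServiceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var svc corev1.Service
	if err := r.Get(ctx, req.NamespacedName, &svc); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	return ctrl.Result{}, nil
}

func main() {
	// One manager means one Operator process for the whole managed application.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}

	if err := ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.Deployment{}).
		Complete(&DeploymentReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Service{}).
		Complete(&ServiceReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```

A complex, multi-tier application stack would typically be split across several such Operator processes rather than piling every controller into a single one.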
With regard to best practices around runtime behavior, these are worth pointing out:

Do not self-register CRDs
Be capable of updating from a previous version of the Operator
Be capable of managing an Operand from an older Operator version
Use CRD conversion (webhooks) if you change API/CRDs (see the sketch below)

There are additional runtime practices in the document worth reading (for example: please, don’t run as root).
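On the conversion point above: in a real Operator, the API server calls a conversion webhook (served by the Operator, typically via controller-runtime’s conversion support) whenever an object is requested in a version other than the one it is stored in. The self-contained sketch below only illustrates the underlying hub-and-spoke idea, using a hypothetical Widget with made-up fields rather than the actual webhook machinery.

```go
package main

import "fmt"

// WidgetV1alpha1 is a hypothetical old API version: a bare replicas field.
type WidgetV1alpha1 struct {
	Replicas int32
}

// WidgetV1 is the hypothetical storage ("hub") version: replicas moved under
// Scaling, and a Size field was added.
type WidgetV1 struct {
	Scaling ScalingV1
	Size    string
}

type ScalingV1 struct {
	Replicas int32
}

// ConvertTo upgrades the old spoke version to the hub version.
func (src *WidgetV1alpha1) ConvertTo(dst *WidgetV1) {
	dst.Scaling.Replicas = src.Replicas
	dst.Size = "medium" // default for a field that did not exist in v1alpha1
}

// ConvertFrom downgrades the hub version to the old spoke version,
// dropping what v1alpha1 cannot represent.
func (dst *WidgetV1alpha1) ConvertFrom(src *WidgetV1) {
	dst.Replicas = src.Scaling.Replicas
}

func main() {
	old := WidgetV1alpha1{Replicas: 3}
	var hub WidgetV1
	old.ConvertTo(&hub)
	fmt.Printf("converted to hub version: %+v\n", hub)
}
```

Keeping a single hub version and converting every other version to and from it avoids the pairwise explosion of conversions as the API accumulates versions.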
This list, being a community effort, is of course open to contributions and suggestions. Maybe you are planning to write an Operator in the near future and are wondering how a certain problem would be best solved using this pattern? Or you recently wrote an Operator and want to share some of your own learnings as your users started to adopt this tool? Let us know via GitHub issues or file a PR with your suggestions and improvements. Finally, if you want to publish your Operator or use an existing one, check out OperatorHub.io.