How to Select the Docker Subscription That’s Right for You

On August 31st, 2021, we announced updates to our product subscription tiers. These changes are helping us deliver on our mission to simplify application development and remove complexities for developers, while also providing the security and scale businesses rely on.

With four subscription options (Personal, Pro, Team, and Business), it can be difficult to choose which tier is right for you. To help you pick the best option, we’ve created a Docker Subscription Cheat Sheet. It highlights some of the key differences between the subscription tiers and who each is best suited for.

Docker Desktop is included with every subscription tier

Before you dive into the full document, it’s important to note that Docker Desktop is included with every subscription tier. We recently wrote a blog post covering all the magic behind the scenes of Docker Desktop, in which Docker PM Ben Gotch wrote:

“Great developer tools are magic for new developers and save experienced developers a ton of time. This is what we set out to do with Docker Desktop. Docker Desktop is designed to let you build, share and run containers as easily on Mac and Windows as you do on Linux. Docker handles the tedious and complex setup so you can focus on writing code.” 

Get the Docker Subscription Cheat Sheet now

The Docker Subscription Cheat Sheet is available to view, share, and download here.
The post How to Select the Docker Subscription That’s Right for You appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Docker at Devoxx France and the U.K.

As Docker is a full-remote company, we have employees scattered across the Americas and Europe. We’re building tools to help developers all over the world build software better. As developers, we know that to do this well, we need to be actively involved in the developer community so we encourage our engineers to speak at local conferences.

Some of Docker’s French engineers spoke at Devoxx France this year. The event was held at the Palais des Congrès in Paris from 29 September to 1 October. Devoxx France is the biggest developer conference in France, drawing (before pandemic restrictions) 3,000 attendees and 240 talks and hands-on labs each year. The conference covers topics from Java and its ecosystem to the web, big data, IoT, cloud computing, and software architecture.

Guillaume Lours presented a talk with Jérémie Drouet about Dockerfile best practices, which you can watch (in French) below. During the session they split a monolithic application defined by a single Dockerfile into a microservices stack by applying Dockerfile improvements such as layer ordering and multi-stage builds.

Yves Brissaud presented a talk about building and using Cloud Native Application Bundles (CNAB) with Porter. You can watch his talk (in French) below and browse the slides in English. The talk starts with a short definition of Cloud Native and why Application Bundles can help improve application packaging and deployment. Two demos of CNAB using Porter were shown, and you can find all the materials to reproduce them here.

More recently, Devoxx UK took place between the 1st and the 3rd of November. The conference used a hybrid format, with both in-person and online sessions and audiences.

Guillaume Lours was part of the lineup to show Docker Dev Environments in action. The presentation shows you how to create a Dev Environment from a simple copy/paste of a Git repository URL, and how to use Dev Environments to manage PR reviews or test your teammates’ work in progress. The slide deck is available here, and the video of the talk should be available soon on the Devoxx YouTube channel.

If you’d like to help us build tools that developers love, take a look at our careers page where we have a lot of open positions.

The post Docker at Devoxx France and the U.K. appeared first on Docker Blog.

ROI of Docker Desktop vs. DIY: Considerations, Risks, and Benefits for Business

Docker simplifies application development and removes complexities for developers. This allows software teams to accelerate their productivity and spend more time on delivering value that’s core to their business. One of the ways we do this is by providing a magically simple developer experience with Docker Desktop.   

We wrote about the magic behind the scenes of Docker Desktop back in September. In that blog, Ben Gotch outlined many of the ways Docker Desktop handles the tedious and complex setup for developers, allowing them to focus their time on writing code.

But this brings up a good question: How can businesses evaluate the ROI of purchasing a Docker subscription with Docker Desktop vs. trying to build their own DIY (do-it-yourself) alternative? 

In short, businesses and software leaders should consider a number of factors when evaluating ROI.      

Solving for Speed and Innovation

Every organization is exploring ways to accelerate its software innovation. Why? Because software innovation leads to better business results. One study showed that companies that excel at delivering software innovation grow their revenue 4 to 5 times faster than their peers.

But creating innovative software is complex and difficult. Businesses have to balance a number of competing priorities, including developer velocity, team culture, tooling, budgets, and more. One of the most common decisions businesses face in driving innovation is how to ensure developers have tools that simplify their work and enable them to create value, rather than spending time on work that isn’t core to the business and only adds complexity.

Build vs. Buy Considerations

Companies spend a tremendous amount of money on technology every year for infrastructure, security, IT services, and software. The global pandemic and the shift to remote work have only accelerated this spending, with companies investing $793 million in enterprise software alone in 2021. They are predicted to spend significantly more on enterprise software in 2022.

These figures indicate that most businesses have a strong preference for buying commercial software rather than building their own. However, there are scenarios where a DIY approach with OSS makes sense and can even offer some advantages. Let’s look at a few key factors to consider when making a build vs. buy decision for commercial software:

Cost of Time
Opportunity Cost
Time to Value
Cost of Security Risks
When DIY with OSS Makes Sense

Cost of Time

One of the most common ways an enterprise evaluates a software purchase is the cost of time. Here’s a simplified example comparing the cost of a Docker Business subscription (which includes Docker Desktop) with a developer’s time to build a DIY alternative:

Assume a developer makes $100,000 per year. That’s about $50 per hour. The cost of a Docker Business subscription is $21 per user, per month for a fully packaged developer experience tool. 

If a developer spends only 1 hour per month building and maintaining an alternative solution, you’re already at a loss of $29 per developer per month ($50 of developer time vs. the $21 subscription).

If you multiply that out across a large team of 25,000 developers, that’s $29 * 12 months * 25,000 developers = $8.7 million lost by taking the DIY approach compared with purchasing subscriptions for Docker Business.      
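The arithmetic above can be sketched as a quick back-of-the-envelope script (the salary, working hours, team size, and prices are the illustrative figures from this example, not real quotes):

```python
# Back-of-the-envelope "cost of time" comparison using the figures above.
HOURS_PER_YEAR = 2000                      # rough full-time working hours per year
salary = 100_000                           # developer salary, USD/year
hourly_rate = salary / HOURS_PER_YEAR      # 50.0 USD/hour
subscription = 21                          # Docker Business, USD per user per month

diy_hours_per_month = 1                    # time spent maintaining a DIY alternative
monthly_loss = diy_hours_per_month * hourly_rate - subscription   # 29.0 USD

developers = 25_000
annual_loss = monthly_loss * 12 * developers                      # 8,700,000 USD

print(f"Monthly loss per developer: ${monthly_loss:.0f}")
print(f"Annual loss across {developers:,} developers: ${annual_loss:,.0f}")
```

Plugging in your own salary, maintenance hours, and team size shows how quickly small monthly losses compound at scale.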

Opportunity Cost

Another way businesses evaluate ROI is opportunity cost, or the value of what you could have created instead to drive revenue growth. When organizations consider a DIY approach, it’s important to ask whether it will distract developers from solving core business problems that have a greater impact on the bottom line. Development teams need to be able to focus on delivering value to customers, and one way to do that is by offloading any undifferentiated heavy lifting.

Here’s an example from Gartner, again using Docker Business: “If you are looking at alternative solutions, you must include the opportunity cost of using this solution for your engineering resources. For example, a 100-seat annual subscription to Docker Business without any discounts is currently $25,200. Supporting 100 seats with an open-source alternative is likely to significantly exceed this cost due to the level of engineering resources required to maintain the solution. If you decide to pursue open-source alternatives, you must ensure doing so is a worthwhile use of your engineering resources.”

Time to Value

When companies accelerate their pace of innovation, they grow their customer base, outpace their competitors, and deliver better business performance. They also reduce their time to value, or the time it takes to get a return on investment in delivering new innovation. However, if developers are spending time on DIY software projects that aren’t core to the business, it can have a big impact on time to value and ROI.

Here’s another way to look at time to value: many software development projects fall outside a development team’s comfort zone and end up over budget and behind schedule. One report estimated that about 19% of software development projects fail, costing roughly $260 billion in losses each year in the US alone, up 46% from 2018. Docker Business and Docker Desktop reduce time to value by enabling developers to focus on delivering innovation that’s core to the business, rather than the business taking on a risky DIY software development project.

The Cost of Security Risks

The cost of security risks to an organization is one of the most difficult to quantify because security breaches can have far-reaching effects beyond lost revenue, including damage to your brand, regulatory fines, the cost of remediation, and more. IBM’s 2021 Cost of a Data Breach Report showed that data breaches cost the surveyed companies $4.24 million per incident on average.  

This cost shouldn’t be overlooked when you’re evaluating a build vs. buy software decision. Docker Desktop, for example, includes a secure, lightweight Linux VM that Docker manages for you. As well as setting up this VM with secure defaults, Docker Desktop keeps the VM and all other components up to date over time by applying kernel patches and other security fixes as required. If you’re considering a DIY approach with OSS and Docker Engine, it’s important to consider whether your software teams and engineering resources are prepared and equipped to keep all the components of a DIY solution updated, and all vulnerabilities patched, over time.

When DIY with OSS Makes Sense

There are scenarios where DIY with OSS makes sense for some businesses. For example, if the commercial software that’s available doesn’t meet the needs of your business’s specific edge cases, it might make sense to look into building your own custom software. This would allow you to have complete control over the roadmap of the software but it could also make you more susceptible to disruptions from developer turnover. 

If you’re considering a build-your-own approach, it’s important to plan for enough people, time, and resources to see the project through to completion, even if it goes over budget and beyond schedule. It’s also important to ensure your team has the appropriate skill set and capacity to handle the complexities that often come with OSS custom software development.  

Another example, again using a Docker Business subscription, is taking the cost approach to figure out when a DIY alternative would be the more cost-effective solution. In this case, the commercial software would need to cost more than $50 per month, the value of 1 hour per month of an engineer’s time at $100,000 per year.
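That break-even point can be expressed as a tiny sketch (using the same illustrative $100,000 salary and 1 hour/month figures from this post; the `diy_is_cheaper` helper is hypothetical, not part of any real tool):

```python
# Break-even sketch: a DIY alternative only wins on cost if the commercial
# subscription costs more than the engineering time it replaces.
hourly_rate = 100_000 / 2000          # $100k/year at ~2,000 hours/year = $50/hour
diy_hours_per_month = 1               # engineer time to maintain the DIY solution
break_even_price = diy_hours_per_month * hourly_rate   # $50 per month

def diy_is_cheaper(subscription_price: float) -> bool:
    """True only when the subscription costs more than the DIY time it saves."""
    return subscription_price > break_even_price

print(diy_is_cheaper(21))   # Docker Business at $21/month: False
print(diy_is_cheaper(60))   # a hypothetical pricier tool: True
```

Note that this only captures the cost of time; the opportunity cost and security considerations above usually push the break-even point even higher.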

The Choice for Better Business Outcomes

Every organization is pursuing better business outcomes, and top performers are leveraging software innovation to make it happen. Inevitably, while managing many competing priorities, software leaders will face a build vs. buy decision at some point. When you take into account the cost of time, opportunity cost, time to value, the cost of security risks, and the cases where DIY with OSS makes sense, the data shows that most organizations are better off buying commercial software than building their own alternative solutions. Offloading the undifferentiated work reduces distractions and enables developers to focus on delivering value to customers.

Docker removes complexities for developers and helps them achieve greater productivity. We are continuing to invest in creating magically simple experiences for developers while also delivering the scale and security businesses rely on. Docker offers subscriptions for developers and teams of every size, including our newest subscription: Docker Business.  

To learn more, be sure to attend, or watch on demand, our Management and Security at Scale with Docker Business webinar on Tuesday, November 16, 2021 at 9am PT.
The post ROI of Docker Desktop vs. DIY: Considerations, Risks, and Benefits for Business appeared first on Docker Blog.

Graphcore Poplar SDK Container Images Now Available on Docker Hub

Graphcore’s Poplar® SDK is available for developers to access through Docker Hub, with Graphcore joining Docker’s Verified Publisher Program. Together with Docker, we’re distributing our software stack as container images, enabling developers to easily build, manage and deploy ML applications on Graphcore IPU systems.

We continue to enhance the developer experience to make our hardware and software even easier to use. Just over a year ago we introduced a selection of pre-built Docker containers for users. Now, we are making our Poplar SDK, PyTorch for IPU, TensorFlow for IPU and Tools fully accessible to everyone in the Docker Hub community as part of our mission to drive innovation.

Here’s more information on what’s in it for developers and how to get started.

Why Docker is so important to our community

Docker has become the primary source for pulling container images – according to the latest Index Report, there’s been a total of 396 billion all-time pulls on Docker Hub. Furthermore, Docker Hub remains one of the “most wanted, loved and used” developer tools based on the 2021 Stack Overflow Survey answered by 80,000 developers.

For IPU developers, our Docker container images simplify and accelerate application development workflows deployed in production on IPU systems by supplying pre-packaged runtime environments for applications built with PyTorch, TensorFlow, or directly with Graphcore’s Poplar SDK. Containerised applications are more portable, with consistent, repeatable execution, and are an important enabler for many MLOps frameworks.

What’s available for developers?

As of today, developers can freely install our Poplar software stack, co-designed with the IPU (Intelligence Processing Unit) specifically for machine intelligence applications. Poplar is Graphcore’s graph toolchain, which sits at the core of our easy-to-use, flexible software development environment. It is fully integrated with standard machine learning frameworks, so developers can easily port existing models, and for developers who want full control to extract maximum performance from the IPU, Poplar enables direct IPU programming in Python and C++ through PopART (the Poplar Advanced Runtime).

Our Poplar SDK images can be pulled via the following repositories:

Poplar SDK – contains Poplar, PopART and tools to interact with IPU devices
PyTorch for IPU – contains everything in the Poplar SDK repo with PyTorch pre-installed
TensorFlow for IPU – contains everything in the Poplar SDK repo with TensorFlow 1 or 2 pre-installed
Tools – contains management and diagnostic tools for IPU devices

And as part of the Docker Verified Publisher Program, Graphcore container images are exempt from rate limiting — meaning that developers have unlimited container image requests for Poplar, regardless of their Docker Hub subscription.

Getting Started with Poplar on Docker

The Poplar Docker containers encapsulate everything needed to run models on IPUs in a complete filesystem (i.e. Graphcore’s Poplar® SDK, runtime environment, system tools, configs, and libraries). To use these images and run IPU code, you will need to complete the following steps:

1. Install Docker on the host machine
2. Pull Graphcore’s Poplar SDK container images from Docker Hub
3. Prepare access to IPUs
4. Verify IPU accessibility with a Docker container
5. Run sample app code on IPUs

Install Docker on the host machine

Docker installation varies based on operating system, version, and processor. 

You can follow Docker’s getting started guide.

Pull Graphcore’s Poplar SDK container images from Docker Hub

Once Docker is installed, you can run commands to download our hosted images from Docker Hub and run them on the host machine. The Poplar SDK container images can be pulled from the Graphcore Poplar repository on Docker Hub.

There are four repositories, each of which may contain multiple images based on the SDK version, OS, and architecture.

graphcore/pytorch
graphcore/tensorflow
graphcore/poplar
graphcore/tools

Pulling from a framework repo downloads the latest version of the SDK compiled for an AMD host processor by default.

To pull the latest TensorFlow image use:

$ docker pull graphcore/tensorflow

If you want a specific build for a particular SDK version and processor, you can select it using the tags listed under Docker Image Tags.

Prepare Access to IPUs

To talk to the IPUs in PODs, we must configure the connection between the host machines and the IPUs – the IPU over Fabric (IPUoF). The information that Poplar needs to access devices can be passed via an IPUoF configuration file which is, by default, written to a directory in your home directory (~/.ipuof.conf.d). The configuration files are useful when the Poplar hosts do not have direct network access to the V-IPU controller (for security reasons, for example). 

If you are using Graphcloud, the IPUoF default config file is generated every time a new user is created and added to the POD. Check if there are .conf files inside that folder (e.g. ~/.ipuof.conf.d/lr21-3-16ipu.conf). If you have this setup, you can proceed to the next step.

If not, you will need to configure Poplar to connect to the V-IPU server by following the V-IPU Guide: Getting Started. Be sure to save your IPUoF config file in the ~/.ipuof.conf.d folder to run the scripts in the next section.

Verify IPU accessibility with Docker container

Now that you have the container ready, you can check if the IPU is accessible from inside the container.

List the IPU devices within the context of the container by running the following:

$ docker run --rm --ulimit memlock=-1:-1 --net=host --cap-add=IPC_LOCK --device=/dev/infiniband --ipc=host -v ~/.ipuof.conf.d/:/etc/ipuof.conf.d -it graphcore/tools gc-info -l

Running a sample TensorFlow app

First, get the code from the Graphcore tutorials repository in GitHub.

$ git clone https://github.com/graphcore/tutorials.git

$ cd tutorials

The Docker container is an isolated environment. It starts empty and does not have access to the host machine’s filesystem. To use data from your host machine, the data needs to be made accessible within the Docker container.

You can make the data accessible by mounting directories as volumes to share data between the host machine and the Docker container environment. 

A common pattern when working with a Docker-based development environment is to mount the current directory into the container (as described in Section 2.2, Mounting directories from the host), then set the working directory inside the container with -w <dir name>. For example, -v "$(pwd):/app" -w /app.

To run the MNIST example in the TensorFlow container, use the following command, which mounts the tutorials repo into the Docker container and runs it:

$ docker run --rm --ulimit memlock=-1:-1 --net=host --cap-add=IPC_LOCK --device=/dev/infiniband --ipc=host -v ~/.ipuof.conf.d/:/etc/ipuof.conf.d -it -v "$(pwd):/app" -w /app graphcore/tensorflow:2 python3 simple_applications/tensorflow2/mnist/mnist.py

This guest blog post originally appeared here.
The post Graphcore Poplar SDK Container Images Now Available on Docker Hub appeared first on Docker Blog.

Docker’s Developer Community: Wind In Our Sails

Two years ago, in November 2019, we refocused our company on the needs of developers. At the time, we recognized the growing adoption of microservices, the explosion in the number of tools, and the many opportunities to simplify these complexities. Little did we know that within months the world would face a global pandemic and economic recession, and that our company would quickly shift to 100% virtual, work-from-home operations. We also didn’t anticipate how the pandemic would increase the demand for application development, further accelerating the importance of developer productivity.

And throughout all these ups and downs the Docker team stayed the course. I thank them for their unwavering focus on serving our developer community. And I thank our Docker Captains, our community leaders, our ecosystem partners, our customers and the wider Docker community for their feedback, loyalty, and trust. Your support these last two years enabled us to not just survive, but to thrive.

Delivering Collaboration, Speed, and Security for Developers

The global tragedy of the pandemic accelerated digital initiatives in every industry, while the “new normal” of work-from-home raised the importance of collaboration within “virtual-first” development teams. In response, to simplify the complexities of modern app development facing these teams, we shipped Docker Dev Environments. With it, team members can quickly and easily share reproducible development environments with each other, enabling them to spend more time writing code instead of installing tools, setting environment variables, and untangling dependencies.

This “need for speed” triggered by the pandemic is somewhat addressed by the decade-long rise in custom silicon. Yet how can developers take advantage of custom silicon performance without re-writing their applications or learning unfamiliar tools? Since our re-focusing in November 2019 we’ve shipped new capabilities that address this challenge, including support for Arm-based Apple M1 silicon in Docker Desktop, NVIDIA GPU support in Docker Desktop and Docker Compose-to-AWS, and RISC-V support in BuildKit. In doing so, we’re helping developers to get the speed benefits of custom silicon “for free” without having to make changes to their app or learn a new toolchain.

Sadly, the rise in online activity catalyzed by the pandemic is attracting the attention of criminals, resulting in an increase in the frequency and sophistication of attacks on organizations’ software supply chains. For this challenge, given the heterogeneity of customers’ environments, Docker and our partners have taken a multi-vendor, open-standards-based approach. This includes delivering CNCF-based Notary v2 for digital signatures, Docker Hub-interoperable container registry partnerships with AWS, Mirantis, and JFrog, and trusted content partnerships with Canonical, Red Hat, VMware, and other leading commercial ISVs.

The results? More development teams than ever are using Docker as the fastest, most secure way to build, share, and run modern applications. In fact, since our refocusing on developers two years ago our community has grown to 15.4 million monthly active developers sharing 13.7 million apps at a rate of 14.7 billion pulls per month. Moreover, for the second year in a row Stack Overflow’s Developer Survey ranked Docker as the #1 most wanted development tool, and JetBrains’ annual survey rated Docker Compose as the most popular container development tool, used by 58% of respondents.

We’re Just Getting Started

While we’re humbled by the results of these first two years, we also believe we’re just getting started; there’s still so much to do. In particular, the pandemic-triggered demand for apps is accelerating the demand for developer talent: demand for developers is growing 8X faster than the average demand for other occupations, with the developer population projected to reach 45 million by the end of this decade. What are the implications for our industry and for Docker?

First, to state the obvious, more developers means more apps. And whether it’s the modernization of traditional apps or the creation of new, Kubernetes-destined ones, both mean increased complexity. The result is a larger software supply chain attack surface for criminals to target.

With our presence at both ends of the supply chain, Docker is uniquely positioned to help. As the source of trusted content at the beginning of the supply chain – Docker Official Images, Docker Verified Publisher images from commercial ISV partners, and Docker-sponsored open source projects – developers can trust the foundational building blocks of their apps from the start. At the other end of the supply chain, in Docker Desktop, we’re providing tools for developers to discover trusted content, verify its integrity, and ensure its ongoing freshness. And for their managers, we provide SaaS-delivered visibility and controls for images pulled from registries as well as created within Docker Desktop. This allows development teams to ship quickly and safely, with no need to trade one off for the other.

Second, increasingly developers want the freedom to choose best-of-breed tools in order to take advantage of the latest innovations in app development. In working with ecosystem partners on open standards like OCI, compose-spec, CNCF Distribution, and others, we’ve delivered innovation for developers such as Compose-to-cloud tooling for AWS and Azure, image build automations with GitHub and Bitbucket repos and GitHub Actions, image vulnerability scanning with Snyk, and more.

Going forward, so as to provide developers even more choice in ecosystem partner tools we are making it even easier for partners to integrate with Docker tools, services, and content by expanding the breadth and depth of our product interfaces. Around this we are providing discovery and assurance services so that developers know these ecosystem partner integrations are safe, maintained, and supported. Furthermore, our SaaS-delivered management plane gives managers visibility into integration usage and a means to set and enforce integrations policy. This combination gives developers the freedom to choose their tools, safely.

Third, every day more developers join the hundreds of thousands of Docker community members freely sharing their time, expertise, and joy in using Docker for app development. And whether it’s our public roadmap, awesome-compose contributions, community Slack, community meetups, or our 80,000-participant DockerCons, you’ll find a friendly, enthusiastic crowd. Moreover, it’s a community that welcomes newcomers just getting started, and thus plays an important role in sustainably scaling Docker adoption as tens of millions more new developers join the community in the years to come.

But besides stars and reviews on image repos on Docker Hub, the wellspring of community recommendations, best practices, and cool hacks isn’t accessible directly in the product to other community members. Wouldn’t it be cool if, for example, the Dockerfile optimization someone in your company a continent away just figured out was automatically made visible and available to everyone in your company working with a similar image? We think so, too! We see so much potential in enabling community members to help and learn from each other, and we can’t wait to share more with you.

Fair Skies Ahead

In these last two years since our refocusing on developers, we’re humbled by the non-stop growth in the Docker community, the enthusiastic feedback and adoption of the new features we’ve shipped, and the positive support for the business changes we’ve made to enable us to sustainably scale Docker to tens of millions more developers. And with the accelerating demand for new apps and more developers to build, share, and run them, we’re incredibly excited about the next chapters in our journey together with you!

Thank you, and let’s keep shipping!

sj

PS – We can’t wait to share details of the above and more with you at next year’s DockerCon on Tuesday, May 10, 2022. Save the date, it’s gonna be a blast! Register today at https://www.docker.com/dockercon/

The post Docker’s Developer Community: Wind In Our Sails appeared first on Docker Blog.

Docker Desktop 4.2 Release: Save Your Battery with Pause / Resume, and Say Goodbye to the Update Pop-up

With Docker Desktop 4.2 we’re excited to introduce Pause / Resume as well as a host of changes to make it easier for you to manage updates. These features are available to Docker Desktop users on any subscription tier.

Save your battery with Pause / Resume

Pause / Resume gives developers the power to pause their current Docker Desktop session and resume work whenever they want, saving resources on their machine while Docker is paused. When you pause Docker Desktop, the current state of your containers is saved in memory and all processes are frozen. This lowers CPU usage and helps save your laptop’s battery life. To resume Docker Desktop, click the Resume button in the menu or type any Docker CLI command in your terminal.

To try out this feature on Docker Desktop 4.2, navigate to the whale menu and click the ‘Pause’ button. The corresponding content sections for Docker Desktop’s left sidebar items (e.g. Containers) will then be covered to clearly denote the Pause state. 

Please note, Pause / Resume is currently not available in Windows container mode.

Say goodbye to the update pop-up

We’ve heard your feedback that the update modal interrupts your workflows and makes it challenging to use Docker Desktop when you need it most. That’s why we’ve done away with the update pop-up and introduced a new update settings section in the Docker Dashboard, where you can check for updates and manage your update preferences. We appreciate it when our users stay up to date so they get all the latest bug fixes and new features, but we want to make sure they can do that at a time that’s convenient.

To summarize what the experience will be like on Docker Desktop 4.2:

An update badge will appear on the Docker Dashboard settings icon, making it a seamless experience without interrupting your workflow with a modal. Here you can also manage your software update settings.

The `Automatically check for updates` setting is now available for all Docker subscription tiers 

Thanks to all of your positive support for the Docker subscription updates, we’ve been able to focus on delivering more value to all users. In Docker Desktop 4.2 we’ve enabled all users, regardless of subscription tier, to turn off automatic update checks. When you disable this setting, all update notifications in the whale menu and the app are disabled, and you will have to check for updates manually. Just update to Docker Desktop 4.2 to start using this feature!

We also know that people have different preferences when it comes to downloading updates. For some, background downloads can take a lot of bandwidth, and they don’t want them to start when they’re busy at work or on Zoom calls; others would rather have less intervention when it comes to updating. That’s why we’ve put the choice in your hands to decide whether updates should download automatically or not.

We’re considering introducing more settings in the future and would love to know what you think. Let us know on our public roadmap item!

Coming soon

All of the changes described above are available in 4.2 to all Docker Desktop users, including those on Docker Personal. 

We’re also working on two of your highest voted items from our public roadmap: improving Mac filesystem performance, and implementing Docker Desktop for Linux, so watch this space for more news on those in the coming months. And we would love to know what other improvements you would like to see, so please add a thumbs-up emoji to your highest priorities on the roadmap, or create a new roadmap ticket if your idea isn’t already there.
The post Docker Desktop 4.2 Release: Save Your Battery with Pause / Resume, and Say Goodbye to the Update Pop-up appeared first on Docker Blog.

Save the Date: Next Community All Hands on December 9th

We’re one month away from our next Community All Hands event, on December 9th at 8am PST/5pm CET. This is a unique opportunity for Docker staff, Captains, and the broader Docker community to come together for live company updates, product updates, demos, community shout-outs and Q&A. The last all-hands gathered more than 2,000 attendees from around the world and we hope to double that this time around. 

The theme for this edition is Innovation and Experimentation, so bring your most innovative projects to share with the community. We will be presenting our latest developments and some exciting demos to experiment with. 

Here are five reasons you should attend the next Community All Hands:

1. Get the inside scoop on Docker’s latest product developments. You’ll get the opportunity to hear directly from the product team about the latest developments, including product strategy, new features, and the roadmap.
2. Meet the Docker team. We’ll be there to answer your questions, hear your feedback, and engage in lively discussion about the latest developments in the Docker community.
3. Work for Docker. The Docker team is growing fast and we’re looking for people who are passionate about building great software. You’ll have the opportunity to learn more about the Docker team and the open positions we’re hiring for.
4. Get inspired by the community. We’ll be showcasing some of the most innovative projects that are being built with Docker. You’ll have the opportunity to learn about what others are doing with Docker and get inspired for your own projects.
5. Get your hands on some cool swag. We’ll have some awesome Docker gear for you to take home. Stay tuned for more details!

Our Call for Speakers will be open until November 19th. If you would like to submit a proposal to speak, please visit our Call for Speakers page.

Continuous Previews (CP): Don’t Merge Until You Preview

Docker’s Peter McKee sits down with Uffizzi co-founders Grayson Adkins, Head of Product, and Josh Thurman, Head of Developer Relations, for a Q&A on the CP method.

Check out the live stream from August 26th for Docker Build: Enabling Full-Stack Continuous Previews with Uffizzi

(The below transcript has been edited for clarity and format)

Peter McKee: It’s great having you guys back on the Live Show. I love the idea of Continuous Previews. Tell me, what brought you to the concept?

Grayson Adkins: The kernel of the idea (no pun intended) came from our own challenges in trying to develop our product faster. We kept observing two problems that were hindering forward progress. On the development side, we were too often merging broken features, and then we would have all this commingled code; finding where the bug was introduced and fixing it was really hindering progress. The returned tickets were always holding us back: the classic two steps forward, one step back.

On the product side, I do all our design work, and I kept finding that the design and requirements would need to be modified as I saw the product come to life. And when I would ask “Can I see it yet?” there was no easy answer. For me to see a new feature and provide feedback, I would have to wait until the branch was merged and deployed to our QA environment. Then the feedback process would start and, as tickets were returned, they’d go back through this big loop of “In Development”, then merge and deploy, then I could check it again. There was too big a gap between Development and Product.

At the time, Uffizzi’s product direction was moving toward making it easy for developers to deploy their apps to the cloud, and once we realized we could fix the “dirty code in main” and “can I see it yet” problems with the same process, I realized we could repurpose what we had built to be a Preview Engine for full-stack and microservices applications: something where you could “Continuously Preview” what’s in your feature pipeline.

Josh Thurman: There are two other big-picture elements that brought this into focus for us. Everyone has seen the success of the frontend Preview tools from static site platforms like Netlify and Vercel. Their success taught us that making it easy to Preview was extremely valuable for teams, but no one had really formalized the concept and the supporting technology for more complex applications. So we realized that the Preview concept shouldn’t just apply to frontends; it should apply to everything we build.

The other major factor was observing how the complexity of microservices architectures and remote work were making it harder to collaborate, particularly as teams and app ecosystems grow in size and complexity. So the Continuous Previews concept is really about removing barriers to collaborating early and often; it’s about taking away all the barriers between the developer coding the feature and the team member who is reviewing or approving it.

At a high level, Continuous Previews equals continuous collaboration: you have a process and technology that facilitate a collaborative culture, and that’s really when you start to see major improvements in team velocity, cycle times, lead times, and code stability.

Peter McKee: I’ve read the CP Manifesto ( www.cpmanifesto.org / https://github.com/UffizziCloud/Continuous_Previews_Manifesto ), which outlines the principles of CP. What inspired you to write and publish it?

Josh Thurman: It was clear to us that CP is so much bigger than any tool or service, and we also recognize that it’s easy to confuse tools or products with concepts. So we took a cue from the Agile Manifesto and decided that the CP concept and its underlying principles needed to be out there for everyone to benefit from and to understand the “Why?”.

Grayson Adkins: The Agile Manifesto is the governing document for how to build software as a team, and we see CP as a best practice nested within the Agile umbrella.

Peter McKee: Reading the CP Manifesto and talking to you a few times about it, I have a good sense of what CP is at a strategic level, but tell me about the tactical level. How does it impact teams on a day-to-day basis?

Grayson Adkins: Most teams follow similar development workflows that align with the Agile loop of Design, Develop, Integrate, Test, and Deliver. When you’re following this process, a developer picks up a ticket, checks out a branch, and starts developing. Once they think they have met the requirements, they indicate that their ticket is ready to be merged by opening a PR.

Take this process and multiply it by the number of developers working on different features, and you end up with some number of features that are ready to be merged. At some interval you execute CI (Continuous Integration), and now you have a batch of new features in your QA environment where, for the first time in this process, they are ready to be reviewed by someone other than the developer who coded the feature.

The problem with this process is that you are bringing QA in way too late in the game. Once a feature has been merged and deployed, it’s 10x harder to find and fix a bug than it is to catch the issue early and address it at the feature branch level, where it’s relatively easy to find and fix.

When a developer pushes new code using the CP method, it can be QA’d without merging, and without the developer having to context-switch out of their Git workflow. The Preview has eliminated the barrier to collaboration, and your iterative process of Develop, Preview, Develop, Preview is happening faster, and it’s also happening earlier in the development process.

In a more literal sense, you are bringing QA into the development process, where traditionally it has served as an add-on function.

Josh Thurman: I’m glad you said “teams” generically as opposed to saying Development Team or Product Team. Even though we see most organizations task-organize into a Dev Team and a Product Team, I think that construct can be a hindrance to collaboration. CP is a process for “The Team”, meaning everyone who contributes to the success of a product.

Peter McKee: You guys have mentioned barriers to collaboration. What exactly are those barriers that teams face now?

Josh Thurman: Good question. The cleanest way to have your application readily accessible to everyone on your team for testing is to have it deployed to the cloud and reachable via a secure URL endpoint. I should emphasize here that the endpoint could be for testing your full-stack application or any of its individual components, like an API or a frontend element, as examples.

To deploy a version of your application you need a hosting environment, and setting that up, particularly for a complex or microservices-based application, is no easy task. Most teams have persistent environments like Test, QA, Staging, and Production that are maintained by your DevOps and Ops team members. Setting up a new environment just to test one feature branch is a significant barrier. The CP method calls for on-demand hosting environments with a purpose-driven lifecycle: they are stood up to Preview and Test a feature branch, and they are taken down after that feature is merged. This is where technology comes in, and this is what Uffizzi does so well. Uffizzi provides teams with an off-the-shelf solution that completely automates on-demand hosting environments, with policy-based triggers that facilitate the preview. You can have as many test environments as you need, when you need them, for as long as you need them. And I’ll just add that with this method you also remove a single point of failure: no more having a persistent QA or Test environment go down and then spending most of a day fixing it.
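To make the on-demand lifecycle concrete, it can be sketched as a CI workflow that creates a preview environment when a pull request opens and tears it down when it closes. The snippet below is a hypothetical GitHub Actions sketch, not Uffizzi’s actual integration; the `deploy-preview.sh` and `destroy-preview.sh` scripts are placeholders for whatever tooling actually provisions and removes the environment:

```yaml
# Hypothetical workflow: one preview environment per pull request.
name: preview

on:
  pull_request:
    types: [opened, synchronize, closed]

jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # Stand up (or update) a preview when the PR is opened or pushed to.
      - name: Deploy preview
        if: github.event.action != 'closed'
        run: ./scripts/deploy-preview.sh "pr-${{ github.event.number }}"

      # Tear the environment down once the feature branch is merged or closed.
      - name: Destroy preview
        if: github.event.action == 'closed'
        run: ./scripts/destroy-preview.sh "pr-${{ github.event.number }}"
```

Keying the environment name to the pull request number is what gives each feature branch its own isolated, disposable preview with a lifecycle matching the PR itself.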

Grayson Adkins: The other barrier is DevOps expertise. Just as one example, it’s a significant jump in complexity to go from a 20+ line Docker Compose file to a 1000+ line YAML file. DevOps expertise is often its own bottleneck within an organization, and if you can reduce the demands on it with an automated solution, that’s a big win. To follow best DevOps practices, we’ve centered the product around infrastructure-as-code through a uffizzi-compose.yml that is syntactically the same as docker-compose.yml (based on version 3.9) but with some additional inputs.
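For readers unfamiliar with the format Grayson mentions, here is a minimal illustrative docker-compose.yml at version 3.9; the service names and images are placeholders, and the Uffizzi-specific additional inputs are not shown since they aren’t specified here:

```yaml
# Illustrative docker-compose.yml (version 3.9). Per the interview, a
# uffizzi-compose.yml is syntactically the same, with some extra inputs.
version: "3.9"

services:
  web:
    image: example/frontend:latest   # placeholder image
    ports:
      - "8080:80"
    depends_on:
      - api

  api:
    image: example/api:latest        # placeholder image
    environment:
      DATABASE_URL: postgres://db:5432/app

  db:
    image: postgres:13
```

Because the preview format mirrors Compose, a team’s existing local development definition can double as the infrastructure-as-code input for its preview environments.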

But we also have a GUI and the concept of Templates that are meant to increase accessibility. The CP concept is something that can benefit teams of all sizes and skill levels, and we want to act as a DevOps force multiplier in those cases.

Peter McKee: Speaking of barriers, what do you see as the biggest hurdle to teams adopting a CP methodology?

Josh Thurman: For a method to take off, it needs to be paired with a technology that is easy enough to use that it really starts to proliferate. Again, to reference what Netlify and Vercel have done: they made it simple enough that frontend Previews have really taken off in the last few years.

Of course, this is what we are trying to do with Uffizzi in the context of full-stack and microservices applications. We are offering a very specific tool that makes it easy to implement CP by abstracting away the complexity of managing hosting environments and deployments.

Grayson Adkins: I think the other issue is overcoming the inertia of “good enough”: everyone is getting the job done now, so why do they need a new way of doing business? This is the same hurdle with every new process and technology. Before Agile, Waterfall was good enough. Before container orchestration, VMs (virtual machines) were good enough. So I think we’ll go through an early adopter period, and eventually you get to a point where the competitive advantage is such that organizations have to implement CP or risk getting left behind.

Peter McKee: As we wrap up, what can we look forward to from Uffizzi? What’s on the roadmap?

Grayson Adkins: We’ve got some exciting new features rolling out. In the near term we are expanding our integrations, both with image registries and with collaboration software. We’re currently rolling out integrations with image registries at all the major cloud providers: AWS, Azure, and GCP. Then over the next few months we’ll add GitLab and Bitbucket for repos, and then Slack, Jira, Microsoft Teams, and Asana on the collaboration side.

Peter McKee: Well, this has been fantastic. I really appreciate your time and your thought leadership. I can see how this is a game-changer for how organizations build and test. I look forward to seeing how CP and Uffizzi continue to benefit the dev community.

If you have any Docker-related questions, please feel free to reach out on Twitter to @pmckee and join us in our community Slack.

If you have any Uffizzi-related questions, please feel free to reach out on Twitter to @uffizzi_ and join the Uffizzi users community Slack, where you can find Josh Thurman (@JoshThurman19) and Grayson Adkins (@GraysonAdkins).

You can view the CP Manifesto here: www.cpmanifesto.org

And the open source repos here: UffizziCloud/Continuous_Previews_Manifesto (source for https://cpmanifesto.org)

https://github.com/UffizziCloud/uffizzi_controller

https://github.com/UffizziCloud/uffizzi_app
