Graphcore Poplar SDK Container Images Now Available on Docker Hub

Graphcore’s Poplar® SDK is available for developers to access through Docker Hub, with Graphcore joining Docker’s Verified Publisher Program. Together with Docker, we’re distributing our software stack as container images, enabling developers to easily build, manage and deploy ML applications on Graphcore IPU systems.

We continue to enhance the developer experience to make our hardware and software even easier to use. Just over a year ago we introduced a selection of pre-built Docker containers for users. Now, we are making our Poplar SDK, PyTorch for IPU, TensorFlow for IPU and Tools fully accessible to everyone in the Docker Hub community as part of our mission to drive innovation.

Here’s more information on what’s in it for developers and how to get started.

Why Docker is so important to our community

Docker has become the primary source for pulling container images – according to the latest Index Report, there’s been a total of 396 billion all-time pulls on Docker Hub. Furthermore, Docker Hub remains one of the “most wanted, loved and used” developer tools based on the 2021 Stack Overflow Survey answered by 80,000 developers.

For IPU developers, our Docker container images simplify and accelerate the path from application development to production deployment on IPU systems by supplying pre-packaged runtime environments for applications built with PyTorch, TensorFlow or directly with Graphcore’s Poplar SDK. Containerised applications are more portable, execute consistently and repeatably, and are an important enabler for many MLOps frameworks.

What’s available for developers?

As of today, developers can freely install our Poplar software stack, co-designed with the IPU (Intelligence Processing Unit) specifically for machine intelligence applications. Poplar is Graphcore’s graph toolchain. It sits at the core of our easy-to-use, flexible software development environment, which is fully integrated with standard machine learning frameworks so developers can easily port existing models. For developers who want full control to exploit maximum performance from the IPU, Poplar enables direct IPU programming in Python and C++ through PopART (Poplar Advanced Runtime).

Our Poplar SDK images can be pulled via the following repositories:

- Poplar SDK – contains Poplar, PopART and tools to interact with IPU devices
- PyTorch for IPU – contains everything in the Poplar SDK repo with PyTorch pre-installed
- TensorFlow for IPU – contains everything in the Poplar SDK repo with TensorFlow 1 or 2 pre-installed
- Tools – contains management and diagnostic tools for IPU devices

And as part of the Docker Verified Publisher Program, Graphcore container images are exempt from rate limiting — meaning that developers have unlimited container image requests for Poplar, regardless of their Docker Hub subscription.

Getting Started with Poplar on Docker

The Poplar Docker containers encapsulate everything needed to run models on IPUs in a complete filesystem (i.e. Graphcore’s Poplar® SDK, runtime environment, system tools, configs, and libraries). To use these images and run IPU code, complete the following steps:

1. Install Docker on the host machine
2. Pull Graphcore’s Poplar SDK container images from Docker Hub
3. Prepare access to IPUs
4. Verify IPU accessibility with a Docker container
5. Run sample app code on IPUs

Install Docker on the host machine

Docker installation varies based on operating system, version, and processor. 

You can follow Docker’s getting started guide.

Pull Graphcore’s Poplar SDK container images from Docker Hub

Once Docker is installed, you can run commands to download our hosted images from Docker Hub and run them in the host machine. The Poplar SDK container images can be pulled from the Graphcore Poplar repository on Docker Hub.

There are four repositories, and each may contain multiple images based on the SDK version, OS, and architecture:

- graphcore/pytorch
- graphcore/tensorflow
- graphcore/poplar
- graphcore/tools

Pulling from a framework repo downloads the latest version of the SDK compiled for AMD host processors by default.

To pull the latest TensorFlow image use:

$ docker pull graphcore/tensorflow

If you want to select a specific build for a specific SDK version and processor, you can configure the tags based on Docker Image Tags.
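For instance, the TensorFlow 2 image used later in this post can be pinned explicitly. The exact set of available tags depends on the SDK version, OS and architecture, so treat these as illustrative and check the Docker Image Tags documentation for the definitive naming scheme:

```shell
# Default pull – resolves to the "latest" tag
docker pull graphcore/tensorflow

# Pin a major framework version instead (this tag appears in the run
# command later in this post; other tags may also encode SDK version,
# OS and host processor – see the Docker Image Tags documentation)
docker pull graphcore/tensorflow:2
```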

Prepare Access to IPUs

To talk to the IPUs in PODs, we must configure the connection between the host machines and the IPUs – the IPU over Fabric (IPUoF). The information that Poplar needs to access devices can be passed via an IPUoF configuration file which is, by default, written to a directory in your home directory (~/.ipuof.conf.d). The configuration files are useful when the Poplar hosts do not have direct network access to the V-IPU controller (for security reasons, for example). 

If you are using Graphcloud, the IPUoF default config file is generated every time a new user is created and added to the POD. Check if there are .conf files inside that folder (e.g. ~/.ipuof.conf.d/lr21-3-16ipu.conf). If you have this setup, you can proceed to the next step.

If they are not available, you will need to configure Poplar to connect to the V-IPU server by following the V-IPU Guide: Getting Started. Be sure to save your IPUoF config file in the folder ~/.ipuof.conf.d so that the commands in the next section work.

Verify IPU accessibility with Docker container

Now that you have the container ready, you can check if the IPU is accessible from inside the container.

List the IPU devices within the context of the container by running the following:

$ docker run --rm --ulimit memlock=-1:-1 --net=host --cap-add=IPC_LOCK --device=/dev/infiniband --ipc=host -v ~/.ipuof.conf.d/:/etc/ipuof.conf.d -it graphcore/tools gc-info -l
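These flags are what make the IPUs reachable from inside the container, and they are easy to mistype. The annotated form below is the same invocation broken across lines for readability; the flag descriptions are our reading of standard Docker options, not Graphcore’s documentation:

```shell
#   --ulimit memlock=-1:-1    remove the locked-memory limit (RDMA transfers need this)
#   --net=host                share the host network stack so IPUoF traffic can reach the IPUs
#   --cap-add=IPC_LOCK        allow the container to lock pages in memory
#   --device=/dev/infiniband  expose the host's RDMA devices used by IPU-over-Fabric
#   --ipc=host                share the host IPC namespace
#   -v ~/.ipuof.conf.d/:/etc/ipuof.conf.d   mount the IPUoF config into the container
docker run --rm --ulimit memlock=-1:-1 --net=host --cap-add=IPC_LOCK \
  --device=/dev/infiniband --ipc=host \
  -v ~/.ipuof.conf.d/:/etc/ipuof.conf.d \
  -it graphcore/tools gc-info -l
```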

Running a sample TensorFlow app

First, get the code from the Graphcore tutorials repository in GitHub.

$ git clone https://github.com/graphcore/tutorials.git

$ cd tutorials

The Docker container is an isolated environment: it starts empty and does not have access to the host machine’s file system. To use data from your host machine, the data needs to be made accessible within the Docker container.

You can make the data accessible by mounting directories as volumes to share data between the host machine and the Docker container environment. 

A common pattern when working with a Docker-based development environment is to mount the current directory into the container (as described in Section 2.2, Mounting directories from the host), then set the working directory inside the container with -w <dir name>. For example, -v “$(pwd):/app” -w /app.

To run the mnist example in the TensorFlow container, you can use the following command, which mounts the tutorials repo into the container and runs the example:

$ docker run --rm --ulimit memlock=-1:-1 --net=host --cap-add=IPC_LOCK --device=/dev/infiniband --ipc=host -v ~/.ipuof.conf.d/:/etc/ipuof.conf.d -it -v "$(pwd):/app" -w /app graphcore/tensorflow:2 python3 simple_applications/tensorflow2/mnist/mnist.py

This guest blog post originally appeared here.
The post Graphcore Poplar SDK Container Images Now Available on Docker Hub appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Docker’s Developer Community: Wind In Our Sails

Two years ago, in November 2019, we refocused our company on the needs of developers. At the time, we recognized the growing adoption of microservices, the explosion in the number of tools, and the many opportunities to simplify these complexities. Little did we know that within months the world would face a global pandemic and economic recession, and that our company would quickly shift to 100% virtual, work-from-home operations. We also didn’t anticipate how the pandemic would increase the demand for application development, further accelerating the importance of developer productivity.

And throughout all these ups and downs the Docker team stayed the course. I thank them for their unwavering focus on serving our developer community. And I thank our Docker Captains, our community leaders, our ecosystem partners, our customers and the wider Docker community for their feedback, loyalty, and trust. Your support these last two years enabled us to not just survive, but to thrive.

Delivering Collaboration, Speed, and Security for Developers

The global tragedy of the pandemic accelerated digital initiatives in every industry, while the “new normal” of work-from-home raised the importance of collaboration within “virtual-first” development teams. In response, to simplify the complexities of modern app development facing these teams, we shipped Docker Development Environments. With it, team members can quickly and easily share reproducible development environments with each other, enabling them to spend more time writing code instead of installing tools, setting environment variables, and untangling dependencies.

This “need for speed” triggered by the pandemic is somewhat addressed by the decade-long rise in custom silicon. Yet how can developers take advantage of custom silicon performance without re-writing their applications or learning unfamiliar tools? Since our re-focusing in November 2019 we’ve shipped new capabilities that address this challenge, including support for Arm-based Apple M1 silicon in Docker Desktop, NVIDIA GPU support in Docker Desktop and Docker Compose-to-AWS, and RISC-V support in BuildKit. In doing so, we’re helping developers to get the speed benefits of custom silicon “for free” without having to make changes to their app or learn a new toolchain.

Sadly, the rise in online activity catalyzed by the pandemic is attracting the attention of criminals, resulting in an increase in the frequency and sophistication of attacks on organizations’ software supply chains. For this challenge, given the heterogeneity of customers’ environments Docker and our partners have taken a multi-vendor, open standards-based approach. This includes delivering CNCF-based Notary v2 for digital signatures, Docker Hub-interoperable container registry partnerships with AWS, Mirantis, and JFrog, and trusted content partnerships with Canonical, Red Hat, VMware, and other leading commercial ISVs.

The results? More development teams than ever are using Docker as the fastest, most secure way to build, share, and run modern applications. In fact, since our refocusing on developers two years ago our community has grown to 15.4 million monthly active developers sharing 13.7 million apps at a rate of 14.7 billion pulls per month. Moreover, for the second year in a row Stack Overflow’s Developer Survey ranked Docker as the #1 most wanted development tool, and JetBrains’ annual survey rated Docker Compose as the most popular container development tool, used by 58% of respondents.

We’re Just Getting Started

While we’re humbled by the results of these first two years, we also believe we’re just getting started – there’s still so much to do. In particular, the pandemic-triggered demand for apps is accelerating the demand for developer talent. Specifically, the demand for developers is growing 8X faster than the average demand for other occupations, with the market projected to be 45 million developers by the end of this decade. What are the implications for our industry and for Docker?

First, to state the obvious, more developers means more apps. And whether it’s the modernization of traditional apps or the creation of new, Kubernetes-destined ones, both mean increased complexity. The result is a larger software supply chain attack surface for criminals to exploit.

With our presence at both ends of the supply chain, Docker is uniquely positioned to help. As the source of trusted content at the beginning of the supply chain – Docker Official Images, Docker Verified Publisher images of commercial ISV partners, and Docker-sponsored open source projects – developers can trust the foundational building blocks of their apps from the start. At the other end of the supply chain, in Docker Desktop, we’re providing tools for developers to discover trusted content, verify its integrity, and ensure its ongoing freshness. And for their managers, we give them SaaS-delivered visibility and controls for images pulled from registries as well as created within Docker Desktop. This allows development teams to ship quickly and safely – no need to trade one off for the other.

Second, increasingly developers want the freedom to choose best-of-breed tools in order to take advantage of the latest innovations in app development. In working with ecosystem partners on open standards like OCI, compose-spec, CNCF Distribution, and others, we’ve delivered innovation for developers such as Compose-to-cloud tooling for AWS and Azure, image build automations with GitHub and Bitbucket repos and GitHub Actions, image vulnerability scanning with Snyk, and more.

Going forward, to provide developers even more choice in ecosystem partner tools, we are making it even easier for partners to integrate with Docker tools, services, and content by expanding the breadth and depth of our product interfaces. Around this, we are providing discovery and assurance services so that developers know these ecosystem partner integrations are safe, maintained, and supported. Furthermore, our SaaS-delivered management plane gives managers visibility into integration usage and a means to set and enforce integrations policy. This combination gives developers the freedom to choose their tools, safely.

Third, daily more and more developers are joining hundreds of thousands of Docker community members freely sharing with each other their time, expertise, and joy in using Docker for app development. And whether it’s our public roadmap, awesome-compose contributions, community Slack, community meetups, or 80,000-participant DockerCons you’ll find a friendly, enthusiastic crowd. Moreover, it’s a community that welcomes newcomers just getting started, and thus plays an important role in sustainably scaling Docker adoption as tens of millions more new developers join the community in the years to come.

But besides stars and reviews on image repos on Docker Hub, the wellspring of community recommendations, best practices, and cool hacks isn’t accessible directly in the product to other community members. Wouldn’t it be cool if, for example, the Dockerfile optimization someone in your company a continent away just figured out was automatically made visible and available to everyone in your company working with a similar image? We think so, too! We see so much potential in enabling community members to help and learn from each other, and we can’t wait to share more with you.

Fair Skies Ahead

In these last two years since our refocusing on developers, we’re humbled by the non-stop growth in the Docker community, the enthusiastic feedback and adoption of the new features we’ve shipped, and the positive support for the business changes we’ve made to enable us to sustainably scale Docker to tens of millions more developers. And with the accelerating demand for new apps and more developers to build, share, and run them, we’re incredibly excited about the next chapters in our journey together with you!

Thank you, and let’s keep shipping!

sj

PS – We can’t wait to share details of the above and more with you at next year’s DockerCon on Tuesday, May 10, 2022. Save the date, it’s gonna be a blast! Register today at https://www.docker.com/dockercon/

The post Docker’s Developer Community: Wind In Our Sails appeared first on Docker Blog.

Docker Desktop 4.2 Release: Save Your Battery with Pause / Resume, and Say Goodbye to the Update Pop-up

With Docker Desktop 4.2 we’re excited to introduce Pause / Resume as well as a host of changes to make it easier for you to manage updates. These features are available to Docker Desktop users on any subscription tier.

Save your battery with Pause / Resume

Pause / Resume gives developers the power to pause their current Docker Desktop session and resume work whenever they want, saving resources on their machine while Docker is paused. When you pause Docker Desktop, the current state of your containers is saved in memory and all processes are frozen. This lowers CPU usage and will help with saving your laptop’s battery life. To resume Docker Desktop, click either the Resume button in the menu or type any Docker CLI command in your terminal.

To try out this feature on Docker Desktop 4.2, navigate to the whale menu and click the ‘Pause’ button. The corresponding content sections for Docker Desktop’s left sidebar items (e.g. Containers) will then be covered to clearly denote the Pause state. 

Please note, Pause / Resume is currently not available in Windows container mode.

Say goodbye to the update pop-up

We’ve heard your feedback that the update modal interrupts your workflows and makes it challenging to use Docker Desktop when you need it most. That’s why we’ve done away with the update pop-up and introduced a new update settings section in the Docker Dashboard, where you can check for updates and manage your update preferences. We appreciate when our users stay up to date so they get all the latest bug fixes and new features, but we want to make sure that we enable users to do that at a time that’s convenient.

To summarize what the experience will be like on Docker Desktop 4.2:

When an update is available, a badge will appear on the Docker Dashboard settings icon, making the experience seamless instead of interrupting your workflow with a modal. Here you can also manage your software update settings.

The `Automatically check for updates` setting is now available for all Docker subscription tiers 

Thanks to all of your positive support of the Docker subscription updates, we’ve been able to focus on delivering more value to all users. In Docker Desktop 4.2 we’ve enabled all users, regardless of subscription tier, to turn off automatically checking for updates. When you disable this setting, all notifications in the whale menu and the app will be disabled and you will have to manually check for updates. Just update to Docker Desktop 4.2 to start using this feature!

We also know that people have different preferences when it comes to downloading updates. For some, the background download can take a lot of bandwidth, and they don’t want it to start when they are busy at work or on Zoom calls; others would rather have less intervention when it comes to updating. That’s why we’ve put the choice in your hands to decide whether updates should download automatically or not.

We’re considering introducing more settings in the future and would love to know what you think – let us know on our public roadmap item!

Coming soon

All of the changes described above are available in 4.2 to all Docker Desktop users, including those on Docker Personal. 

We’re also working on two of your highest voted items from our public roadmap: improving Mac filesystem performance, and implementing Docker Desktop for Linux, so watch this space for more news on those in the coming months. And we would love to know what other improvements you would like to see, so please add a thumbs-up emoji to your highest priorities on the roadmap, or create a new roadmap ticket if your idea isn’t already there.
The post Docker Desktop 4.2 Release: Save Your Battery with Pause / Resume, and Say Goodbye to the Update Pop-up appeared first on Docker Blog.

Save the Date: Next Community All Hands on December 9th

We’re one month away from our next Community All Hands event, on December 9th at 8am PST/5pm CET. This is a unique opportunity for Docker staff, Captains, and the broader Docker community to come together for live company updates, product updates, demos, community shout-outs and Q&A. The last all-hands gathered more than 2,000 attendees from around the world and we hope to double that this time around. 

The theme for this edition is Innovation and Experimentation, so bring your most innovative projects to share with the community. We will be presenting our latest developments and some exciting demos to experiment with. 

Here are five reasons you should attend the next Community All Hands:

- Get the inside scoop on Docker’s latest product developments – You’ll get the opportunity to hear directly from the product team about the latest developments, including product strategy, new features, and roadmap.
- Meet the Docker team – We’ll be there to answer your questions, hear your feedback, and engage in lively discussion about the latest developments in the Docker community.
- Work for Docker – The Docker team is growing fast and we’re looking for people who are passionate about building great software. You’ll have the opportunity to learn more about the Docker team and the open positions we’re hiring for.
- Get inspired by the community – We’ll be showcasing some of the most innovative projects that are being built with Docker. You’ll have the opportunity to learn about what others are doing with Docker and get inspired for your own projects.
- Get your hands on some cool swag – We’ll have some awesome Docker gear for you to take home. Stay tuned for more details!

Our Call for Speakers will be open until November 19th. If you would like to submit a proposal to speak, please visit our Call for Speakers page.
The post Save the Date: Next Community All Hands on December 9th appeared first on Docker Blog.

Continuous Previews (CP): Don’t Merge Until You Preview

Docker’s Peter McKee sits down with Uffizzi Co-founders Grayson Adkins – who serves as Head of Product – and Josh Thurman – who serves as Head of Developer Relations – for a Q&A on the CP method.

Check out the live stream from August 26th for Docker Build: Enabling Full-Stack Continuous Previews with Uffizzi

(The below transcript has been edited for clarity and format)

Peter McKee: It’s great having you guys back on the Live Show. I love the idea of Continuous Previews – tell me, what brought you to the concept?

Grayson Adkins: The kernel of the idea – no pun intended – came from our own challenges in trying to develop our product faster. We kept observing two problems that were hindering forward progress. On the development side, we were too often merging broken features – and then we would have all this commingled code, and finding where the bug was introduced and fixing it was really slowing us down. The returned tickets were always holding us back – the classic two steps forward, one step back. On the product side, I do all our design work and I kept finding that the design and requirements needed to be modified as I saw the product come to life. And when I would ask “Can I see it yet?” there was no easy answer. For me to see a new feature and provide feedback, I had to wait until the branch was merged and deployed to our QA environment. Then the feedback process would start and, as tickets were returned, they’d go back through this big loop of “In Development”, then merge and deploy, then I could check again. There was too big a gap between Development and Product.

At the time, Uffizzi’s product direction was moving towards making it easy for developers to deploy their apps to the cloud. Once we realized we could fix the “dirty code in main” and “can I see it yet?” problems with the same process, I realized we could repurpose what we had built as a Preview Engine for full-stack and microservices applications – something where you could “Continuously Preview” what’s in your feature pipeline.

Josh Thurman: There are two other big-picture elements that brought this into focus for us. Everyone has seen the success of the Frontend Preview tools by static site platforms like Netlify and Vercel. Their success taught us that making it easy to Preview was extremely valuable for teams, but no one had really formalized the concept and the supporting technology for more complex applications. So we realized that the Preview concept shouldn’t just apply to frontends – it should apply to everything we build.

The other major factor was observing how the complexity of microservices architectures and remote work were making it harder to collaborate – particularly as teams and app ecosystems grow in size and complexity. So the Continuous Previews concept is really about removing barriers to collaborating early and often – it’s about taking away all the barriers between the developer coding the feature and the team member who is reviewing or approving it.

At a high level, Continuous Previews equals continuous collaboration – you have a process and technology that facilitate a collaborative culture, and that’s really when you start to see major improvements in team velocity, cycle times, lead times, and code stability.

Peter McKee: I’ve read the CP Manifesto (www.cpmanifesto.org / https://github.com/UffizziCloud/Continuous_Previews_Manifesto), which outlines the principles of CP. What inspired you to write and publish it?

Josh Thurman: It was clear to us that CP is so much bigger than any tool or service, and we also recognize that it’s easy to confuse tools or products with concepts. So we took a cue from the Agile Manifesto and decided that the CP concept and its underlying principles needed to be out there for everyone to benefit from and to understand the “Why?”.

Grayson Adkins: The Agile Manifesto is the governing document for how to build software as a team, and we see CP as a best practice nested within the Agile umbrella.

Peter McKee: Reading the CP manifesto and talking with you a few times, I have a good sense of what CP is at a strategic level. But tell me about the tactical level – how does it impact teams on a day-to-day basis?

Grayson Adkins: Most teams follow similar development workflows that align with the Agile loop of Design, Develop, Integrate, Test, and Deliver. Following this process, a developer picks up a ticket, checks out a branch and starts developing. Once they think they have met the requirements, they indicate that their ticket is ready to be merged by opening a PR.

Take this process and multiply it by the number of developers working on different features and you end up with X amount of features that are ready to be merged.  At some interval you execute CI (Continuous Integration) and now you have a batch of new features in your QA environment where for the first time in this process they are ready to be reviewed by someone other than the Developer who coded the feature.  

The problem with this process is that you are bringing QA in way too late in the game.  Once a feature has been merged and deployed it’s 10x harder to find and fix a bug as opposed to catching the issue(s) early and addressing it at the feature branch level where, again, it’s relatively easy to find and fix.

When a developer pushes new code using the CP method it can be QA’d without merging – and also without the Developer having to context switch out of their Git workflow.  The Preview has eliminated the barrier to the collaboration, and your iterative process of Develop – Preview, Develop – Preview is happening faster, and it’s also happening earlier in the development process.  

In a more literal sense, you are bringing QA into the development process, where traditionally it has served as an add-on function.

Josh Thurman: I’m glad you said “teams” generically as opposed to saying Development Team or Product Team. Even though most organizations task-organize into a Dev Team and a Product Team, I think the construct can be a hindrance to collaboration. CP is a process for “The Team”, meaning everyone who contributes to the success of a product.

Peter McKee: You’ve mentioned barriers to collaboration – what exactly are the barriers that teams face now?

Josh Thurman: Good question. The cleanest way to have your application readily accessible to everyone on your team for testing is to have it deployed to the cloud and reachable via a secure URL endpoint. I should emphasize that the endpoint could be for testing your full-stack application or any of its individual components – an API or a frontend element, for example.

To deploy a version of your application you need a hosting environment – and setting that up – particularly for a complex or microservices-based application – is no easy task.  Most teams have persistent environments like Test, QA, Staging, and Production that are maintained by your DevOps and Ops team members.  To set up a new environment just to test one feature branch is a significant barrier. The CP method calls for on-demand hosting environments that have a purpose-driven lifecycle.  They are stood up to Preview and Test a feature branch and then they are taken down after that feature is merged.  This is where technology comes in and this is what Uffizzi does so well.  Uffizzi provides teams with an off-the-shelf solution that completely automates on-demand hosting environments with policy-based triggers that facilitate the preview.  You can have as many test environments as you need, when you need them, for the amount of time you need them. And I’ll just add that with this method you also remove your single point of failure when you have a persistent QA or Test environment go down and have to spend most of a day fixing that. 

Grayson Adkins: The other barrier is DevOps expertise. Just as one example – it’s a significant jump in complexity to go from a 20+ line Docker Compose file to a 1000+ line YAML file. DevOps expertise is often its own bottleneck within an organization, and if you can reduce the demands on that team with an automated solution, that’s a big win. To follow DevOps best practices, we’ve centered the product around infrastructure-as-code through a uffizzi-compose.yml that is syntactically the same as docker-compose.yml (based on version 3.9) but with some additional inputs.
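For reference, a plain Docker Compose v3.9 file of the kind Grayson is contrasting with might look like the sketch below. The service names and images are hypothetical, and the additional Uffizzi-specific inputs he mentions are not shown, since they aren’t specified here:

```yaml
# Hypothetical docker-compose.yml (version 3.9 syntax) for a small
# full-stack app; a uffizzi-compose.yml would reportedly look the same,
# plus extra Uffizzi-specific inputs.
version: "3.9"
services:
  web:
    image: example/frontend:latest   # hypothetical image
    ports:
      - "8080:80"
    depends_on:
      - api
  api:
    image: example/api:latest        # hypothetical image
    environment:
      DATABASE_URL: postgres://db:5432/app
    depends_on:
      - db
  db:
    image: postgres:13
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```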

But we also have a GUI and the concept of Templates that are meant to increase accessibility – the CP concept can benefit teams of all sizes and skill levels, and we want to act as a DevOps force multiplier in those cases.

Peter McKee: Speaking of barriers, what do you see as the biggest hurdle to teams adopting a CP methodology?

Josh Thurman: For a method to take off, it needs to be paired with a technology that is easy enough to use that it really starts to proliferate. To reference again what Netlify and Vercel have done: they made it simple enough that Frontend Previews have really taken off in the last few years.

Of course, this is what we are trying to do with Uffizzi in the context of full-stack and microservices applications. We are offering a very specific tool that makes it easy to implement CP by abstracting away the complexity of managing hosting environments and deployments.

Grayson Adkins: I think the other issue is overcoming the inertia of “good enough”: everyone is getting the job done now, so why do they need a new way of doing business? This is the same hurdle with every new process and technology. Before Agile, Waterfall was good enough. Before container orchestration, VMs (virtual machines) were good enough. So I think we’ll go through an early-adopter period, and you eventually get to a point where the competitive advantage is such that organizations must implement CP or risk getting left behind.

Peter McKee: As we wrap up, what can we look forward to from Uffizzi – what’s on the roadmap?

Grayson Adkins: We’ve got some exciting new features rolling out. In the near term, we are expanding our integrations with both image registries and collaboration software. We’re currently rolling out integrations with image registries at all the major cloud providers – AWS, Azure, and GCP. Then over the next few months we’ll add GitLab and Bitbucket for repos, and then Slack, Jira, Microsoft Teams, and Asana on the collaboration side.

Peter McKee: Well, this has been fantastic – I really appreciate your time and your thought leadership. I can see how this is a game-changer for how organizations build and test. I look forward to seeing how CP and Uffizzi continue to benefit the dev community.

If you have any Docker-related questions, please feel free to reach out to @pmckee on Twitter and join us in our community Slack.

If you have any Uffizzi-related questions, please feel free to reach out to @uffizzi_ on Twitter and join the Uffizzi users community Slack – Josh Thurman (@JoshThurman19) or Grayson Adkins (@GraysonAdkins).

You can view the CP manifesto here: www.cpmanifesto.org

And the open source repos here:

https://github.com/UffizziCloud/Continuous_Previews_Manifesto

https://github.com/UffizziCloud/uffizzi_controller

https://github.com/UffizziCloud/uffizzi_app

The post Continuous Previews (CP): Don’t Merge Until You Preview appeared first on Docker Blog.

October 2021 Newsletter

The latest and greatest content for developers.

Webinar | Management & Security at Scale with Docker Business
See how Docker Business enables centralized management and security for organizations. Register Now!

Check out this recap of the Screaming in the Cloud episode, Heresy in the Church of Docker Desktop with Scott Johnston. 

[Learn More]

The latest edition of the Docker Index is in, and it shows continued growth in activity across the Docker community.

[Learn More]

Watch how Image Access Management enables organizations to control which container images their developers use.

[Learn More]

Is your organization looking for an alternative to Docker Desktop? Be sure to consider these points first.

[Learn More]

Here’s what you need to know about the updates and extensions to our subscriptions: Personal, Pro, Team and Business.

[Learn More]

Community All-Hands | Dec 9 at 8am PT
This virtual event brings the Docker community together to share feature and product updates, live demos, breakout sessions and more. Register Now!

News & Content

- Speak at our next Docker Community All-Hands
- Notary v2 Project Update
- Beta IPv6 Support on Docker Hub Registry
- Volume Management Now Included with Docker Personal
- Speed up Building with Docker BuildX and Graviton2 EC2
- Docker Business – Management & Security at Scale
- The Magic Behind the Scenes of Docker Desktop
- Accelerating New Features in Docker Desktop
- Heresy In the Church of Docker Desktop

Docker Captain: Aurélie Vache
Docker Captains are experts in their field and are passionate about sharing their knowledge with others. See how Aurélie is contributing to the Docker community.

Captain Content

- Docker Desktop Licensing Changes: DevOps and Docker Live Show
- Creating a Docker CI/CD workflow in GitHub Actions
- WSL2 for Docker Desktop in Windows 11 with Nuno do Carmo
- Containerd project overview with maintainer Phil Estes
- Running Automated Tasks with a CronJob over Kubernetes running on Docker Desktop 4.1.1

On-Demand: DockerCon 2021
Catch up on 45+ breakout sessions, compelling keynotes, community rooms, interactive panels, exclusive interviews, and much more. Watch Now!

Docker Community
Learn, connect and collaborate with millions of developers across the globe using Docker. Join Community

Docker Blog
Check out the latest news, tips & tricks, how-to guides, best practices and more from Docker experts. Read Blog

The post October 2021 Newsletter appeared first on Docker Blog.