New Relic’s OpenTelemetry and Open Source Commitment
thenewstack.io – New Relic’s Ben Evans talks OpenTelemetry and the company’s contributions to OpenTelemetry and other open source projects.
Source: news.kubernauts.io
infoworld.com – Why we must use a zero-trust security model in microservices and how to implement it using the Kuma universal service mesh.
Source: news.kubernauts.io
kinvolk.io – Kinvolk builds 100% open source cloud native infrastructure.
Source: news.kubernauts.io
This is a guest post from Viktor Petersson, CEO of Screenly.io. Screenly is the most popular digital signage product for the Raspberry Pi. Find Viktor on Twitter @vpetersson.
For those not familiar with Qt, it is a cross-platform development framework that is used in a wide range of products, including cars (Tesla), digital signs (Screenly), and airplanes (Lufthansa). Needless to say, Qt is very powerful. One thing you cannot say about the Qt framework, however, is that it is easy to compile — at least for embedded devices. The countless blog posts, forum threads, and Stack Overflow posts on the topic reveal that compiling Qt is a common headache.
As long-term Qt users, we have had our fair share of battles with it at Screenly. We migrated to Qt for our commercial digital signage software a number of years ago, and since then, we have been very happy with both its performance and flexibility. Recently, we decided to migrate our open source digital signage software (Screenly OSE) to Qt as well. Since these projects share no code base, this was a greenfield opportunity that allowed us to start afresh and explore exciting new technologies for the build process.
Because compiling Qt (and QtWebEngine) is a very heavy operation, we would need to pre-compile and distribute Qt so that the Dockerfile could simply download and include it in the build process (rather than compiling as part of the installation process).
We sat down and created the following requirements for our build process:
- The process must be fully automated from start to finish.
- We need to be able to build Qt/QtWebEngine for all supported Raspberry Pi boards (with the appropriate Qt device profile).
- We should use cross compilation on x86 to speed up the process where it makes sense.
- We need to be able to run the full process on CI, and thus cannot rely on a Raspberry Pi.
- We should confine everything to run inside Docker containers so we do not clutter the host with build packages.
With the above goals in mind, we had a great opportunity to try out the new multi-platform support in Docker. Used in conjunction with multi-stage builds, we were able to get the best of both worlds:
- Use emulation where we cannot cross-compile
- Switch to cross-compilation for the heavy lifting
How does multi-platform in Docker work?
The easiest way to use multi-platform functionality in Docker is to invoke it from the command line. Using docker buildx, we can tap into the new beta functionality. Running `docker buildx build --platform linux/arm/v7 -t arm-build .` builds the Docker image as per the `Dockerfile` in the current directory using ARMv7 emulation. Behind the scenes, Docker runs the whole build process in a QEMU-virtualized environment (qemu-user-static, to be precise). By doing this, the complexity of setting up a custom VM is removed. Once built, we can even use docker run to launch containers in ARMv7 mode automagically.
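Putting this together, the setup on an x86 host typically looks like the sketch below. The binfmt installer image and the builder name are common conventions, not taken from the original post:

```shell
# Register QEMU handlers so the kernel can run ARM binaries transparently
# (one common approach; qemu-user-static distro packages achieve the same)
docker run --privileged --rm tonistiigi/binfmt --install arm

# Create and select a buildx builder instance
docker buildx create --name multiarch --use

# Build the Dockerfile in the current directory under ARMv7 emulation
# and load the result into the local image store
docker buildx build --platform linux/arm/v7 -t arm-build --load .
```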
Multi-platform, multi-stage and Qt
While multi-platform functionality is a great stand-alone feature, it gets even more powerful when combined with multi-stage builds. Within a single Dockerfile, we’re able to mix and match platforms and copy between the steps. This functionality is exactly what we ended up doing with the Qt build process for Screenly OSE.
Stage 1: ARM
Thanks to the fine folks over at Balena, we are able to use a Raspbian base image in the first stage. We can invoke this step using:
FROM --platform=linux/arm/v7 balenalib/rpi-raspbian:buster as builder
After the above step, we can use Docker as we normally do and execute various RUN commands, such as installing packages. Do note that this container runs emulated under QEMU if the build is not done on ARMv7 hardware. In our case, we use RUN commands to install the Qt build dependencies. This step also fully eliminates the need to copy files from a disk image (which is what the Qt Wiki suggests) or to rsync files from a physical Raspberry Pi.
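Condensed, the first stage looks something like the sketch below. The package names are illustrative placeholders; the real dependency list lives in build_qt5.sh and is much longer:

```Dockerfile
# Stage 1: emulated ARMv7 Raspbian, used to assemble a native sysroot
FROM --platform=linux/arm/v7 balenalib/rpi-raspbian:buster as builder

# Install Qt build dependencies (illustrative subset only)
RUN apt-get update && apt-get install -y \
    libraspberrypi-dev \
    libx11-dev \
    libfontconfig1-dev \
    libdbus-1-dev
```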
Stage 2: x86
Once we have installed our dependencies in our ARM step, we can switch over to the builder’s native x86 architecture to avoid emulation and do the cross compile with the following line:
FROM --platform=linux/amd64 debian:buster
Now, we are onto the interesting part. After we have switched over to x86, we can copy files from the previous step. We do this in order to create a sysroot that we can use for Qt. We complete this step by running the following commands:
RUN mkdir -p /sysroot/usr /sysroot/opt /sysroot/lib
COPY --from=builder /lib/ /sysroot/lib/
COPY --from=builder /usr/include/ /sysroot/usr/include/
COPY --from=builder /usr/lib/ /sysroot/usr/lib/
COPY --from=builder /opt/vc/ /sysroot/opt/vc/
We now have the best of both worlds. By taking advantage of both multi-stage and multi-platform functionality, we generate a sysroot that we can use to build Qt. Since we used a fully functional Raspbian image in the previous step, Qt is even able to pick up all existing libraries. All we then need to do is point configure at the sysroot:

./configure -sysroot /sysroot
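The real configure call in build_qt5.sh passes many more options. A typical Qt cross-compile invocation looks roughly like the sketch below, where the device profile, cross-compiler prefix, and install prefix are assumptions for illustration rather than the script's exact values:

```shell
./configure \
  -sysroot /sysroot \
  -device linux-rasp-pi3-g++ \
  -device-option CROSS_COMPILE=arm-linux-gnueabihf- \
  -prefix /usr/local/qt5pi \
  -opensource -confirm-license \
  -nomake examples -nomake tests
```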
As we mentioned in the introduction, compiling Qt is far from straightforward. There are a lot of steps required to compile it successfully. To learn more about the exact steps, you can see the full Dockerfile and script build_qt5.sh.
To emulate or not to emulate…
Being able to emulate a platform like ARM is amazing and provides a lot of flexibility. However, it comes at a cost: a big performance penalty. That is why we do not actually compile Qt under emulation but cross-compile instead. If you have the ability to cross-compile rather than emulate, know that cross-compilation will give you much better performance.
About Screenly
Screenly is the most popular digital signage product for the Raspberry Pi. If you want to turn a physical screen into a secure, remotely-controllable device (over UI or digital signage API) that can display dashboards, images, videos, and webpages, Screenly makes setup a breeze. Screenly is available in two flavors: an open source version and a commercial version.
The post Compiling Qt with Docker multi-stage and multi-platform appeared first on Docker Blog.
Source: https://blog.docker.com/feed/
learncloudnative.com – Init containers allow you to separate your application from the initialization logic and provide a way to run the initialization tasks such as setting up permissions, database schemas, or seeding dat…
Source: news.kubernauts.io
eng.uber.com – Apache Kafka at Uber Uber has one of the largest deployments of Apache Kafka in the world, processing trillions of messages and multiple petabytes of data per day. As Figure 1 shows, today we positio…
Source: news.kubernauts.io
Recently our CEO Scott Johnston took a look back on all that Docker had achieved one year after selling the Enterprise business to Mirantis and refocusing solely on developers. We made significant investments to deliver value-enhancing features for developers, completed strategic collaborations with key ecosystem partners and doubled down on engaging our user community, resulting in a 70% year-over-year increase in Docker usage.
Even though we are winding down the calendar year, you wouldn’t know it based on the pace at which our engineering and product teams have been cranking out new features and tools for cloud-native development. In this post, I’ll add some context around all the goodness that we’ve released recently.
Recall that our strategy is to deliver simplicity, velocity and choice for dev teams going from code to cloud with Docker’s collaborative application development platform. Our latest releases, including Docker Desktop 3.0 and Docker Engine 20.10, accelerate the build, share, and run process for developers and teams.
Higher Velocity Docker Desktop Releases
With the release of Docker Desktop 3.0.0, we are totally changing the way we distribute Docker Desktop to developers. These changes allow for smaller, faster Docker Desktop releases for increased simplicity and velocity. Specifically:
- Docker Desktop will move to a single release stream for all users. Now developers don’t have to choose between stability versus getting fixes and features quickly.
- All Docker Desktop updates will be provided as deltas from the previous version. This will reduce the size of a typical update, speeding up your workflow and removing distractions from your day.
- Updates will be downloaded in the background for a simpler experience. All a developer needs to do is restart Docker Desktop.
- A preview of Docker Desktop on Apple M1 silicon, the most in-demand request on our public roadmap.
These updates build upon what was a banner year for Docker Desktop. The team has been hard at work, having collaborated with Snyk on image scanning and with AWS and Microsoft to deploy from Desktop straight to AWS and Azure, respectively. We also invested in performance improvements to the local development experience, such as CPU improvements on Mac and the WSL 2 backend on Windows.
Docker Engine 20.10
Docker Engine is the industry’s de facto container runtime. It powers millions of applications worldwide, providing a standardized packaging format for diverse applications. This major release of Docker Engine 20.10 continues our investment in the community Engine, adding multiple new features including support for cgroups v2 and moving multiple features out of experimental into GA, including `RUN --mount=type=(cache|secret|ssh|…)` and rootless mode, along with a ton of other improvements to the API, client and build experience. These updates enhance security and increase trust and confidence so that developers can move faster than before.
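To illustrate (this example is ours, not from the release notes), the newly-GA BuildKit mounts let a Dockerfile cache package downloads across builds and consume build-time secrets without baking either into image layers; the base image and secret id here are arbitrary choices:

```Dockerfile
# syntax=docker/dockerfile:1
FROM python:3.9-slim

# Reuse the pip download cache across builds instead of re-fetching
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install flask

# Mount a build secret for this step only; it never lands in a layer
RUN --mount=type=secret,id=pypi_token \
    cat /run/secrets/pypi_token >/dev/null
```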
More 2020 Milestones
In addition to these latest product innovations, we continued to execute on our developer strategy. Other highlights from the year that was 2020 include:
- 11.3 million monthly active users sharing apps from 7.9 million Docker Hub repositories at a rate of 13.6 billion pulls per month – up 70% year-over-year.
- Collaborated with Microsoft, AWS, Snyk and Canonical to enhance the developer experience and grow the ecosystem.
- Docker ranked as the #1 most wanted, the #2 most loved, and the #3 most popular platform according to the 2020 Stack Overflow Survey.
- Hosted DockerCon Live with more than 80,000 registrants. All of the DockerCon Live 2020 content is on-demand if you missed it.
- Over 80 projects accepted to our open source program.
Keep an eye out in the new year for the latest and greatest from Docker. You can expect to see us release features and tools that enhance the developer experience for app dev environments, container image management, pipeline automation, collaboration and content. Happy holidays and merry everything! We will “see” you in 2021, Docker fam.
The post Closing Out 2020 with More Innovation for Developers appeared first on Docker Blog.
Source: https://blog.docker.com/feed/
rancher.com – Automating the deploy and destroy of your environments and speed up development using the env0 infrastructure automation platform with Rancher’s Kubernetes management platform.
Source: news.kubernauts.io
searchcio.techtarget.com – The most important digital transformation trends of 2021 save companies money and offer tangible business benefits. Wider adoption of low-code, MLOps, multi-cloud management and data streaming will s…
Source: news.kubernauts.io
At Microsoft Build in the first half of the year, Microsoft demonstrated some awesome new capabilities and improvements that were coming to Windows Subsystem for Linux 2 including the ability to share the host machine’s GPU with WSL 2 processes. Then in June Craig Loewen from Microsoft announced that developers working on the Windows insider ring machines could now make use of GPU for the Linux workloads. This support for NVIDIA CUDA enabled developers and data scientists to use their local Windows machines for inner-loop development and experimentation.
Last week, during the Docker Community All Hands, we announced the availability of a developer preview build of Docker Desktop for WSL 2 supporting GPU for our Developer Preview Program. We already have more than 1,000 developers who have joined us to help test preview builds of Docker Desktop for Windows (and Mac!). If you’re interested in joining the program for future releases, you should do it now!
Today we are excited to announce the general preview of Docker Desktop support for GPU with Docker in WSL2. There are over one and a half million users of Docker Desktop for Windows today and we saw in our roadmap how excited you all were for us to provide this support.
Preview of Docker Desktop with GPU support in WSL2
To get started with Docker Desktop with Nvidia GPU support on WSL 2, you will need to download our technical preview build from here.
Once you have the preview build installed there are still a couple of steps you will need to do to get started using your GPU:
- Access to a PC with an Nvidia GPU (if you don’t have this we would still like feedback on this build, we have changed a fair bit in our VM to get this working!)
- The latest Windows Insider version from the Dev preview ring
- Beta drivers from Nvidia supporting WSL 2 GPU paravirtualization: https://developer.nvidia.com/cuda/wsl
- WSL 2 backend enabled in Docker Desktop
Once you have all of this working you can have a go at the command below to check that GPU support is working.
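The command itself did not survive in this copy of the post; a commonly used check, borrowed from Nvidia's CUDA container samples rather than the original article, is to run a GPU benchmark container:

```shell
# If GPU passthrough works, this reports the detected GPU and benchmark
# figures; without it, the container fails to find a CUDA device.
docker run --rm --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
```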
Keep in mind that this is a technical preview release: it may break, it has not been tested as thoroughly as our normal releases and ‘here be dragons’.
If you do find issues and want to give us feedback, please raise bugs on our public repo https://github.com/docker/for-win. We use this feedback to improve the product, and your support in testing will help us get this ready for GA sooner.
Enjoy the tech preview and happy GPU Hacking!
The post WSL 2 GPU Support is Here appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/