From the Captain’s Chair: Pradumna Saraf

Docker Captains are leaders from the developer community who are both experts in their field and passionate about sharing their Docker knowledge with others. “From the Captain’s Chair” is a blog series where we take a closer look at one Captain to learn more about them and their experiences.

Today, we are interviewing Pradumna Saraf. He is an open source developer with a passion for DevOps. He is also a Golang developer and loves educating people through social media and blogs about DevOps tools such as Docker, GitHub Actions, and Kubernetes. He has been a Docker Captain since 2024.

Can you share how you first got involved with Docker?

If I remember correctly, I was learning about databases, more specifically MongoDB. Until then, I had no idea there was something called Docker. I was trying to find a way to get the database up and running locally, and a YouTube video showed me that Docker is the most common and efficient way to run these kinds of applications locally. So I skipped learning about databases and dived deep into learning Docker instead.

What inspired you to become a Docker Captain?

The community. Docker has always worked to make developers’ lives easier and to listen to the community and its users, whether through an open source offering or an enterprise one, and I wanted to be part of that. Even before joining the Captains program, I was advocating for Docker by sharing what I learned via social media, blogs, etc., and educating people, because I was passionate about Docker and really loved its potential. Becoming a Captain felt natural, as I was already doing the work, so it was great to get the recognition.

What are some of your personal goals for the next year?

Writing more technical content, of course! Also, giving more in-person talks at international conferences. I also want to get back to contributing and helping open source projects grow.

If you weren’t working in tech, what would you be doing instead?

That’s an interesting question. I love tech. It’s hard to imagine my life without it, because getting into tech was not a decision; it was a passion that was inside me before I could spell “technology.” But still, if I were not in tech, I might be a badminton or golf player.

Can you share a memorable story from collaborating with the Docker community?

Yes, there was a Docker meetup in Bangalore, India, where Ajeet (DevRel at Docker), a good friend of mine, and I collaborated, and he invited me to deliver a talk on Docker Extensions. It was really nice meeting the community and having conversations over pizza about how various people and companies are using Docker in their workflows, and the bottlenecks they run into.

What’s your favorite Docker product or feature right now, and why?

I am really biased towards Docker Compose. My favourite feature right now is being able to define models in a Docker Compose YAML file and start or stop an AI model with the same Docker Compose commands. Apart from that, I really like the standalone Docker Model Runner (DMR).
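The Compose feature he describes can be sketched roughly like this (a minimal, illustrative example; the service image and model name are placeholders, and the top-level `models` element requires a recent Docker Desktop/Compose with Docker Model Runner enabled):

```yaml
# compose.yaml — illustrative sketch of defining a model in Compose
services:
  chat-app:
    image: my-chat-app        # hypothetical application image
    models:
      - llm                   # gives the service access to the model below

models:
  llm:
    model: ai/smollm2         # example model from Docker Hub's ai/ namespace
```

With this in place, the familiar `docker compose up` and `docker compose down` commands start and stop the model alongside the application service.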

Can you walk us through a tricky technical challenge you solved recently?

I was working on an authorization project, where I was verifying that users had the right set of permissions before letting them access a resource, and interestingly, Docker had a key role in that project. Docker ran the Policy Decision Point (PDP): a container listening for external requests, responsible for validating whether an entity, user, or request was authorized to access a particular resource with the right permissions. This was a particularly unique application of Docker, using it as a decision point. Docker made the PDP easy to run, kept it separate from the main app, and made it scalable with almost zero downtime. It showed that Docker can also be used for critical services like authorization.
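The pattern described — a containerized PDP queried over HTTP, separate from the main app — can be sketched like this. The interview doesn’t name the policy engine, so Open Policy Agent is used here purely as one example, and the policy path and input fields are hypothetical:

```shell
# Run a policy decision point in its own container (OPA as an example engine),
# listening for external authorization requests on port 8181.
docker run -d --name pdp -p 8181:8181 openpolicyagent/opa:latest run --server

# The main app then asks the PDP whether a request is authorized
# (policy package "authz" and input fields are illustrative):
curl -s -X POST http://localhost:8181/v1/data/authz/allow \
  -H "Content-Type: application/json" \
  -d '{"input": {"user": "alice", "action": "read", "resource": "report"}}'
```

Because the PDP is its own container, it can be scaled and redeployed independently of the application it protects.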

What’s one Docker tip you wish every developer knew?

Using multi-stage builds. They help keep your images small, clean, secure, and production-ready. It’s such a simple thing, but it can make a huge difference; I have seen an image go from 1.7 GB to under 100 MB. Bonus: it will also make your pulls and pushes faster, saving CI costs and speeding up your overall deployment.
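As a sketch of the idea (using a Go service, since that matches his stack; the module and binary names are illustrative), a multi-stage Dockerfile builds in a full toolchain image and ships only the result:

```dockerfile
# Stage 1: build with the full Go toolchain (large image, discarded later)
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Stage 2: ship only the static binary in a minimal final image
FROM scratch
COPY --from=build /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
```

The multi-gigabyte toolchain stage never reaches the registry; only the final stage is pushed, which is where drops like 1.7 GB to under 100 MB come from.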

If you could containerize any non-technical object in real life, what would it be and why?

My age. I’d containerize age so I could choose how old I want to be. If I want to feel young, I’ll run a container from an image with age version 20, and if I want to think more maturely, I’ll run one from an image with age version 40.

Where can people find you online? (talks, blog posts, or open source projects, etc.)

People can find me on social media platforms like Twitter (X), LinkedIn, Bluesky, Threads, etc. For my open source work, people can find me on GitHub, where I have many Docker-related projects. Apart from that, if people are more into blogs and conferences, they can find me on my blog and my Sessionize profile. Or just Google “Pradumna Saraf”.

Rapid Fire Questions

Cats or Dogs?

Cats

Morning person or night owl?

Night Owl

Favorite comfort food?

Dosa

One word friends would use to describe you?

Helpful

A hobby you picked up recently?

Learning more about aircraft and the aviation industry.
Source: https://blog.docker.com/feed/

Amazon Q Developer now helps customers understand service prices and estimate workload costs

Today, AWS announces a new pricing and cost estimation capability in Amazon Q Developer. Amazon Q Developer is the most capable generative AI-powered assistant for software development. With this launch, customers can now use Amazon Q Developer to get information about AWS product and service pricing, availability, and attributes, helping them select the right resources and estimate workload costs using natural language.

When architecting new workloads on AWS, customers need to estimate costs so they can evaluate cost/performance tradeoffs, set budgets, and plan future spending. Customers can now use Amazon Q Developer to retrieve detailed product attribute and pricing information using natural language, making it easier to estimate the cost of new workloads without having to review multiple pricing pages or specify detailed API request parameters. Customers can ask questions about service pricing (e.g., “How much does RDS extended support cost?”), the cost of a planned workload (e.g., “I need to send 1 million notifications per month to email, and 1 million to HTTP/S endpoints. Estimate the monthly cost using SNS.”), or the relative costs of different resources (e.g., “What is the cost difference between an Application Load Balancer and a Network Load Balancer?”). To answer these questions, Amazon Q Developer retrieves information from the AWS Price List APIs.

To learn more, see Managing your costs using generative AI with Amazon Q Developer. To get started, open the Amazon Q chat panel in the AWS Management Console and ask a question about pricing.
Source: aws.amazon.com

Amazon EC2 I7ie instances now available in AWS South America (São Paulo)

AWS is announcing that Amazon EC2 I7ie instances are now available in the AWS South America (São Paulo) Region. Designed for large, storage-I/O-intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances.

I7ie instances offer up to 120 TB of local NVMe storage density (the highest in the cloud for storage-optimized instances) and up to twice as many vCPUs and memory compared to prior-generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.

I7ie are high-density storage-optimized instances, ideal for workloads requiring fast local storage with high random read/write performance and consistently very low latency for accessing large data sets. These instances are available in 9 virtual sizes and deliver up to 100 Gbps of network bandwidth and 60 Gbps of bandwidth for Amazon Elastic Block Store (EBS). To learn more, visit the I7ie instances page.
Source: aws.amazon.com

New General Purpose Amazon EC2 M8a Instances

AWS announces the general availability of new general-purpose Amazon EC2 M8a instances. M8a instances are powered by 5th Gen AMD EPYC processors (formerly code-named Turin) with a maximum frequency of 4.5 GHz, and deliver up to 30% higher performance and up to 19% better price performance compared to M7a instances.
M8a instances deliver 45% more memory bandwidth compared to M7a instances, making them ideal even for latency-sensitive workloads, and show still higher gains on specific workloads: M8a instances are 60% faster on the GroovyJVM benchmark and up to 39% faster on the Cassandra benchmark compared to Amazon EC2 M7a instances. M8a instances are SAP-certified and offer 12 sizes, including 2 bare-metal sizes. This range of instance sizes allows customers to precisely match their workload requirements.
M8a instances are built on the AWS Nitro System and ideal for applications that benefit from high performance and high throughput such as financial applications, gaming, rendering, application servers, simulation modeling, mid-size data stores, application development environments, and caching fleets.
M8a instances are available in the following AWS Regions: US East (Ohio), US West (Oregon), and Europe (Spain). To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 M8a instance page or the AWS News Blog.
Source: aws.amazon.com

AWS Marketplace expands Japan consumption tax support for Channel Partner Private Offers

Starting today, AWS Marketplace expands its Japan consumption tax (JCT) support for Channel Partner Private Offers (CPPOs), enhancing the tax experience for independent software vendors (ISVs) and Channel Partners. For transactions where Japan ISVs authorize Japan Channel Partners to resell to Japan-addressed buyers, AWS Japan G.K. (“AWS Japan”) will now collect the 10% JCT for the first leg of the transaction, between ISVs and Channel Partners, issue a tax-qualified invoice (TQI) to the Channel Partners, and disburse the JCT to the ISVs. AWS Japan will continue to collect the 10% JCT for the second leg of the transaction, between Japan Channel Partners and Japan buyers, and issue a TQI to the buyers, as previously established under the Japan Marketplace Facilitator rule. This launch unifies compliance across both transactions, creating a seamless tax experience. This feature applies to Japan ISVs and Japan Channel Partners transacting via the AWS Japan Marketplace Operator. To learn more, please visit the AWS Japan FAQ or the AWS Marketplace Seller Guide.
Source: aws.amazon.com