Distributed Cloud-Native Graph Database with NebulaGraph Docker Extension

Graph databases have become a popular solution for storing and querying complex relationships between data. As the amount of graph data grows and the need for high concurrency increases, a distributed graph database is essential to handle the scale.

However, it can be a challenge to find a distributed graph database that automatically shards the data while allowing businesses to scale from small to trillion-edge-level data volumes without changing the underlying storage, service architecture, or application code.

In this article, we’ll look at NebulaGraph, a modern, open source database to help organizations meet these challenges.

Meet NebulaGraph

NebulaGraph is a modern, open source, cloud-native graph database, designed to address the limitations of traditional graph databases, such as poor scalability, high latency, and low throughput. NebulaGraph is also highly scalable and flexible, with the ability to handle large-scale graph data ranging from small to trillion-edge-level.

NebulaGraph has built a thriving community of more than 1000 enterprise users since 2018, along with a rich ecosystem of tools and support. These benefits make it a cost-effective solution for organizations looking to build graph-based applications, as well as a great learning resource for developers and data scientists.

The NebulaGraph cloud-native database also offers Kubernetes Operators for easy deployment and management in cloud environments. This feature makes it a great choice for organizations looking to take advantage of the scalability and flexibility of cloud infrastructure.

Architecture of the NebulaGraph database

NebulaGraph consists of three services: the Graph Service, the Storage Service, and the Meta Service (Figure 1). The Graph Service, which consists of stateless processes (nebula-graphd), is responsible for graph queries. The Storage Service (nebula-storaged) is a distributed (Raft-based) storage layer that persistently stores the graph data. The Meta Service manages user accounts, schema information, and jobs. With this design, NebulaGraph offers great scalability, high availability, cost-effectiveness, and extensibility.
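Once a cluster is up, this separation of roles can be inspected directly with nGQL administrative commands from NebulaGraph Console or Studio (a sketch, assuming NebulaGraph 3.x syntax):

```ngql
# List storage hosts with their status, leader counts, and partition distribution
SHOW HOSTS;

# List the graph and meta service processes as well
SHOW HOSTS GRAPH;
SHOW HOSTS META;
```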

Figure 1: Overview of NebulaGraph services.

Why NebulaGraph?

NebulaGraph is ideal for graph database needs because of its architecture and design, which allow for high performance, scalability, and cost-effectiveness. The architecture follows a separation of storage and computing architecture, which provides the following benefits:

Automatic sharding: NebulaGraph automatically shards graph data, allowing businesses to scale from small to trillion-edge-level data volumes without having to change the underlying storage, architecture, or application code.

High performance: With its optimized architecture and design, NebulaGraph provides high performance for complex graph queries and traversal operations.

High availability: If part of the Graph Service fails, the data stored by the Storage Service remains intact.

Flexibility: NebulaGraph supports property graphs and provides a powerful query language, called Nebula Graph Query Language (nGQL), which supports complex graph queries and traversal operations. 

Support for APIs: It provides a range of APIs and connectors that allow it to integrate with other tools and services in a distributed system.
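To give a feel for nGQL, here is a minimal sketch of defining a property graph and traversing it (the tag, edge, and vertex names are hypothetical, and a graph space is assumed to be created and selected already):

```ngql
# Define a simple property graph schema
CREATE TAG IF NOT EXISTS person(name string, age int);
CREATE EDGE IF NOT EXISTS follows(since int);

# Insert two vertices and an edge between them
INSERT VERTEX person(name, age) VALUES "alice":("Alice", 30), "bob":("Bob", 27);
INSERT EDGE follows(since) VALUES "alice"->"bob":(2020);

# Traverse the graph: whom does Alice follow?
GO FROM "alice" OVER follows YIELD properties($$).name AS followee;
```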

Why run NebulaGraph as a Docker Extension?

In production environments, NebulaGraph can be deployed on Kubernetes or in the cloud, hiding the complexity of cluster management and maintenance from the user. However, for development, testing, and learning purposes, setting up a NebulaGraph cluster on a desktop or local environment can still be a challenging and costly process, especially for users who are not familiar with containers or command-line tools.

This is where the NebulaGraph Docker Extension comes in. It provides an elegant and easy-to-use solution for setting up a fully functional NebulaGraph cluster in just a few clicks, making it the perfect choice for developers, data scientists, and anyone looking to learn and experiment with NebulaGraph.

Getting started with NebulaGraph in Docker Desktop

Setting up

Prerequisites: Docker Desktop 4.10 or later.

Step 1: Enable Docker Extensions

You’ll need to enable Docker Extensions in Docker Desktop (Figure 2): go to Settings > Extensions and select Enable Docker Extensions.

Figure 2: Enabling Docker Extensions within the Docker Desktop.

All Docker Extension resources are hidden by default. To make them visible, go to Settings > Extensions and check Show Docker Extensions system containers.

Step 2: Install the NebulaGraph Docker Extension

The NebulaGraph extension is available from the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for NebulaGraph in the Extensions Marketplace, then select Install (Figure 3).

Figure 3: Installing NebulaGraph from the Extensions Marketplace.

This step will download and install the latest version of the NebulaGraph Docker Extension from Docker Hub. You can see the installation process by clicking Details (Figure 4).

Figure 4: Installation progress.

Step 3: Waiting for the cluster to be up and running

After the extension is installed, it normally takes less than 5 minutes on the first run for the cluster to become fully functional. While waiting, we can quickly go through the Home and Get Started tabs to see details of NebulaGraph and NebulaGraph Studio, its web-based GUI.

We can also confirm that it’s ready by checking the containers’ status on the extension’s Resources tab, as shown in Figure 5.

Figure 5: Checking the status of containers.

Step 4: Get started with NebulaGraph

After the cluster is healthy, we can follow the Get Started steps to log in to the NebulaGraph Studio, then load the initial dataset, and query the graph (Figure 6).

Figure 6: Logging in to NebulaGraph Studio.

Step 5: Learn more from the starter datasets 

In a graph database, the focus is on the relationships between the data. With the starter datasets available in NebulaGraph Studio, you can get a better understanding of these relationships. All you need to do is click the Download button on each dataset card on the welcome page (Figure 7).

Figure 7: Starter datasets.

For example, in the demo_sns (social network) dataset, you can find new friend recommendations by identifying second-degree friends with the most mutual friends, using a query along these lines (the tag and edge names here are illustrative; check the dataset’s schema in Studio):

MATCH (me)-[:`friend`]-(f)-[:`friend`]-(fof)
WHERE id(me) == "me" AND id(fof) != id(me)
RETURN id(fof) AS candidate, count(DISTINCT f) AS mutual_friends
ORDER BY mutual_friends DESC
LIMIT 10

Figure 8: Query results shown in the Nebula console.

Instead of just displaying the query results, you can also return the entire pattern and easily gain insights. For example, in Figure 9, we can see LeBron James is on two mutual friend paths with Tim:

Figure 9: Graphing the query results.

Another example can be found in the demo_fraud_detection (graph of loan) dataset, where you can perform a 10-degree check for risky applicants, as shown in the following query:

MATCH p_=(p:`applicant`)-[*1..10]-(p2:`applicant`)
WHERE id(p)=="p_200" AND p2.`applicant`.is_risky == "True"
RETURN p_ LIMIT 100

The results shown in Figure 10 indicate that this applicant is suspected to be risky because of their connection to p_190.
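Once a risky connection like this surfaces, you can also ask nGQL for the most direct link between the two applicants; for example, using the vertex IDs from the query above (a sketch, assuming NebulaGraph 3.x syntax):

```ngql
# Find the shortest undirected path between the applicant and the risky vertex
FIND SHORTEST PATH FROM "p_200" TO "p_190" OVER * BIDIRECT YIELD path AS p;
```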

Figure 10: Results of query showing fraud detection risk.

By exploring the relationships between data points, we can gain deeper insights into our data and make more informed decisions. Whether you are interested in finding new friends, detecting fraudulent activity, or any other use case, the starter datasets provide a valuable starting point.

We encourage you to download the datasets, experiment with different queries, and see what new insights you can uncover, then share with us in the NebulaGraph community.

Try NebulaGraph for yourself

To learn more about NebulaGraph, visit our website, documentation site, star our GitHub repo, or join our community chat.
Source: https://blog.docker.com/feed/

February Extensions: Easily Connect Local Containers to a Kubernetes Cluster and More

Although February is the shortest month of the year, we’ve been busy at Docker and we have new Docker Extensions to share with you. Docker extensions build new functionality into Docker Desktop, extend its existing capabilities, and allow you to discover and integrate additional tools that you’re already using with Docker. Let’s look at the exciting new extensions from February.

And, if you’d like to see everything that’s available, check out our full Extensions Marketplace.

Get visibility on your Kubernetes cluster 

Do you need to harden your Kubernetes cluster but lack the visibility to do so? With the Kubescape extension for Docker Desktop, you can secure your Kubernetes cluster and gain insight into your cluster’s security posture via an easy-to-use online dashboard.

The Kubescape extension works by installing the Kubescape in-cluster components, connecting them to the ARMO platform and providing insights into the Kubernetes cluster deployed by Docker Desktop via the dashboard on the ARMO platform.

With the Kubescape extension, you can:

Regularly scan your configurations and images

Visualize your RBAC rules

Receive automatic fix suggestions where applicable

Read Secure Your Kubernetes Clusters with the Kubescape Docker Extension to learn more.

Connect your local containers to any Kubernetes cluster

Do you need a fast and dependable way to connect your local containers to any Kubernetes cluster? With Gefyra for Docker Desktop, you can easily bridge running containers into Kubernetes clusters. Gefyra aims to ease the burdens of Kubernetes-based development for developers with a seamless integration as an extension in Docker Desktop. 

The Gefyra extension lets you run a container locally and connect it to a Kubernetes cluster so you can:

Talk to other services

Let other services talk to your local container

Debug

Achieve faster iterations — no build/push/deploy/repeat

Deploy Alfresco using Docker containers

The Alfresco Docker extension simplifies deploying the Alfresco Digital Business Platform for testing purposes. This extension provides a single Run button in the UI to run all the containers required behind the scenes, so you can spend less time configuring and more time building and testing your product.

With the Alfresco extension on Docker Desktop, you can:

Pull latest Alfresco Docker images

Run Alfresco Docker containers

Use Alfresco deployment locally in your browser

Stop deployment and recover your system to initial status

Easily deploy and test NebulaGraph

NebulaGraph is a popular open source graph database that can handle large volumes of data with millisecond latency, scale up quickly, and perform fast graph analytics. With the NebulaGraph extension on Docker Desktop, you can test, learn, and develop on top of the distributed version of NebulaGraph core, in one click. 

Check out the latest Docker Extensions with Docker Desktop

Docker is always looking for ways to improve the developer experience. Check out these resources for more information on Docker Extensions:

Install Docker Desktop for Mac, Windows, or Linux to try extensions yourself.

Visit our Extensions Marketplace to see all of our extensions.

Learn about building your own extension with our Quick Start page.

Self-published extensions — did you know you can now discover extensions that have been autonomously published in the Extensions Marketplace? For more information, refer to our documentation on Managing Marketplace Extensions.

Source: https://blog.docker.com/feed/

Docker Desktop 4.17: New Functionality for a Better Development Experience

We’re excited to announce the Docker 4.17 release, which introduces new functionality into Docker Desktop to improve your developer experience. With Docker 4.17, you’ll have easier access to vulnerability data and recommendations on how to act on that information. Also, we’re making it easier than ever to bring the tools you already love into Docker Desktop with self-published Docker Extensions.

Read on to check out the highlights from this release.

Improved local image analysis

Container image security presents day-to-day challenges such as dependency awareness, vulnerability awareness, and practical remediation. Since Docker Desktop 4.14, we’ve consistently added features to help you understand your images and their vulnerabilities. Improvements in 4.17 were designed with developers in mind to address software supply chain security. 

We’re pleased to announce Early Access to the new Docker Scout service. Docker Scout provides visibility into vulnerabilities and recommendations for quick remediation. Now you can use Docker Scout to analyze and remediate vulnerabilities on local images in Docker Desktop and the Docker CLI. 

Check out the Docker Scout documentation to learn more about how to get started.

What can you do with Docker Scout?

Image analysis results: Filter images based on vulnerability information, look for specific vulnerabilities, or confirm when vulnerabilities have been remediated. You’ll see results based on the layer in which a vulnerability is introduced, so you know exactly where the alert is coming from.

Remediation advice: Get guidance on available remediation options. Docker Scout shows you the recommended remediation path depending on the layer of the vulnerability. Docker Scout also shows a preview before you resolve anything, so you know how many vulnerabilities will be resolved by a specific update.

Remote registries: You can use Docker Desktop to view and pull images from Artifactory repositories to analyze them.

Command-line interface: As of Docker Desktop 4.17, the docker scan command is deprecated and replaced with a command for Docker Scout – docker scout. Read the release notes for more detail. 

Update to Docker Desktop 4.17 to access these new features and take them for a test run. You can also provide feedback directly in Docker Desktop by navigating to the images tab and selecting Give feedback. We look forward to hearing from you!  

A new way to publish Docker Extensions

We are excited to introduce a new way to publish a Docker Extension. When submitting an extension to the Marketplace, you now have two publishing options:

Docker Reviewed

Self-Published – New!

Self-Published extensions are automatically validated. If all validation checks pass, the extension is published on the Extensions Marketplace and becomes accessible to all users within a few hours. Self-publishing is the fastest way to get developers the tools they need and to get feedback from them as you work to evolve and polish your extension. 

Developers can identify self-published extensions in the Extensions Marketplace by the not reviewed label. Extensions that are manually reviewed by the Docker Extensions team have a reviewed label, as shown in the following screenshot. 

We are excited about the increased reach and accessibility the new self-publishing workflow brings to both Docker Extension publishers and users. 

If you have an idea for an extension that isn’t already in the Extensions Marketplace, you can submit it to our ideas discussion board. 

Let us know what you think

Thanks for using Docker Desktop! Learn more about what’s in store with our public roadmap on GitHub, and let us know what other features you’d like to see.

Check out the release notes for a full breakdown of what’s new in Docker Desktop 4.17.
Source: https://blog.docker.com/feed/

Secure Your Kubernetes Clusters with the Kubescape Docker Extension

Container adoption in enterprises continues to grow, and Kubernetes has become the de facto standard for deploying and operating containerized applications. At the same time, security is shifting left and should be addressed earlier in the software development lifecycle (SDLC). Security has morphed from being a static gateway at the end of the development process to something that (ideally) is embedded every step of the way. This can potentially increase the effort for engineering and DevOps teams.

Kubescape, a CNCF project initially created by ARMO, is intended to solve this problem. Kubescape provides a self-service, simple, and easily actionable security solution that meets developers where they are: Docker Desktop.

What is Kubescape?

Kubescape is an open source Kubernetes security platform for your IDE, CI/CD pipelines, and clusters. Kubescape includes risk analysis, security compliance, and misconfiguration scanning. Targeting all security stakeholders, Kubescape offers an easy-to-use CLI interface, flexible output formats, and automated scanning capabilities. Kubescape saves Kubernetes users and admins time, effort, and resources.

How does Kubescape work?

Security researchers and professionals codify best practices in controls: preventative, detective, or corrective measures that can be taken to avoid — or contain — a security breach. These are grouped in frameworks by government and non-profit organizations such as the US Cybersecurity and Infrastructure Security Agency, MITRE, and the Center for Internet Security.

Kubescape contains a library of security controls that codify Kubernetes best practices derived from the most prevalent security frameworks in the industry. These controls can be run against a running cluster or manifest files under development. They’re written in Rego, the purpose-built declarative policy language that supports Open Policy Agent (OPA).

Kubescape is commonly used as a command-line tool. It can be used to scan code manually or can be triggered by an IDE integration or a CI tool. By default, the CLI results are displayed in a console-friendly manner, but they can be exported to JSON or JUnit XML, rendered to HTML or PDF, or submitted to ARMO Platform (a hosted backend for Kubescape).

Regular scans can be run using an in-cluster operator, which also enables the scanning of container images for known vulnerabilities.

Why run Kubescape as a Docker extension?

Docker extensions are fundamental for building and integrating software applications into daily workflows. With the Kubescape Docker Desktop extension, engineers can easily shift security left without changing work habits.

The Kubescape Docker Desktop extension helps developers adopt security hygiene as early as the first lines of code. As shown in the following diagram, Kubescape enables engineers to adopt security as they write code during every step of the SDLC.

Specifically, the Kubescape in-cluster component triggers periodic scans of the cluster and shows results in ARMO Platform. Findings shown in the dashboard can be further explored, and the extension provides users with remediation advice and other actionable insights.

Installing the Kubescape Docker extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.) Go to Settings > Extensions and select the Enable Docker Extensions box.

You must also enable Kubernetes under Preferences. 

Kubescape is in the Docker Extensions Marketplace. 

In the following instructions, we’ll install Kubescape in Docker Desktop. After the extension scans automatically, the results will be shown in ARMO Platform. Here is a demo of using Kubescape on Docker Desktop:

Step 2: Add the Kubescape extension

Open Docker Desktop and select Add Extensions to find the Kubescape extension in the Extensions Marketplace.

Step 3: Installation

Install the Kubescape Docker Extension.

Step 4: Register and deploy

Once the Kubescape Docker Extension is installed, you’re ready to deploy Kubescape.

Currently, the only hosting provider available is ARMO Platform. We’re looking forward to adding more soon.

To link up your cluster, the host requires an ARMO account.

After you’ve linked your account, you can deploy Kubescape.

Accessing the dashboard

Once your cluster is deployed, you can view the scan output on your host (ARMO Platform) and start improving your cluster’s security posture immediately.

Security compliance

One step to improve your cluster’s security posture is to protect against the threats posed by misconfigurations.

ARMO Platform will display any misconfigurations in your YAML, offer information about severity, and provide remediation advice. These scans can be run against one or more of the frameworks offered and can run manually or be scheduled to run periodically.

Vulnerability scanning

Another step to improve your cluster’s security posture is protecting against threats posed by vulnerabilities in images.

The Kubescape vulnerability scanner scans the container images in the cluster right after the first installation and uploads the results to ARMO Platform. Kubescape’s vulnerability scanner supports the ability to scan new images as they are deployed to the cluster. Scans can be carried out manually or periodically, based on configurable cron jobs.

RBAC Visualization

With ARMO Platform, you can also visualize Kubernetes RBAC (role-based access control), which allows you to dive deep into account access controls. The visualization makes pinpointing over-privileged accounts easy, and you can reduce your threat landscape with well-defined privileges. The following example shows a subject with all privileges granted on a resource.

Kubescape, using ARMO Platform as a portal for additional inquiry and investigation, helps you strengthen and maintain your security posture.

Next steps

The Kubescape Docker extension brings security to where you’re working. Kubescape enables you to shift security to the beginning of the development process by enabling you to implement security best practices from the first line of code. You can use the Kubernetes CLI tool to get insights, or export them to ARMO Platform for easy review and remediation advice.

Give the Kubescape Docker extension a try, and let us know what you think at cncf-kubescape-users@lists.cncf.io.
Source: https://blog.docker.com/feed/

5 Developer Workstation Security Best Practices

Supply chain attacks increased by 300% between 2020 and 2021, making clear that security breaches are happening earlier in the software development lifecycle. Research also shows that in 2021, 80% of cyber security breaches were due to human error, and 20% involved attacks on desktops and laptops. 

Developer workstations are being targeted for several reasons. Workstations have access to critical code and infrastructure, and the earlier a vulnerability is introduced, the more difficult it can be to identify the breach. Developers need to trust not only the dependencies they use directly but also the dependencies of those dependencies, called transitive dependencies. As we see more incidents stemming from developer workstations, developer workstation security should be a top priority for security-conscious organizations. 

Poor security practices in software development translate to trust-breaking breaches and expensive losses, with the average cost of a security breach reaching $9.44 million in the United States. Developers are increasingly responsible for not only the development of products but also for secure development. 

Organizations, regardless of industry, must be securing developer workstations in order to be prepared for the evolving and growing number of attacks. 

Docker’s white paper, Securing Developer Workstations with Docker, covers the top security risks when developing with containers — and how to best mitigate those risks with Docker. By understanding the potential attack vectors, your teams can mitigate evolving security threats.

Let’s take a look at five actions you can take to secure your developer workstations.

1. Prevent malware attacks

Malware refers to malicious software meant to attack software, hardware, or networks. In container development, malware can be particularly damaging not only because of the potentially harmful activities to be run within the container but also because of potential access to external systems like the host’s file system and network.

Containers should be secured by using only trusted images and dependencies, isolating and restricting permissions where possible, and running up-to-date software in up-to-date environments.

2. Build secure software supply chains

Supply chain attacks exploit direct and transitive dependencies. You may be familiar with Log4Shell, a vulnerability affecting an estimated hundreds of millions of devices. The vulnerability behind the infamous SolarWinds security incident was also a supply chain attack. Supply chain attacks increased by 300% in 2021, and security experts don’t expect them to slow down any time soon. 

Supply chain attacks can be mitigated through secure supply chain best practices. These include making sure every step of the supply chain is trustworthy, adding key automation, and making sure brand environments are clearly defined.

3. Account for local admin rights in policies

Individual developers may have different needs for their workstations. Many developers prefer to have local admin rights. Organizations are responsible for creating and enforcing policies that help developers work securely. How your team handles local admin rights is a team decision, and although the outcomes may differ per team, the conversation around local admin rights is a necessity to keep teams secure.

Finding the balance of security and autonomy is an active state. No balance can be achieved and then forgotten about. Instead, organizations must regularly review tools and configurations so developers can do their jobs without being unnecessarily blocked or accidentally jeopardizing their team, product, and customers.

4. Prevent hazardous misconfigurations

Configurations are necessary at almost every step of the software development lifecycle and connect development tools with production resources, such as environments and sensitive data. While more permissive configurations make anything seem possible, unfortunately, that flexibility can accidentally provide malicious actors access to sensitive resources. Configurations that are too strict frustrate developers and limit productivity.

Misconfiguration does not happen on purpose, but it can be mitigated. There’s no one-size-fits-all solution for configurations, given every team and organization has its own tooling, process, and network considerations. Regardless of your organization’s needs, make sure you’re considering the developer workstations and how your IT admins manage local configurations.

5. Protect against insider threats

Although most breaches come from outside of an organization, 20% of breaches in 2021 were caused by internal actors. For the same reasons that external attackers target the early stages of the software development lifecycle, internal bad actors have used similar strategies to bypass internal security safeguards.

Security measures that limit opportunities for external attackers also limit opportunities for internal bad actors. When considering settings, configurations, permissions, and scanning, remember that regardless of where the attack comes from, the trend of attacks is moving earlier in the development cycle, making securing developer workstations a critical step in your security strategy.

Hardened Docker Desktop: Stronger security for enterprises

With capabilities like Hardened Docker Desktop, we want every developer using Docker to be able to work securely and create secure products without being slowed down or needlessly distracted. In Securing Developer Workstations with Docker, we share container security best practices developed and tested by industry experts.

Read the white paper: Securing Developer Workstations with Docker.
Source: https://blog.docker.com/feed/

Enable No-Code Kubernetes with the harpoon Docker Extension

(This post is co-written by Dominic Holt, Founder & CEO of harpoon.)

Kubernetes has been a game-changer for ensuring scalable, high availability container orchestration in the Software, DevOps, and Cloud Native ecosystems. While the value is great, it doesn’t come for free. Significant effort goes into learning Kubernetes and all the underlying infrastructure and configuration necessary to power it. Still more effort goes into getting a cluster up and running that’s configured for production with automated scalability, security, and cluster maintenance.

All told, Kubernetes can take an incredible amount of effort, and you may end up wondering if there’s an easier way to get all the value without all the work.


Meet harpoon

With harpoon, anyone can provision a Kubernetes cluster and deploy their software to the cloud without writing code or configuration. Get your software up and running in seconds with a drag and drop interface. When it comes to monitoring and updating your software, harpoon handles that in real-time to make sure everything runs flawlessly. You’ll be notified if there’s a problem, and harpoon can re-deploy or roll back your software to ensure a seamless experience for your end users. harpoon does this dynamically for any software — not just a small, curated list.

To run your software on Kubernetes in the cloud, just enter your credentials and click the start button. In a few minutes, your production environment will be fully running with security baked in. Adding any software is as simple as searching for it and dragging it onto the screen. Want to add your own software? Connect your GitHub account with only a couple clicks and choose which repository to build and deploy in seconds with no code or complicated configurations.

harpoon enables you to do everything you need, like logging and monitoring, scaling clusters, creating services and ingress, and caching data in seconds with no code. harpoon makes DevOps attainable for anyone, leveling the playing field by delivering your software to your customers at the same speed as the largest and most technologically advanced companies at a fraction of the cost.

The architecture of harpoon

harpoon works in a hybrid SaaS model and runs on top of Kubernetes itself, which hosts the various microservices and components that form the harpoon enterprise platform. This is what you interface with when you’re dragging and dropping your way to nirvana. By providing cloud service provider credentials to an account owned by you or your organization, harpoon uses Terraform to provision all of the underlying virtual infrastructure in your account, including your own Kubernetes cluster. In this way, you have complete control over all of your infrastructure and clusters.

Once fully provisioned, harpoon’s UI can send commands to various harpoon microservices in order to communicate with your cluster and create Kubernetes deployments, services, configmaps, ingress, and other key constructs.

If the cloud’s not for you, we also offer a fully on-prem, air-gapped version of harpoon that can be deployed essentially anywhere.

Why harpoon?

Building production software environments is hard, time-consuming, and costly, with maintenance costs often starting at $200K per year for an experienced DevOps engineer and rising into the tens of millions for larger clusters and teams. Using harpoon instead of writing custom scripts can save hundreds of thousands of dollars per year in labor costs for small companies and millions per year for mid- to large-size businesses.

Using harpoon will enable your team to have one of the highest quality production environments available in mere minutes. Without writing any code, harpoon automatically sets up your production environment in a secure environment and enables you to dynamically maintain your cluster without any YAML or Kubernetes expertise. Better yet, harpoon is fun to use. You shouldn’t have to worry about what underlying technologies are deploying your software to the cloud. It should just work. And making it work should be simple. 

Why run harpoon as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With the harpoon Docker Extension, you can simplify the deployment process with drag and drop, visually deploying and configuring your applications directly into your Kubernetes environment. Currently, the harpoon extension for Docker Desktop supports the following features:

Link harpoon to a cloud service provider like AWS and deploy a Kubernetes cluster and the underlying virtual infrastructure.

Easily accomplish simple or complex enterprise-grade cloud deployments without writing any code or configuration scripts.

Connect your source code repository and set up an automated deployment pipeline without any code in seconds.

Supercharge your DevOps team with real-time visual cues to check the health and status of your software as it runs in the cloud.

Drag and drop container images from Docker Hub, source, or private container registries.

Manage your K8s cluster visually: pods, ingress, volumes, configmaps, secrets, and nodes.

Dynamically manipulate routing in a service mesh with only simple clicks and port numbers.

How to use the harpoon Docker Extension

Prerequisites: Docker Desktop 4.8 or later

Step 1: Enable Docker Extensions

Docker Extensions must be enabled in Docker Desktop. Open Docker Desktop, go to Settings > Extensions, and check the “Enable Docker Extensions” box.

Step 2: Install the harpoon Docker Extension

The harpoon extension is available on the Extensions Marketplace in Docker Desktop and on Docker Hub. To get started, search for harpoon in the Extensions Marketplace, then select Install.

This will download and install the latest version of the harpoon Docker Extension from Docker Hub.
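If you prefer the command line, Docker Desktop also ships a docker extension CLI that can install extensions directly from Docker Hub; a sketch (the exact image name is an assumption, so check the Marketplace listing for the published one):

```shell
# Install the harpoon extension from Docker Hub (image name assumed)
docker extension install harpooncorp/harpoon-docker-extension:latest

# List installed extensions to confirm
docker extension ls
```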

Step 3: Register with harpoon

If you’re new to harpoon, you’ll need to register by clicking the Register button. Otherwise, log in with your existing credentials.

Step 4: Link your AWS Account

While you can drag out any software or Kubernetes components you like, if you want to do actual deployments, you will first need to link your cloud service provider account. At the moment, harpoon supports Amazon Web Services (AWS). Over time, we’ll be supporting all of the major cloud service providers.

If you want to deploy software on top of AWS, you will need to provide harpoon with an access key ID and a secret access key. Since harpoon is deploying all of the necessary infrastructure in AWS in addition to the Kubernetes cluster, we require fairly extensive access to the account in order to successfully provision the environment. Your keys are only used for provisioning the necessary infrastructure to stand up Kubernetes in your account and to scale up/down your cluster as you designate. We take security very seriously at harpoon, and aside from using an extensive and layered security approach for harpoon itself, we use both disk and field level encryption for any sensitive data.

The following are the specific permissions harpoon needs to successfully deploy a cluster:

AmazonRDSFullAccess

IAMFullAccess

AmazonEC2FullAccess

AmazonVPCFullAccess

AmazonS3FullAccess

AWSKeyManagementServicePowerUser
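As a sketch, the managed policies above can be attached to a dedicated IAM user with the AWS CLI (the user name harpoon-provisioner is an assumption; follow your organization’s own key-management practices):

```shell
# Hypothetical IAM user for harpoon; adjust the name to your conventions
aws iam create-user --user-name harpoon-provisioner

# Attach each managed policy harpoon requires
for policy in AmazonRDSFullAccess IAMFullAccess AmazonEC2FullAccess \
              AmazonVPCFullAccess AmazonS3FullAccess AWSKeyManagementServicePowerUser; do
  aws iam attach-user-policy \
    --user-name harpoon-provisioner \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done

# Generate the access key ID and secret access key to paste into harpoon
aws iam create-access-key --user-name harpoon-provisioner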

Step 5: Start the cluster

Once you’ve linked your cloud service provider account, just click the “Start” button on the cloud/node element in the workspace. That’s it. No, really! The cloud/node element will turn yellow and display a countdown. While your experience may vary a bit, we tend to find that you can get a cluster up in under six minutes. When the cluster is running, the cloud icon will return and the element will glow a happy blue color.

Step 6: Deployment

You can search for any container image you’d like from Docker Hub, or link your GitHub account to search any GitHub repository (public or private) to deploy with harpoon. You can drag any search result over to the workspace for a visual representation of the software.

Deploying containers is as easy as hitting the “Deploy” button. GitHub repositories will require you to build them first. For harpoon to successfully build a GitHub repository, we currently require it to have a top-level Dockerfile, which is industry best practice. If the Dockerfile is there, once you click the “Build” button, harpoon will automatically find it and build a container image. After a successful build, the “Deploy” button becomes enabled and you can deploy the software directly.
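For illustration, a buildable repository only needs a Dockerfile at its top level; here’s a minimal sketch for a hypothetical Node.js app (the file names and port are assumptions, not harpoon requirements):

```dockerfile
# Minimal top-level Dockerfile that a build system can discover and build
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```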

Once you have a deployment, you can attach any Kubernetes element to it, including ingress, configmaps, secrets, and persistent volume claims.

For more information, see the harpoon documentation: https://docs.harpoon.io/en/latest/usage.html

Next steps

The harpoon Docker Extension makes it easy to provision and manage your Kubernetes clusters. You can visually deploy your software to Kubernetes and configure it without writing code or configuration. By integrating directly with Docker Desktop, we hope to make it easy for DevOps teams to dynamically start and maintain their cluster without any YAML, helm chart, or Kubernetes expertise.

Check out the harpoon Docker Extension for yourself!
Source: https://blog.docker.com/feed/

Docker Compose: What’s New, What’s Changing, What’s Next

We’ll walk through new Docker Compose features the team has built, share what we plan to work on next, and remind you to switch to Compose V2 as soon as possible.

Compose V1 support will no longer be provided after June 2023 and will be removed from all future Docker Desktop versions. If you’re still on Compose V1, we recommend you switch as soon as possible to leave time to address any issues with running your Compose applications. (Until the end of June 2023, we’ll monitor Compose issues to address challenges related to V1 migration to V2.)

In this post:

Compose V1: So long and farewell, old friend!

What’s new?

Build improvements

Using ssh resources

Build multi-arch images with Compose

Additional updates

What’s next?

Compose V1: So long and farewell, old friend!

In the Compose V2 GA announcement, we proposed a timeline for ending Compose V1 support.

We’ve extended the timeline, so support now ends after June 2023. 

Switching is easy. Type docker compose instead of docker-compose in your favorite terminal.

An even easier way is to set Compose V2 as the default inside Docker Desktop settings. Activating this option creates a symlink so you can keep typing docker-compose, preserving any existing scripts, while actually running the newest version of Compose.
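You can verify which version each command resolves to; with the Docker Desktop option enabled, both should report a v2.x version:

```shell
docker compose version      # the Compose V2 CLI plugin
docker-compose version      # with the symlink enabled, this now also runs V2
```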

For more on the differences between V1 and V2, see the Evolution of Compose in docs.

What’s new?

Build improvements

During the past few months, the team focused on improving the build experience within Compose. After collecting all the proposals opened against the Compose specification, we started to ship the following new features incrementally:

cache_to support to allow sharing layers from intermediate images in a multi-stage build. One of the best ways to use this option is to share cache between workflow steps in your CI.

no-cache to force a full rebuild of your service.

pull to trigger a registry sync for force-pulling your base images.

secrets to use at build time.

tags to define a list associated with your final build image.

ssh to use your local ssh configuration or pass keys to your build process. This allows you to clone a private repo or interact with protected resources; the ssh info won’t be stored in the final image.

platforms to define multiple platforms and let Compose produce multi-arch images of your services.
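Several of these options can be combined in a single build section. Here’s a hedged sketch (the image names and cache registry reference are placeholders, not from the original post):

```yaml
services:
  myservice:
    image: myservice:latest
    build:
      context: .
      no_cache: true        # force a full rebuild
      pull: true            # force-pull the base images
      cache_to:
        - type=registry,ref=registry.example.com/myservice:buildcache
      tags:
        - registry.example.com/myservice:v1
```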

Let’s dive deeper into those last two improvements.

Using ssh resources

ssh was introduced in Compose V2.4.0 GA and lets you use ssh resources at build time. Now you’re able to use your local ssh configuration or public/private keys when you build your service image. For example, you can clone a private Git repository inside your container or connect to a remote server to use critical resources during the build process of your services.

The ssh resources are only used during the build process and won’t be available in your final image.

There are different possibilities for using ssh with Compose. The first one is the new ssh attribute of the build section in your Compose file:

services:
  myservice:
    image: build-test-ssh
    build:
      context: .
      ssh:
        - fake-ssh=./fixtures/build-test/ssh/fake_rsa

And you need to reference the ID of your ssh resource inside your Dockerfile:

FROM alpine
RUN apk add --no-cache openssh-client

WORKDIR /compose
COPY fake_rsa.pub /compose/

RUN --mount=type=ssh,id=fake-ssh,required=true diff <(ssh-add -L) <(cat /compose/fake_rsa.pub)

This example is a simple demonstration of using keys at build time. It copies a public ssh key, mounts the private key inside the container, and checks if it matches the public key previously added.

It’s also possible to directly use the CLI with the new --ssh flag. Let’s try using it to clone a private Git repository.

The following Dockerfile adds GitHub as a known host in the ssh configuration of the image and then mounts the ssh local agent to clone the private repository:

# syntax=docker/dockerfile:1
FROM alpine:3.15

RUN apk add --no-cache openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@github.com:glours/secret-repo.git

CMD ls -lah secret-repo

And using the docker compose build --no-cache --progress=plain --ssh default command will pass your local ssh agent to Compose.

Build multi-arch images with Compose

In Compose version V2.11.0, we introduced the ability to add platforms in the build section and let Compose do a cross-platform build for you.

The following Dockerfile logs the name of the service, the targeted platform to build, and the platform used for doing this build:

FROM --platform=$BUILDPLATFORM golang:alpine AS build

ARG TARGETPLATFORM
ARG BUILDPLATFORM
ARG SERVICENAME
RUN echo "I am $SERVICENAME building for $TARGETPLATFORM, running on $BUILDPLATFORM" > /log

FROM alpine
COPY –from=build /log /log

This Compose file defines an application stack with two services (A and B) which are targeting different build platforms:

services:
  serviceA:
    image: build-test-platform-a:test
    build:
      context: .
      args:
        - SERVICENAME=serviceA
      platforms:
        - linux/amd64
        - linux/arm64
  serviceB:
    image: build-test-platform-b:test
    build:
      context: .
      args:
        - SERVICENAME=serviceB
      platforms:
        - linux/386
        - linux/arm64

Be sure to create and use a docker-container build driver that allows you to build multi-arch images: 

docker buildx create --driver docker-container --use

To use the multi-arch build feature:

docker compose build --no-cache

Additional updates

We also fixed issues, managed corner cases, and added features. For example, you can define a secret from the environment variable value:

services:
  myservice:
    image: build-test-secret
    build:
      context: .
      secrets:
        - envsecret

secrets:
  envsecret:
    environment: SOME_SECRET
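On the Dockerfile side, a build secret defined this way is consumed with a BuildKit secret mount referencing the same envsecret ID; a minimal sketch (what you do with the secret is illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
# The secret is mounted at /run/secrets/envsecret only for this RUN step
# and is not written into any image layer
RUN --mount=type=secret,id=envsecret \
    TOKEN="$(cat /run/secrets/envsecret)" && \
    echo "secret is ${#TOKEN} characters long"
```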

We’re now providing Compose binaries for windows/arm64 and linux/riscv64.

We overhauled the way Compose manages .env files, environment variables, and precedence interpolation. Read the environment variables precedence documentation to learn more. 
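For example, a value exported in the shell now reliably takes precedence over the same variable defined in a .env file (the variable and service are illustrative, assuming a Compose file that interpolates ${TAG}):

```shell
echo "TAG=v1.0" > .env
TAG=v2.0 docker compose config   # the rendered config resolves ${TAG} to v2.0
```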

To see all the changes we’ve made since April 2022, check out the Compose release page or the comparing changes page.

What’s next?

The Compose team is focused on improving the developer inner loop using Compose. Ideas we’re working on include:

A development section in the Compose specification, including a watch mode, so you’ll be able to use the one defined by your programming tooling or let Compose manage it for you

Capabilities to add specific debugging ports, or use profiling tooling inside your service containers

Lifecycle hooks to interact with services at different moments of the container lifecycle (for example, letting you execute a command when a container is created but not started, or when it’s up and healthy)

A --dry-run flag to test a Compose command before executing it

If you’d like to see something in Compose to improve your development workflow, we invite your feedback in our Public Roadmap.

To take advantage of ongoing improvements to Compose and surface any issues before support ends June 2023, make sure you’re on Compose V2. Use the docker compose CLI or activate the option in Docker Desktop settings.

To learn more about the differences between V1 and V2, check out the Evolution of Compose in our documentation.