The Boring Company: Las Vegas approves further tunnel expansion for Elon Musk
Elon Musk's company The Boring Company will expand the tunnel under Las Vegas. The project has also drawn criticism, however. (The Boring Company, Electric Cars)
Source: Golem
My path to becoming a Star Wars fan involved VHS tapes of Episode 1, Lego, school presentations about Star Destroyers, and overpriced collectible magazines. By Oliver Nickel (Star Wars, Disney)
Source: Golem
A once-in-a-century global health emergency accelerates worldwide healthcare innovation and novel medical breakthroughs, all supported by powerful high-performance computing (HPC) capabilities.
COVID-19 has forever changed how nations function in the globally interconnected economy. To this day, it continues to affect and shape how countries respond to health emergencies. COVID-19 has demonstrated just how interconnected our society is and how risks, threats, and contagions can have global implications for many aspects of our daily lives.
COVID-19 was the largest global health emergency in over a century, with nearly 762 million cases reported as of the end of March 2023, according to the World Health Organization. The National Center for Biotechnology Information points out the frequency and breadth of new variants that continue to emerge at regular intervals. In response to this intricate health crisis, the global healthcare community quickly mobilized to better understand the virus, learn its behavior, and work toward preventative treatment measures to minimize the damage to lives across the world. Globally, nations mobilized resources for frontline workers, offered social protection to those most severely affected, and provided vaccine access for the billions who needed it.
Recent technological innovations have given the medical community access to capabilities such as HPC, equipping healthcare professionals to better study, understand, and respond to COVID-19. Globally, healthcare innovators could access unprecedented computing power to design, test, and develop new treatments faster, better, and more iteratively than ever before.
Today, Azure HPC enables researchers to unleash the next generation of healthcare breakthroughs. For example, the computational capabilities offered by the Azure HPC HB-series virtual machines, powered by AMD EPYC™ CPU cores, allowed researchers to accelerate insights into genomics, precision medicine, and clinical trials with virtually unlimited high-performance bioinformatics infrastructure.
Since the beginning of COVID-19, companies have been leveraging Azure HPC to develop new treatments, run simulations, and test at scale—all in preparation for the next health emergency. Azure HPC is helping companies usher in the next generation of treatments and healthcare capabilities across the entire industry.
High-performance computing making a difference
A leading immunotherapy company partnered with Microsoft to leverage Azure HPC to perform detailed computational analyses of the spike protein structure of SARS-CoV-2. Because the spike protein plays a critical role in allowing the virus to invade human cells, targeting it for study, analysis, and insight is a crucial step in the development of treatments to combat the virus.
The company’s engineers and scientists collaborated with Microsoft and quickly deployed HPC clusters on Azure containing more than 1,250 graphics processing unit (GPU) cores. These GPUs are specifically designed for machine learning and similarly intense computational applications. The Azure HPC clusters augmented the company’s existing GPU clusters—which were already optimized for molecular modelling of proteins, antibodies, and antivirals—bringing a truly high-powered, scaled engagement approach to fruition.
By collaborating with Microsoft in this way and making use of the massive, networked computing capabilities and advanced algorithms enabled by Azure HPC, the company was able to generate working models in days rather than the months it would have taken by following traditional approaches.
This incredible amount of computing power will help bolster drug discovery and therapeutic development. By bringing together the power of Azure HPC and cutting-edge immunotherapies, the collaboration contributed to models that allowed researchers to better understand the virus, find novel binding sites to fight it, and ultimately guide the development of future treatments and vaccines.
Powering pharmaceutical research and innovation
The healthcare industry is making remarkable strides in the development of cutting-edge treatments and innovations that are geared towards solving some of the world's greatest healthcare challenges.
For example, researchers are leveraging HPC to transform their research and development efforts and accelerate the development of new life-saving treatments.
Using a technique that produces amorphous solid dispersions (ASDs), drug researchers break up active pharmaceutical ingredients and blend them with organic polymers to improve the dissolution rate, bioavailability, and solubility of drug delivery systems. Although a wonder of modern medicine, it is a highly complicated, often lab-based process that can take months.
Swiss-based Molecular Modelling Laboratory (MML), a leader in ASD screening, wanted to pivot its drug research and development to small organic and biomolecular polymers. This approach determines ASD stability prior to formulation, reveals new ASD combinations, enhances drug safety, and helps reduce drug development costs as well as delivery times.
MML chose to leverage Azure HPC resources on more than 18,000 Azure HBv2 virtual machines to optimize high-throughput drug screening and active pharmaceutical ingredient solubility limit detection, with the aim of alleviating common development hurdles.
The adoption of Azure HPC has helped MML shift from a small start-up to an established business working with some of the top pharmaceutical companies in the world—all in a very short time.
For the global healthcare community, the computational power and scalability of Azure HPC present an unprecedented opportunity to accelerate pharmaceutical, medical, and health innovation. Azure HPC will continue to play a leading role in helping the healthcare industry respond optimally to any future global health emergency.
Next steps
To request a demo, contact HPCdemo@microsoft.com.
Learn more about Azure HPC.
High-performance computing documentation.
View our HPC cloud journey infographic.
Source: Azure
Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where it’s being spent, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Cost Management comes in.
We're always looking for ways to learn more about your challenges and how Microsoft Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:
FinOps Foundation announces a new specification project to demystify cloud billing data.
Centrally managed Azure Hybrid Benefit for SQL Server is generally available.
Scheduled alerts in Azure Government.
Register for Securely Migrate and Optimize with Azure.
Register for Optimize your IT costs with Azure Monitor.
Cut costs with AI-powered productivity in Microsoft Teams.
3 ways to reduce costs with Microsoft Teams Phone.
What's new in Cost Management Labs.
New ways to save money with Microsoft Cloud.
New videos and learning opportunities.
Documentation updates.
Let's dig into the details.
FinOps Foundation announced a new specification project to demystify cloud billing data
Microsoft partnered with FinOps Foundation and Google to launch FOCUS (FinOps Open Cost and Usage Specification), a technical project to build and maintain an open specification for cloud cost data. As one of the key contributors and principal steering committee members for this project, we’re incredibly excited about the potential value this will bring for organizations of all sizes.
Some of the benefits you’ll see include the ability to:
Better understand how you're being charged across services and, especially, across cloud providers.
Reduce data ingestion and normalization requirements.
Streamline reporting and monitoring efforts, like cost allocation and showback.
Leverage shared guidance across the industry for how to monitor and manage costs.
FOCUS will play a major role in the evolution of the FinOps Framework and its guidance as it drives more consistency in how to analyze and communicate changes in cost, including anything from measuring key performance indicators (KPIs) to managing anomalies and commitment-based discounts to tracking resource utilization and more.
To learn more, read the FinOps Foundation announcement and join us at FinOps X, where we’ll announce an initial draft release. All FOCUS steering committee members will be on-site for deeper discussions about its roadmap and implementation.
Centrally managed Azure Hybrid Benefit for SQL Server is generally available
If you’re migrating from on-premises to the cloud, Azure Hybrid Benefit should be part of your cost optimization plan. Azure Hybrid Benefit is a licensing benefit that helps customers significantly reduce the costs of running their workloads in the cloud. It works by letting customers use their on-premises licenses with active Software Assurance or subscription-enabled Windows Server and SQL Server licenses on Azure. You can also leverage active Linux subscriptions, including Red Hat Enterprise Linux or SUSE Linux Enterprise server running in Azure. Traditionally, you would track available licenses that you’re using with Azure Hybrid Benefit internally and compare that with cost reports available from Cost Management Power BI reports, which can be tedious. With centralized management, you can assign SQL Server licenses to individual subscriptions or share them across an entire billing account to let the cloud manage the licenses for you, maximizing your benefit and sustaining compliance with less effort.
Centralized management of Azure Hybrid Benefit for SQL Server is now generally available.
To learn more, see Azure Hybrid Benefit documentation.
Scheduled alerts in Azure Government
Last month, you saw the addition of scheduled alerts for built-in views in Cost analysis. This month, we’re happy to announce that scheduled alerts are now available for Azure Government. Scheduled alerts allow you to get notified on a daily, weekly, or monthly basis about changes in cost by sending a picture of a chart view in Cost analysis to a list of recipients. You can even send it to stakeholders who don’t have direct access to costs in the Azure portal. To learn more, see subscribe to scheduled alerts.
Register for Securely Migrate and Optimize with Azure
Did you know you can lower operating costs by up to 40 percent when you migrate Windows Server and SQL Server to Azure versus on-premises?1 Furthermore, you can improve IT efficiency and reduce operating costs by up to 53 percent by automating management of your virtual machines in cloud and hybrid environments. To maximize the value of your existing cloud investments, you can utilize tools like Microsoft Cost Management and Azure Advisor. A recent study showed that our customers achieve up to a 34 percent reduction in Azure spend in the first year by using Microsoft Cost Management. To learn more about how to achieve efficiency and maximize cloud value with Azure, register for Securely Migrate and Optimize with Azure, a free digital event on Wednesday, April 26, 2023, 9:00 AM to 11:00 AM Pacific Time.
To learn more, see 5 reasons to join us at Securely Migrate and Optimize with Azure.
Register for Optimize Your IT Costs with Azure Monitor
Join the Azure Monitor engineering team on May 17, 2023 from 10:00 AM to 11:00 AM Pacific Time, as they continue to listen and respond to feedback to ensure your corporate priorities are kept at the forefront!
The Azure Monitor team introduced new pricing plans that can drive costs down without compromising performance. The team has gathered key points, valuable guidance, and best practices, and will share them during this webinar.
In this webinar, you will learn:
New Azure Monitor pricing plans and the different scenarios in which they can be applied.
Other levers that you can take advantage of to optimize your monitoring costs.
No-regret moves you can implement today to start realizing cost savings.
Register for Optimize your IT costs with Azure Monitor and join us on May 17, 2023 from 10:00 AM to 11:00 AM Pacific Time.
Cut costs with AI-powered productivity in Microsoft Teams
As we face economic uncertainties and changes to work patterns, organizations are searching for ways to optimize IT investments and re-energize employees to achieve business results. Now—more than ever—organizations need solutions to adapt to change, improve productivity, and reduce costs. Fortunately, modern tools powered by AI hold the promise to boost individual, team, and organizational-level productivity and fundamentally change how we work, including intelligent recap for meetings in Microsoft Teams Premium with AI-augmented video recordings, AI-generated notes, and AI-generated tasks and action items, reusable meeting templates, and more.
To learn more, see Microsoft Teams Premium: Cut costs and add AI-powered productivity.
3 ways to reduce costs with Microsoft Teams Phone
As the way we work evolves, today’s organizations need cost-effective, reliable telephony solutions that help them support flexible work and truly bridge the gap between the physical and digital worlds. Our customers are searching for products that help them promote an inclusive working environment and streamline communications. And they need solutions that simplify their technological footprint and cut the cost of legacy IT solutions and other non-essential expenses.
After examining the potential ROI that companies may realize by implementing Teams Phone, a recent study found that businesses could:
Reduce licensing and usage costs.
Minimize the burden on IT.
Help people save time and collaborate more effectively.
To learn more, including customer quotes, see 3 ways to improve productivity and reduce costs with Microsoft Teams Phone.
What's new in Cost Management Labs
With Cost Management Labs, you get a sneak peek at what's coming in Microsoft Cost Management and can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:
New: Settings in the cost analysis preview—Enabled by default in Labs.
Get quick access to cost-impacting settings from the Cost analysis preview. You will see this by default in Labs and can enable the option from the Try preview menu.
Update: Customers view for Cloud Solution Provider (CSP) partners—Now enabled by default in Labs.
View a breakdown of costs by customer and subscription in the Cost analysis preview. Note this view is only available for CSP billing accounts and billing profiles. You will see this by default in Labs and can enable the option from the Try preview menu.
Merge cost analysis menu items.
Only show one cost analysis item in the Cost Management menu. All classic and saved views are one click away, making them easier than ever to find and access. You can enable this option from the Try preview menu.
Recommendations view.
View a summary of cost recommendations that help you optimize your Azure resources in the cost analysis preview. You can opt in using the Try preview menu.
Forecast in the cost analysis preview.
Show your forecast cost for the period at the top of the cost analysis preview. You can opt in using Try preview.
Group related resources in the cost analysis preview.
Group related resources, like disks under virtual machines or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID.
Charts in the cost analysis preview.
View your daily or monthly cost over time in the cost analysis preview. You can opt in using the Try preview menu.
View cost for your resources.
The cost for your resources is one click away from the resource overview in the preview portal. Just click View cost to quickly jump to the cost of that resource.
Change scope from the menu.
Change scope from the menu for quicker navigation. You can opt in using the Try preview menu.
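To make the "group related resources" convention above concrete, here is a minimal sketch of how the tag is formed; the resource IDs are made-up placeholders, not values from any real subscription:

```python
def parent_tag(parent_resource_id: str) -> dict:
    """Build the tag that groups a child resource under its parent in cost analysis.

    The key name "cm-resource-parent" is fixed; the value is the parent's
    full Azure resource ID.
    """
    return {"cm-resource-parent": parent_resource_id}


# Example: group a data disk under its VM (IDs below are placeholders).
vm_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/demo-rg/providers/Microsoft.Compute/virtualMachines/demo-vm"
)
disk_tags = parent_tag(vm_id)
```

You would apply a tag like `disk_tags` to each child resource through whatever tagging workflow you already use (portal, CLI, or ARM templates).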
Of course, that's not all. Every change in Microsoft Cost Management is available in Cost Management Labs a week before it's in the full Azure portal or Microsoft 365 admin center. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.
New ways to save money in the Microsoft Cloud
Lots of cost optimization improvements over the last month! Here are 10 general availability offers you might be interested in:
Azure Kubernetes Service introduces new Free and Standard pricing tiers.
Spot priority mix for Virtual Machine Scale Sets (VMSS).
More transactions at no additional cost for Azure Standard SSD storage.
Arm-based VMs now available in four additional Azure regions.
New General-Purpose VMs—Dlsv5 and Dldsv5.
Azure Cosmos DB for PostgreSQL cluster compute start and stop.
New burstable SKUs for Azure Database for PostgreSQL—Flexible Server.
Azure Database for PostgreSQL—Flexible Server in Australia Central.
App Configuration geo-replication.
And six new preview offers:
New Memory Optimized VM sizes—E96bsv5 and E112ibsv5.
Azure HX series and HBv4 series virtual machines.
Azure Container Apps offers new plan and pricing structure.
Read-write premium caching for Azure HPC Cache.
In-place scaling for enterprise caches in Azure Cache for Redis.
Azure Chaos Studio is now available in Brazil South region.
New videos and learning opportunities
Here’s one new video you might be interested in:
Optimize IT investments to maximize efficiency and reduce cloud spend (10 minutes).
Follow the Microsoft Cost Management YouTube channel to stay in the loop with new videos as they’re released and let us know what you'd like to see next.
Want a more guided experience? Start with Control Azure spending and manage bills with Microsoft Cost Management.
Documentation updates
Here are a few documentation updates you might be interested in:
New: Calculate Enterprise Agreement (EA) savings plan cost savings.
Updated: Understand usage details fields.
Updated: Group and allocate costs using tag inheritance.
Updated: Allocate Azure costs.
Updated: EA Billing administration on the Azure portal.
Updated: Create a Microsoft Customer Agreement subscription.
Updated: Change an Azure reservation directory.
Updated: Optimize Azure Synapse Analytics costs with a Pre-Purchase Plan.
22 updates based on your feedback.
Want to keep an eye on all documentation updates? Check out the Cost Management and Billing documentation change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request. You can also submit a GitHub issue. We welcome and appreciate all contributions!
What's next?
These are just a few of the big updates from last month. Don't forget to check out the previous Microsoft Cost Management updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.
Follow @MSCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. You can also share ideas and vote up others in the Cost Management feedback forum or join the research panel to participate in a future study and help shape the future of Microsoft Cost Management.
We know these are trying times for everyone. Best wishes from the Microsoft Cost Management team. Stay safe and stay healthy.
1 Forrester Consulting, "The Total Economic Impact™ of Azure Cost Management and Billing", February 2021.
Source: Azure
If you’re using a Docker-based development approach, you’re already well on your way toward creating cloud-native software. Containerizing your software ensures that you have all the system-level dependencies, language-specific requirements, and application configurations managed in a containerized way, bringing you closer to the environment in which your code will eventually run.
In complex systems, however, you may need to connect your code with several auxiliary services, such as databases, storage volumes, APIs, caching layers, message brokers, and others. In modern Kubernetes-based architectures, you also have to deal with service meshes and cloud-native deployment patterns, such as probes, configuration, and structural and behavioral patterns.
Kubernetes offers a uniform interface for orchestrating scalable, resilient, and services-based applications. However, its complexity can be overwhelming, especially for developers without extensive experience setting up Kubernetes clusters. That’s where Gefyra comes in, making it easier for developers to work with Kubernetes and improve the process of creating secure, reliable, and scalable software.
What is Gefyra?
Gefyra, named after the Greek word for “bridge,” is a comprehensive toolkit that facilitates Docker-based development with Kubernetes. If you plan to use Kubernetes as your production platform, it’s essential to work with the same environment during development. This approach ensures that you have the highest possible “dev/prod-parity,” minimizing friction when transitioning from development to production.
Gefyra is an open source project that provides “docker run on steroids.” It allows you to connect your local Docker with any Kubernetes cluster and run a container locally that behaves as if it were running in the cluster. You can write code locally in your favorite code editor using the tools you love.
Additionally, Gefyra does not require you to build a container image from your code changes, push the image to a registry, or trigger a restart in the cluster. Instead, it saves you from this tedious cycle by connecting your local code right into the cluster without any changes to your existing Dockerfile. This approach is useful not only for new code but also when introspecting existing code with a debugger that you can attach to a running container. That makes Gefyra a productivity superstar for any Kubernetes-based development work.
How does Gefyra work?
Gefyra installs several cluster-side components that enable it to control the local development machine and the development cluster. These components include a tunnel between the local development machine and the Kubernetes cluster, a local DNS resolver that behaves like the cluster DNS, and sophisticated IP routing mechanisms. Gefyra uses popular open source technologies, such as Docker, WireGuard, CoreDNS, Nginx, and Rsync, to build on top of these components.
The local development setup involves running a container instance of the application on the developer machine, with a sidecar container called Cargo that acts as a network gateway and provides a CoreDNS server that forwards all requests to the cluster (Figure 1). Cargo encrypts all the passing traffic with WireGuard using ad hoc connection secrets. Developers can use their existing tooling, including their favorite code editor and debuggers, to develop their applications.
Figure 1: Local development setup.
Gefyra manages two ends of a WireGuard connection and automatically establishes a VPN tunnel between the developer and the cluster, making the connection robust and fast without stressing the Kubernetes API server (Figure 2). Additionally, the client side of Gefyra manages a local Docker network with a VPN endpoint, allowing the container to join the VPN that directs all traffic into the cluster.
Figure 2: Connecting developer machine and cluster.
Gefyra also allows bridging existing traffic from the cluster to the local container, enabling developers to test their code with real-world requests from the cluster and collaborate on changes in a team. The local container instance remains connected to auxiliary services and resources in the cluster while receiving requests from other Pods, Services, or the Ingress. This setup eliminates the need for building container images in a continuous integration pipeline and rolling out a cluster update for simple changes.
Why run Gefyra as a Docker Extension?
Gefyra’s core functionality is contained in a Python library available in its repository. The CLI that comes with the project has a long list of arguments that may be overwhelming for some users. To make it more accessible, the Gefyra team developed a Docker Desktop extension that is easy for developers to use without having to delve into the intricacies of Gefyra.
The Gefyra extension for Docker Desktop enables developers to work with a variety of Kubernetes clusters, including the built-in Kubernetes cluster, local providers such as Minikube, K3d, or Kind, Getdeck Beiboot, or any remote clusters. Let’s get started.
Installing the Gefyra Docker Desktop extension
Prerequisites: Docker Desktop 4.8 or later.
Step 1: Initial setup
In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.) In Settings | Extensions, select the Enable Docker Extensions box (Figure 3).
Figure 3: Enable Docker Extensions.
You must also enable Kubernetes under Settings (Figure 4).
Figure 4: Enable Kubernetes.
Gefyra is in the Docker Extensions Marketplace. In the following instructions, we’ll install Gefyra in Docker Desktop.
Step 2: Add the Gefyra extension
Open Docker Desktop and select Add Extensions to find the Gefyra extension in the Extensions Marketplace (Figure 5).
Figure 5: Locate Gefyra in the Docker Extensions Marketplace.
Once Gefyra is installed, you can open the extension and find the start screen of Gefyra, which lists all containers that are connected to a Kubernetes cluster. Of course, this section is empty on a fresh install. To launch a local container with Gefyra, just like with Docker, click the Run Container button at the top right (Figure 6).
Figure 6: Gefyra start screen.
The next steps will vary based on whether you’re working with a local or remote Kubernetes cluster. If you’re using a local cluster, simply select the matching kubeconfig file and optionally set the context (Figure 7).
For remote clusters, you may need to manually specify additional parameters. Don’t worry if you’re unsure how to do this, as the next section will provide a detailed example for you to follow along with.
Figure 7: Selecting Kubeconfig.
The Kubernetes demo workloads
The following example showcases how Gefyra leverages the Kubernetes functionality included in Docker Desktop to create a development environment for a simple application that consists of two services — a backend and a frontend (Figure 8).
Both services are implemented as Python processes, and the frontend service uses a color property obtained from the backend to generate an HTML document. Communication between the two services is established via HTTP, with the backend address being passed to the frontend as an environment variable.
Figure 8: Frontend and backend services.
The Gefyra team has created a repository for the Kubernetes demo workloads, which can be found on GitHub.
If you prefer to watch a video explaining what’s covered in this tutorial, check out this video on YouTube.
Prerequisite
Ensure that the current Kubernetes context is switched to docker-desktop. This step allows you to interact with the Kubernetes cluster and deploy applications to it using kubectl. If the command below prints a different context, switch with kubectl config use-context docker-desktop.
kubectl config current-context
docker-desktop
Clone the repository
The next step is to clone the repository:
git clone https://github.com/gefyrahq/gefyra-demos
Applying the workload
The following YAML file sets up a simple two-tier app consisting of a backend service and a frontend service, with communication between the two established via the SVC_URL environment variable passed to the frontend container.
It defines two pods, named backend and frontend, and two services, named backend and frontend, respectively. The backend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-backend image on port 5002. The frontend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-frontend image on port 5003. The frontend container also includes an environment variable named SVC_URL, which is set to the value backend.default.svc.cluster.local:5002.
The backend service is defined to select the backend pod using the app: backend label, and expose port 5002. The frontend service is defined to select the frontend pod using the app: frontend label, and expose port 80 as a load balancer, which routes traffic to port 5003 of the frontend container.
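Based on that description, the manifest looks roughly like this. This is a condensed sketch reconstructed from the text above, not the exact contents of manifests/demo.yaml in the gefyra-demos repository:

```yaml
# Sketch of the demo workload: two Pods and two Services, as described above.
apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
    - name: backend
      image: quay.io/gefyra/gefyra-demo-backend
      ports:
        - containerPort: 5002
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: quay.io/gefyra/gefyra-demo-frontend
      ports:
        - containerPort: 5003
      env:
        # The frontend learns the backend address from this variable.
        - name: SVC_URL
          value: backend.default.svc.cluster.local:5002
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 5002
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  # LoadBalancer on port 80 routes to the frontend container's port 5003.
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 5003
```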
/gefyra-demos/kcd-munich> kubectl apply -f manifests/demo.yaml
pod/backend created
pod/frontend created
service/backend created
service/frontend created
Let’s watch the workload getting ready:
kubectl get pods
NAME READY STATUS RESTARTS AGE
backend 1/1 Running 0 2m6s
frontend 1/1 Running 0 2m6s
After ensuring that the backend and frontend pods have finished initializing (check for the READY column in the output), you can access the application by navigating to http://localhost in your web browser. This URL is served from the Kubernetes environment of Docker Desktop.
Upon loading the page, you will see the application’s output displayed in your browser. Although the output may not be visually stunning, it is functional and serves the purposes of this demo.
Now, let’s explore how we can correct or adjust the color of the output generated by the frontend component.
Using Gefyra “Run Container” with the frontend process
In the first part of this section, you will see how to execute a frontend process on your local machine that is associated with a resource based on the Kubernetes cluster: the backend API. This can be anything ranging from a database to a message broker or any other service utilized in the architecture.
Kick off a local container with Run Container from the Gefyra start screen (Figure 9).
Figure 9: Run a local container.
Once you’ve entered the first step of this process, you will find the kubeconfig and context to be set automatically. That’s a lifesaver if you don’t know where to find the default kubeconfig on your host.
Just hit the Next button and proceed with the container settings (Figure 10).
Figure 10: Container settings.
In the Container Settings step, you can configure the Kubernetes-related parameters for your local container. In this example, everything happens in the default Kubernetes namespace. Select it in the first drop-down input (Figure 11).
In the drop-down input below Image, you can specify the image to run locally. Note that it lists all images that are being used in the selected namespace (from the Namespace selector). Isn’t that convenient? You don’t need to worry about the images being used in the cluster or find them yourself. Instead, you get a suggestion to work with the image at hand, as we want to do in this example (Figure 12). You could still specify any arbitrary image if you like, for example, a completely new image you just built on your machine.
Figure 11: Select namespace and workload.
Figure 12: Select image to run.
To copy the environment of the frontend container running in the cluster, you will need to select pod/frontend from the Copy Environment From selector (Figure 13). This step is important because you need the backend service address, which is passed to the pod in the cluster using an environment variable.
Finally, for the upper part of the container settings, you need to overwrite the following run command of the container image to enable code reloading:
poetry run flask --app app --debug run --port 5002 --host 0.0.0.0
Figure 13: Copy environment of frontend container.
Let’s start the container process on port 5002 and expose this port on the local machine. In addition, let’s mount the code directory (/gefyra-demos/kcd-munich/frontend) to make code changes immediately visible. That’s it for now. A click on the Run button starts the process.
Figure 14: Installing Gefyra components.
It takes a few seconds to install Gefyra’s cluster-side components, prepare the local networking part, and pull the container image to start locally (Figure 14). Once this is ready, you will get redirected to the native container view of Docker Desktop from this container (Figure 15).
Figure 15: Log view.
You can look around in the container using the Terminal tab (Figure 16). Type in the env command in the shell, and you will see all the environment variables coming with Kubernetes.
Figure 16: Terminal view.
We’re particularly interested in the SVC_URL variable that points the frontend to the backend process, which is, of course, still running in the cluster. Now, when browsing to the URL http://localhost:5002, you will get a slightly different output:
Why is that? Let’s look at the code that we already mounted into the local container, specifically the app.py that runs a Flask server (Figure 17).
Figure 17: App.py code.
The last line of the code in the Gefyra example displays the text Hello KCD!, and any changes made to this code are immediately updated in the local container. This feature is noteworthy because developers can freely modify the code and see the changes reflected in real-time without having to rebuild or redeploy the container.
Line 12 of the code in the Gefyra example sends a request to a service URL, which is stored in the variable SVC. The value of SVC is read from an environment variable named SVC_URL, which is copied from the pod in the Kubernetes cluster. The URL, backend.default.svc.cluster.local:5002, is a fully qualified domain name (FQDN) that points to a Kubernetes service object and a port.
These URLs are commonly used by applications in Kubernetes to communicate with each other. The local container process is capable of sending requests to services running in Kubernetes using the native connection parameters, without the need for developers to make any changes, which may seem like magic at times.
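To make this concrete, here is a minimal sketch of what such a frontend app.py could look like. This is a hypothetical reconstruction, not the demo’s actual source: the route and HTML markup are assumptions, and the standard-library urllib is used as a stand-in for whatever HTTP client the real app uses. Only the SVC_URL variable, the backend FQDN, and the “Hello KCD!” text come from the walkthrough.

```python
import json
import os
from urllib.request import urlopen

from flask import Flask

app = Flask(__name__)

# Gefyra copies SVC_URL from the pod's environment, e.g.
# "backend.default.svc.cluster.local:5002" inside the cluster.
SVC = os.environ.get("SVC_URL", "localhost:5002")

@app.route("/")
def index():
    # Ask the backend API which color to use for the heading.
    with urlopen(f"http://{SVC}/color") as resp:
        color = json.load(resp)["color"]
    return f'<h1 style="color: {color};">Hello KCD!</h1>'
```

Because SVC is resolved from the environment, the same code works unchanged in the cluster and, via Gefyra’s networking, in a local container.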
In most development scenarios, the capabilities of Gefyra we just discussed are sufficient. In other words, you can use Gefyra to run a local container that can communicate with resources in the Kubernetes cluster, and you can access the app on a local port. However, what if you need to modify the backend while the frontend is still running in Kubernetes? This is where the “bridge” feature of Gefyra comes in, which we will explore next.
Gefyra “bridge” with the backend process
We could choose to run the frontend process locally and connect it to the backend process running in Kubernetes through a bridge. However, this approach may not always be necessary or desirable, especially for backend developers who may not be interested in the frontend. In this case, it may be more convenient to leave the frontend running in the cluster and stop the local instance by selecting the stop button in Docker Desktop’s container view.
First of all, we have to run a local instance of the backend service. It’s the same as with the frontend, but this time with the backend container image (Figure 18).
Figure 18: Running a backend container image.
As in the frontend example above, you can run the backend container image (quay.io/gefyra/gefyra-demo-backend:latest), which is suggested by the drop-down selector. This time we need to copy the environment from the backend pod running in Kubernetes. Note that the volume mount must now point to the code of the backend service for code reloading to work.
After starting the container, you can check http://localhost:5002/color, which serves the backend API response. Looking at the app.py of the backend service shows the source of this response. In line 8, this app returns a JSON response with the color property set to green (Figure 19).
Figure 19: Checking the color.
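The backend side can be sketched just as briefly. Again, this is a hedged reconstruction assuming Flask, not the demo’s verbatim source; only the /color endpoint and the green color property are taken from the walkthrough.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/color")
def color():
    # The frontend reads this property to style its heading; changing
    # the value here is immediately reflected through code reloading.
    return jsonify({"color": "green"})
```

Editing the string "green" in your IDE and saving the file is all it takes to change the API response of the locally running backend.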
At this point, keep in mind that we’re only running a local instance of the backend service. This time, a connection to a Kubernetes-based resource is not needed as this container runs without any external dependency.
The idea is to make the frontend process that serves from the Kubernetes cluster on http://localhost (still blue) pick up our backend information to render its output. That’s done using Gefyra’s bridge feature. In the next step, we will overlay the backend process running in the cluster with our local container instance so that the local code becomes effective in the cluster.
Getting back to the Gefyra container list on the start screen, you can find the Bridge column on each locally running container (Figure 20). Once you click this button, you can create a bridge of your local container into the cluster.
Figure 20: The Bridge column is visible on the far right.
In the next dialog, we need to enter the bridge configuration (Figure 21).
Figure 21: Enter the bridge configuration.
Let’s set the “Target” for the bridge to the backend pod, which is currently serving the frontend process in the cluster, and set a timeout for the bridge to 60 seconds. We also need to map the port of the proxy running in the cluster with the local instance.
If your local container is configured to listen on a different port from the cluster, you can specify the mapping here (Figure 22). In this example, the service runs on port 5002 both in the cluster and on the local machine, so we need to map that port. After clicking the Bridge button, it takes a few seconds to return to the container list on Gefyra’s start view.
Figure 22: Specify port mapping.
Observe the change in the icon of the Bridge button, which now depicts a stop symbol (Figure 23). This means the bridge function is now operational and can be terminated by simply clicking this button again.
Figure 23: The Bridge column showing a stop symbol.
At this point, the local code is able to handle requests from the frontend process in the cluster by using the URL stored in the SVC_URL variable, without making any changes to the frontend process itself. To confirm this, you can open http://localhost in your browser (which is served from the Kubernetes environment of Docker Desktop) and check that the output is now green. This is because the local code returns the value green for the color property. You can change this value to any valid color in your IDE, and the change will be immediately reflected in the cluster. This is the amazing power of this tool.
Remember to release the bridge of your container once you are finished making changes to your backend. This resets the cluster to its original state, and the frontend will display the original “beautiful” blue H1 again. With this approach, we intercepted a container running in Kubernetes with our local code and released the intercept afterwards, all without modifying the Kubernetes cluster itself.
Conclusion
Gefyra is an easy-to-use Docker Desktop extension that connects with Kubernetes to improve development workflows and team collaboration. It lets you run containers as usual while being connected with Kubernetes, thereby saving time and ensuring high dev/prod parity.
The Blueshoe development team would appreciate a star on GitHub and welcomes you to join their Discord community for more information.
About the Author
Michael Schilonka is a strong believer that Kubernetes can be a software development platform, too. He is the co-founder and managing director of the Munich-based agency Blueshoe and the technical lead of Gefyra and Getdeck. He talks about Kubernetes in general and how Blueshoe uses Kubernetes for development. Follow him on LinkedIn to stay connected.
Source: https://blog.docker.com/feed/