Google Cloud Deploy gets continuous delivery productivity enhancements

Since Google Cloud Deploy became generally available in January 2022, we’ve remained focused on our core mission: making it easier to establish and operate software continuous delivery to a Google Kubernetes Engine environment. Through ongoing conversations with developers, DevOps engineers, and business decision makers alike, we’ve received feedback about onboarding speed, delivery pipeline management, and expanding enterprise features. Today, we are pleased to introduce numerous feature additions to Google Cloud Deploy in these areas.

Faster onboarding

Skaffold is an open source tool that orchestrates continuous development, continuous integration (CI), and continuous delivery (CD), and it’s integral to Google Cloud Deploy. Through Skaffold and Google Cloud Deploy, the local application development loop is seamlessly connected to a continuous delivery capability, bringing consistency to your end-to-end software delivery lifecycle tooling.

This may be the first time your team is using Skaffold. To help, Google Cloud Deploy can now generate a Skaffold configuration for single-manifest applications when one is not present. When you create a release, the new 'gcloud deploy releases create … --from-k8s-manifest' command takes an application manifest you provide and generates a Skaffold configuration from it. This lets your application development teams and continuous delivery operators familiarize themselves with Google Cloud Deploy, reducing early-stage configuration and learning friction as they establish their continuous delivery capabilities. When you use this option, you can review the generated Skaffold configuration, and as your comfort with Skaffold and Google Cloud Deploy increases, you can develop your own Skaffold configurations tailored to your specific delivery pipeline needs.
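For example, creating a release from a single manifest might look like this (the release, pipeline, region, and file names here are placeholders, not values from the announcement):

$ gcloud deploy releases create my-release-001 \
    --delivery-pipeline=my-pipeline \
    --region=us-central1 \
    --from-k8s-manifest=manifest.yaml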
Delivery pipeline management

Continuous delivery pipelines are always in use. New releases navigate a progression sequence as they make their way out to the production target. The journey, however, isn’t always smooth, and you may need to manage your delivery pipeline and related resources more discretely.

With the addition of delivery pipeline suspension, you can now temporarily pause a problematic delivery pipeline to restrict all release and rollout activity. By pausing the activity, you can undertake an investigation to identify problems and their root cause.

Sometimes it isn’t the delivery pipeline that has a problem, but rather a release. Through release abandonment, you can prevent application releases that have a feature defect, outdated library, or other identified issue from being deployed further. Release abandonment ensures an undesired release won’t be used again, while keeping it available for issue review and troubleshooting.

[Figure: A suspended delivery pipeline and abandoned releases]

When reviewing or troubleshooting release application manifest issues, you may want to compare application manifests between releases and target environments to determine when an application configuration changed and why. But comparing application manifests can be hard, requiring you to use the command line to locate and diff multiple files. To help, Google Cloud Deploy now has a Release inspector, which makes it easy to review application manifests and compare them across releases and targets within a delivery pipeline.

[Figure: Reviewing and comparing application manifests with the Release inspector]

Rollout listings within the Google Cloud Deploy console have, to date, been limited to a specific release or target. A complete delivery pipeline rollout listing (and filtering) has been a standing request, and you can now find it on the delivery pipeline details page.

[Figure: Delivery pipeline details, now with a complete rollout listing]

Finally, execution environments are an important part of configuring custom render and deploy environments. In addition to the ability to specify custom worker pools, Cloud Storage buckets, and service accounts, we’ve added an execution timeout to better support long-running deployments.

Expanded enterprise features

Enterprise environments frequently have numerous requirements to be able to operate, such as security controls, logging, Terraform support, and regional availability.

In a previous blog post, we announced support for VPC Service Controls (VPC-SC) in Preview. We are pleased to announce that Google Cloud Deploy support for VPC-SC is now generally available. We’ve also documented how you can configure customer-managed encryption keys (CMEK) with services that depend on Google Cloud Deploy.

There are also times when reviewing manifest-render and application deployment logs may not be sufficient for troubleshooting. For these situations, we’ve added Google Cloud Deploy service platform logs, which may provide additional details towards issue resolution.

Terraform plays an important role in deploying Google Cloud resources. You can now deploy Google Cloud Deploy delivery pipeline and target resources using the Google Cloud Platform Terraform provider. With this, you can deploy Google Cloud Deploy resources as part of a broader Google Cloud Platform resource deployment.

Regional availability is important for businesses that need a regional service presence. Google Cloud Deploy is now available in an additional nine regions, bringing the total number of Google Cloud Deploy worldwide regions to 15.

The future

Comprehensive, easy-to-use, and cost-effective DevOps tools are key to building an efficient software delivery capability, and it’s our hope that Google Cloud Deploy will help you implement complete CI/CD pipelines. And we’re just getting started. Stay tuned as we introduce exciting new capabilities and features to Google Cloud Deploy in the months to come. In the meantime, check out the product page, documentation, quickstart, and tutorials. Finally, if you have feedback on Google Cloud Deploy, you can join the conversation. We look forward to hearing from you.
Source: Google Cloud Platform

How to Use the Apache httpd Docker Official Image

Deploying and spinning up a functional server is key to distributing web-based applications to users. The Apache HTTP Server Project has long made this possible. However, despite Apache Server’s popularity, users can face some hurdles with configuration and deployment.
Thankfully, Apache and Docker containers can work together to streamline this process — saving you time while reducing complexity. You can package your application code and configurations together into one cross-platform unit. The Apache httpd Docker Official Image helps you containerize a web-server application that works across browsers, OSes, and CPU architectures.
In this guide, we’ll cover Apache HTTP Server (httpd), the httpd Docker Official Image, and how to use each. You’ll also learn some quick tips and best practices. Feel free to skip our Apache intro if you’re familiar, but we hope you’ll learn something new by following along. Let’s dive in.
In this tutorial:

What is Apache Server?
How to use the httpd Docker Official Image
How to use a Dockerfile with your image
How to use your image without a Dockerfile
Configuration and useful tips
How to unlock data encryption through SSL
Pull your first httpd Docker Official Image

What is Apache Server?
The Apache HTTP Server was created as a “commercial-grade, featureful, and freely available source code implementation of an HTTP (Web) server.” It’s equally suitable for basic applications and robust enterprise alternatives.
Like any server, Apache lets developers store and access backend resources — to ultimately serve user-facing content. HTTP web requests are central to this two-way communication. The “d” portion of the “httpd” acronym stands for “daemon.” This daemon handles and routes any incoming connection requests to the server.
Developers also leverage Apache’s modularity, which lets them add authentication, caching, SSL, and much more. This extensibility has fueled Apache HTTP Server’s continued growth. Fittingly, since Apache HTTP Server began as a series of patches to NCSA httpd, its name playfully nods to its early existence as “a patchy web server.”
Some Apache HTTP Server fun facts:

Apache debuted in 1995 and is still widely used.
It’s modeled after NCSA httpd v1.3.
Apache currently serves roughly 47% of all sites with a known web server.

Httpd vs. Other Server Technologies
If you’re experienced with Apache HTTP Server and looking to containerize your application, the Apache httpd Docker Official Image is a good starting point. You may also want to look at NGINX Server, PHP, or Apache Tomcat depending on your use case.
As a note, HTTP Server differs from Apache Tomcat — another Apache server technology. Apache HTTP Server is written in C while Tomcat is Java based. Tomcat is a Java servlet container dedicated to running Java code. It also helps developers create application pages via JavaServer Pages.
What is the httpd Docker Official Image?
We maintain the httpd Docker Official Image in tandem with the Docker community. Developers can use httpd to quickly and easily spin up a containerized Apache web server application. Out of the box, httpd contains Apache HTTP Server’s default configuration.
Why use the Apache httpd Docker Official Image? Here are some core use cases:

Creating an HTML server, as mentioned, to serve static web pages to users
Forming secure server HTTPS connections, via SSL, using Apache’s modules
Using an existing complex configuration file
Leveraging advanced modules like mod_perl, which this GitHub project outlines

While these use cases aren’t specific to the httpd Official Image, it’s easy to include these external configurations within your own image. We’ll explore this process and outline how to use your first Apache container image now.
For use cases such as mod_php, a dedicated image such as the PHP Docker Official Image is probably a better fit.
How to use the httpd Docker Official Image
Before proceeding, you’ll want to download and install Docker Desktop. While we’ll still use the CLI during this tutorial, the built-in Docker Dashboard gives you an easy-to-use UI for managing your images and containers. It’s easy to start, pause, remove, and inspect running containers with the click of a button. Have Desktop running and open before moving on.
The quickest way to leverage the httpd Official Image is to visit Docker Hub, copy the docker pull httpd command into your terminal, and run it. This downloads the image’s layers and automatically adds the image to Docker Desktop.
Here are the key steps we took to verify that httpd is working correctly (a CLI equivalent follows the list below):

We pulled our httpd image using the docker pull httpd command.
We found our image in Docker Desktop in the Images pane, chose “Run,” and expanded the Optional settings pane. We named our container so it’s easy to find, and entered 8080 as the host port before clicking “Run” again.
Desktop took us directly into the Containers pane, where our named container, TestApache, was running as expected.
We visited `http://localhost:8080` in our browser to test our basic setup.
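
If you prefer to stay in the terminal, a rough CLI equivalent of those steps (reusing the TestApache container name from above) looks like this:

$ docker pull httpd
$ docker run -d --name TestApache -p 8080:80 httpd
$ curl http://localhost:8080/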

This example automatically grabs the :latest version of httpd. We recommend specifying a numbered version or a tag with greater specificity, since these :latest versions can introduce breaking changes. It can be challenging to monitor these changes and test them effectively before moving into production.
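For instance, pinning to the 2.4 line — the same tag this guide uses later — keeps pulls predictable:

$ docker pull httpd:2.4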
That’s a great test case, but what if you want to build something a little more customized? This is where a Dockerfile comes in handy.
How to use a Dockerfile with your image
Though less common than other workflows, using a Dockerfile with the httpd Docker Official Image is helpful for defining custom configurations.
Your Dockerfile is a plain text file that instructs Docker on how to build your image. While building your image manually, this file lets you create configurations and useful image layers — beyond what the default httpd image includes.
Running an HTML server is a common workflow with the httpd Docker Official Image. Add your Dockerfile next to a directory that contains your project’s complete HTML; we’ll call that directory public-html in this example:

FROM httpd:2.4

COPY ./public-html/ /usr/local/apache2/htdocs/

 
The FROM instruction tells our builder to use httpd:2.4 as our base image. The COPY instruction copies new files or directories from our specified source, and adds them to the filesystem at a certain location. This setup is pretty bare bones, yet still lets you create a functional Apache HTTP Server image!
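If you don’t have site content handy, a minimal test page is enough to exercise the build (the content here is a hypothetical placeholder; any HTML works):

$ mkdir -p public-html
$ echo '<h1>Hello from httpd</h1>' > public-html/index.html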
Next, you’ll need to both build and run this new image to see it in action. Run the following two commands in sequence:

$ docker build -t my-apache2 .

$ docker run -d --name my-running-app -p 8080:80 my-apache2

 
First, docker build will create your image from your earlier Dockerfile. The docker run command takes this image and starts a container from it. This container is running in detached mode, or in the background. If you wanted to take a step further and open a shell within that running container, you’d enter a third command: docker exec -ti my-running-app sh. However, that’s not necessary for this example.
Finally, visit http://localhost:8080 in your browser to confirm that everything is running properly.
How to use your image without a Dockerfile
Sometimes you neither need nor want a Dockerfile for your image builds. Compared to using a Dockerfile, this is the more common approach most developers take. It also requires just a couple of commands.
That said, enter the following commands to run your Apache HTTP Server container:
Mac:

$ docker run -d --name my-apache-app -p 8080:80 -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4

 
Linux:

$ docker run -d --name my-apache-app -p 8080:80 -v $(pwd):/usr/local/apache2/htdocs/ httpd:2.4

 
Windows:

$ docker run -d --name my-apache-app -p 8080:80 -v "$pwd":/usr/local/apache2/htdocs/ httpd:2.4

 
Note: For most Linux users, the Mac version of this command works, but the Linux version is safest for those running compatible shells. While Windows users running Docker Desktop will have bash available, "$pwd" is needed for PowerShell.
The -v flag bind mounts your project directory into the container; $PWD (or its OS-specific variant) expands to your current working directory. This lets your container access your filesystem and grab what it needs to run. You’re still connecting host port 8080 to container port 80/tcp, just like we did earlier within Docker Desktop, and running your Apache container in the background.
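Because the directory is bind mounted rather than copied into the image, edits show up immediately without a rebuild. As a quick check, assuming an index.html exists in the directory you launched from:

$ echo '<h1>Live edit</h1>' > index.html
$ curl http://localhost:8080/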
Configuration and useful tips
Customizing your Apache HTTP Server configuration is possible with two quick steps. First, enter the following command to grab the default configuration upstream:

$ docker run --rm httpd:2.4 cat /usr/local/apache2/conf/httpd.conf > my-httpd.conf

Second, return to your Dockerfile and COPY in your custom configuration from the required directory:

FROM httpd:2.4

COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf

That’s it! You’ve now dropped your Apache HTTP Server configurations into place. This might include changes to any modules and any functional additions to help your server run.
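Before rebuilding, you can also have Apache syntax-check your edited file against the stock image. This is a sketch rather than an official workflow: it mounts the file read-only and uses httpd’s -t (test configuration) flag:

$ docker run --rm -v "$PWD/my-httpd.conf":/usr/local/apache2/conf/httpd.conf:ro httpd:2.4 httpd -t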
How to unlock data encryption through SSL
Apache forms connections over HTTP by default. This is fine for smaller projects, test cases, and server setups where security isn’t important. However, larger applications and especially those that transport sensitive data — like enterprise apps — may require encryption. HTTPS is the standard that all web traffic should use given its default encryption.
This is possible natively through Apache using the mod_ssl encryption module. In a Docker context, running web traffic over SSL means using the COPY instruction to add your server.crt and server.key into your /usr/local/apache2/conf/ directory.
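A sketch of that Dockerfile, assuming the certificate and key files sit next to it and that my-httpd.conf has Apache’s SSL module and configuration enabled:

FROM httpd:2.4

COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf

COPY ./server.crt /usr/local/apache2/conf/server.crt

COPY ./server.key /usr/local/apache2/conf/server.key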
This is a condensed version of this process, and more steps are needed to get SSL up and running. Check out our Docker Hub documentation under the “SSL/HTTPS” section for a complete list of approachable steps. Crucially, SSL uses port 443 instead of port 80, the latter of which is normally reserved for unencrypted data.
Pull your first httpd Docker Official Image
We’ve demoed how to successfully use the httpd Docker Official Image to containerize and run Apache HTTP Server. This is great for serving web pages and powering various web applications, secure or otherwise. Using this image lets you deploy cross-platform and cross-browser without encountering hiccups.
Combining Apache with Docker also preserves much of the customizability and functionality developers expect from Apache HTTP Server. To quickly start experimenting, head over to Docker Hub and pull your first httpd container image.

Further reading:

The httpd GitHub Repository
Awesome Compose: A sample PHP application using Apache2

Source: https://blog.docker.com/feed/

AWS Data Exchange raises the asset size limit to 100 GB

Third-party data providers on AWS Data Exchange can now import Amazon S3 assets up to 100 GB in size, an increase over the previous limit of 10 GB. The larger asset size enables new use cases in areas such as healthcare, life sciences, financial services, and retail, as providers can now license genomic data, high-volume financial data, and satellite imagery, which are often stored as assets larger than 10 GB.
Source: aws.amazon.com

Amazon Comprehend lowers the annotation limits for training custom entity recognition models

Amazon Comprehend is making it easier for customers to get started with custom entity recognition by lowering the annotation requirements for training their models. Amazon Comprehend is a natural language processing (NLP) service that provides APIs to extract key phrases, contextual entities, events, and sentiment from text. Entities refer to things in your document, such as people, places, organizations, credit card numbers, and so on. Custom entity recognition (CER) in Amazon Comprehend lets you train models on entities unique to your business in just a few simple steps. You can identify almost any kind of entity simply by providing enough details to train your model effectively.
Source: aws.amazon.com