Coronavirus: Hollywood Interrupted

Everything is at a standstill: in Germany, in the world, in Hollywood. The dream factory has stopped producing dreams. First came the great postponement of upcoming theatrical releases; now all cinemas are closed anyway. How will this affect the film business? By Peter Osteried (Coronavirus, Film)
Source: Golem

First Docker GitHub Action is here!

We are happy to announce that today Docker has released its first GitHub Action! We’ve been working with GitHub, looking into how developers have been using GitHub Actions with Docker to set up their CI/CD workflows. The standard flows you’ll see if you look around are what you’d expect: building an image, tagging it, logging into Hub, and pushing the image. This is the workflow we’ve aimed to support with our Docker build-push action.

Simplify CI/CD workflows

At Docker, much of our CI/CD workflow has traditionally been handled through Jenkins, using a variety of products to set it up and maintain it. For some things this is the best solution, such as when we are testing Docker Desktop on a whole variety of different hosts and configurations. For others it’s a bit overkill. Like many, we at Docker have been looking at how we can leverage GitHub Actions to simplify our workflows, including how we use Docker itself.

GitHub Actions already leverages Docker in a lot of its workflows. Docker comes pre-installed and configured on the cloud runners, and first-class support for containerized actions lets developers easily use the same Docker workflows they use locally to configure their repo’s CI/CD. Combine that with multi-stage builds and you have a powerful environment to work with.
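To make that concrete, here is a minimal sketch of what the metadata file for a container-backed action can look like; the action name, input, and Dockerfile reference are illustrative placeholders rather than anything shipped by Docker.

# action.yml for a hypothetical container-backed action: GitHub builds
# the referenced Dockerfile and runs the resulting container on the runner.
name: 'hello-container'            # placeholder name
description: 'Minimal example of a Docker container action'
inputs:
  who-to-greet:                    # example input passed to the container
    description: 'Name to greet'
    default: 'world'
runs:
  using: 'docker'                  # marks this as a container action
  image: 'Dockerfile'              # built from the Dockerfile in the repo root
  args:
    - ${{ inputs.who-to-greet }}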

Docker actions

When we started with GitHub Actions there were no built-in actions to handle our main build, tag, and push flow, so we ended up with a YAML file full of bash commands that can’t yet be run locally. Indeed, that’s exactly what you’re given if you choose the “Docker Publish” workflow template from inside GitHub. Though it’s certainly doable, it’s not as easy to read and maintain as a script that just uses pre-built actions. This is likely why the community has already published a whole host of actions to do just that: just go to the GitHub Marketplace and search for Docker actions.
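For reference, such a hand-rolled workflow looks roughly like the sketch below; it assumes nothing beyond the docker CLI on the runner, two repository secrets named DOCKER_USERNAME and DOCKER_PASSWORD, and a placeholder image name.

# Hypothetical hand-rolled build/tag/push workflow using raw docker CLI calls.
name: manual-docker-publish
on:
  push:
    branches: [master]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build image
        run: docker build -t myorg/myrepo:${GITHUB_SHA::8} .
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
      - name: Push image
        run: docker push myorg/myrepo:${GITHUB_SHA::8}

All of the logic lives in the run lines, which is exactly the readability and reuse problem that pre-built actions address.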

Common things you’ll see beyond the standard build/tag/push are support for automatic tagging of images based on the branch you’re building from, logging in to private registries, and setting standard CLI arguments such as the Dockerfile path.

Having looked at a number of these, we decided to build our own actions based on these ideas and publish them back to the community as official, Docker-supported GitHub Actions. The first of these, docker/build-push-action, supports much of what has been described above and builds and pushes images following what we consider to be best practices, including:

- Tagging based on the git ref (branches, tags, and PRs).
- Tagging with the git SHA, to make it easy to grab the image in later stages of more complex CI/CD flows, for example where you need to run end-to-end tests in a large, self-hosted cluster.
- Labelling the image with Open Container Initiative labels using data pulled from the GitHub Actions environment.
- Support for build-time arguments and multi-stage targets.
- A push filter that lets you configure when to just build the image and when to actually push it, depending on any of the data supplied by GitHub Actions and your own scripts. See the examples for one we use ourselves, and the sketch after this list.
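A hedged sketch of how these options might be combined follows. The input names (tag_with_ref, tag_with_sha, add_git_labels, push) and the secret names are assumptions made for illustration; check the action’s README for the exact interface.

# Illustrative step combining ref/SHA tagging, OCI labels, and a push filter.
# Input names here are assumptions; consult docker/build-push-action's README.
- name: Build and maybe push
  uses: docker/build-push-action@v1
  with:
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_PASSWORD }}
    repository: myorg/myrepo
    tag_with_ref: true           # tag from the branch, tag, or PR ref
    tag_with_sha: true           # also tag with the short git SHA
    add_git_labels: true         # OCI labels from the Actions environment
    push: ${{ startsWith(github.ref, 'refs/tags/') }}   # only push for version tags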

A single action approach

But why one big action instead of many small ones? One thing that came up in our discussions with GitHub is that they envisaged users creating many small actions and chaining them together using inputs and outputs, but the reality looks to be the opposite. From what we have seen, users have been creating big actions and handling the flows internally, using inputs for configuration details.

Whilst developing our own actions we found ourselves going the same way, firstly because it’s simply easier to test that way, as there currently isn’t any way to run the workflow script locally.

Secondly, this:

- name: build
  id: build
  uses: docker/build-action@v1
  with:
    repository: myorg/myrepo
    tags: v1
- name: login
  uses: docker/login-action@v1
  with:
    registry: myregistry
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_PASSWORD }}
- name: push
  uses: docker/push-action@v1
  with:
    registry: myregistry
    tags: ${{ steps.build.outputs.tags }}

Is a bit more effort to write than:

- name: build-push
  uses: docker/build-push-action@v1
  with:
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_PASSWORD }}
    registry: myregistry
    repository: myorg/myrepo
    tags: v1

The final reason we went with the single action approach was that the logic of how the separate steps link and when they should be skipped is simple to handle in the backend based purely on a couple of inputs. Are the username and password set? Then do a login. Should we push? Then push with the tags that we built the image with. Is the registry set? Then log in to that registry, tag the images with that registry, and push to it rather than defaulting to Docker Hub.
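For example, a step targeting a private registry only needs a few extra inputs. This is a minimal sketch assuming a registry at registry.example.com and illustrative secret names, not a copy of the action’s own examples.

# Hypothetical step pushing to a private registry instead of Docker Hub.
# Because registry, username, and password are set, the action logs in to
# that registry, prefixes the tags with it, and pushes there.
- name: Build and push to a private registry
  uses: docker/build-push-action@v1
  with:
    registry: registry.example.com
    repository: myorg/myrepo
    username: ${{ secrets.REGISTRY_USERNAME }}
    password: ${{ secrets.REGISTRY_PASSWORD }}
    tags: v1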

Feedback is welcome!

All of this is handled by the image that backs the action. The backend is a simple Go program that shells out to the Docker CLI; the code can be found here and is built and pushed using the action itself. As always, feedback and contributions are welcome.

If you want to try out our Docker GitHub Action, you can find it here, or if you haven’t used GitHub Actions before, you can find GitHub’s getting-started guide here. For more news on what else to expect from Docker soon, remember to look at our public roadmap.
Source: https://blog.docker.com/feed/

AWS Security Hub adds new fields and resources to the AWS Security Finding Format

AWS Security Hub today released updates and additions to the AWS Security Finding Format (ASFF) that allow Security Hub's integrated partners to send richer, more detailed findings to Security Hub. We have added a new Severity.Label field, which replaces the Severity.Normalized field. The Severity.Label field accepts the values informational, low, medium, high, and critical, and each finding provider selects the appropriate value for the finding. If the Severity.Label field is missing from a finding, Security Hub automatically populates it from the existing Severity.Normalized field. We are also updating how the status of a finding is tracked. The existing WorkflowState field is deprecated. We have added a new Workflow object that contains information about the investigation workflow; currently it contains only the Status field, which replaces the deprecated WorkflowState field. We have also added new fields to the AwsS3Bucket resource details, as well as a new AwsS3Object resource type and an associated details object. Finally, we have added the following new resource types, which do not yet have an associated details object: AwsApiGatewayMethod, AwsApiGatewayRestApi, AwsAppStreamFleet, AwsCertificateManagerCertificate, AwsCloudFormationStack, AwsCloudWatchAlarm, AwsCodeCommitRepository, AwsCodeDeployApplication, AwsCodeDeployDeploymentGroup, AwsCodePipelinePipeline, AwsCognitoIdentityPool, AwsCognitoUserPool, AwsEcsService, AwsEcsTaskDefinition, AwsEfsFileSystem, AwsEksCluster, AwsElastiCacheCacheCluster, AwsElbLoadBalancer, AwsEmrCluster, AwsKinesisStream, and AwsLogsLogGroup.
Source: aws.amazon.com

Introducing Amazon Personalize Optimizer Using Amazon Pinpoint Events

Amazon Personalize Optimizer Using Amazon Pinpoint Events is a solution that lets customers build integrations between Amazon Personalize campaigns and Amazon Pinpoint projects. Customers can connect an Amazon Personalize campaign with an Amazon Pinpoint project directly in the Amazon Pinpoint console and then use this solution to develop and maintain an automated data pipeline between Amazon Pinpoint and Amazon Personalize. The solution automatically provisions and configures the necessary AWS services to quickly train and publish models by defining the frequency and the type of data used to retrain them. As a result, recommendations become more personalized over time.
Source: aws.amazon.com