Video streaming: Netflix with ads doesn't work on many devices
Anyone who wants to use ad-supported Netflix is out of luck when certain hardware or software comes into play. (Netflix, online advertising)
Source: Golem
Cannondale has unveiled an e-bike that is intended for shorter rides in the city and cross-country and weighs around 18 kg. (E-bike, electromobility)
Source: Golem
The mirrorless Fujifilm X-T5 camera has gained resolution over its predecessor, both for stills and for video. (Digital camera, OLED)
Source: Golem

Maybe You Should Just Give Elon His …
Source: BuzzFeed
People often use Cloud Build and Artifact Registry in tandem to build and store software artifacts – container images, to be sure, but also OS packages and language-specific packages. Many of these same users treat a Google Cloud project as a shared, multi-tenant environment. Because a project is a logical encapsulation for services like Cloud Build and Artifact Registry, administrators of these services usually want to apply the principle of least privilege. Of its many benefits, reducing the blast radius of misconfigurations or malicious users is perhaps the most important: users and teams should be able to use Cloud Build and Artifact Registry safely, without the ability to disrupt or damage one another. With per-trigger service accounts in Cloud Build and per-repository permissions in Artifact Registry, let's walk through how we can make this possible.

The before times

Let's consider the default scenario, before we apply the principle of least privilege. In this scenario, we have a Cloud Build trigger connected to a repository. When an event happens in your source code repository (like merging changes into the main branch), this trigger is, well, triggered, and it kicks off a build in Cloud Build that builds an artifact and subsequently pushes it to Artifact Registry.

Fig. 1 – A common workflow involving Cloud Build and Artifact Registry

But what are the implications for permissions in this workflow? Left unspecified, a trigger will execute a build with the Cloud Build default service account. Among the several permissions granted to this service account by default are Artifact Registry permissions at the project level.

Fig. 2 – The permissions scheme of the workflow in Fig. 1

Builds, unless specified otherwise, will run using this service account as their identity.
This means those builds can interact with any artifact repository in Artifact Registry within that Google Cloud project. So let's see how we can scope that down.

Putting it into practice

In this scenario, we're going to walk through how you might set up the workflow below, in which we have a Cloud Build trigger connected to a GitHub repository. To follow along, you'll need to have a repository set up and connected to Cloud Build – instructions can be found here – and you'll need to replace variable names with your own values. This trigger will kick off a build in response to any changes to the main branch in that repository. The build itself will build a container image and push it to Artifact Registry. The key implementation detail is that every build from this trigger will use a bespoke service account that only has permissions on a specific repository in Artifact Registry.

Fig. 3 – The permissions scheme of the workflow with the principle of least privilege

Let's start by creating an Artifact Registry repository for container images for a fictional team, Team A:

```shell
gcloud artifacts repositories create ${TEAM_A_REPOSITORY} \
  --repository-format=docker \
  --location=${REGION}
```

Then we'll create a service account for Team A:

```shell
gcloud iam service-accounts create ${TEAM_A_SA} \
  --display-name=${TEAM_A_SA_NAME}
```

And now the fun part.
We can create an IAM role binding between this service account and the aforementioned Artifact Registry repository; below is an example of how you would do this with gcloud:

```shell
gcloud artifacts repositories add-iam-policy-binding ${TEAM_A_REPOSITORY} \
  --location=${REGION} \
  --member="serviceAccount:${TEAM_A_SA}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role=roles/artifactregistry.writer
```

This grants the service account the permissions that come with the artifactregistry.writer role, but only on that specific Artifact Registry repository.

Now, for many moons, Cloud Build has allowed users to provide a specific service account in their build specification – for manually executed builds. You can see an example of this in the following build spec:

```yaml
steps:
- name: 'bash'
  args: ['echo', 'Hello world!']
logsBucket: 'LOGS_BUCKET_LOCATION'
# provide your specific service account below
serviceAccount: 'projects/PROJECT_ID/serviceAccounts/${TEAM_A_SA}'
options:
  logging: GCS_ONLY
```

But for many teams, automating the execution of builds and integrating it with how code and configuration flow through their teams and systems is a must. Triggers in Cloud Build are how folks achieve this! When creating a trigger in Cloud Build, you can either connect it to a source code repository or set up your own webhook. Whatever the source may be, triggers depend on systems beyond the reach of the permissions we control in our Google Cloud project with Identity and Access Management.
Let's now consider what could happen when we do not apply the principle of least privilege when using build triggers with a Git repository.

What risk are we trying to mitigate?

The Supply-chain Levels for Software Artifacts (SLSA) security framework details potential threats in the software supply chain – essentially the process of how your code is written, tested, built, deployed, and run.

Fig. 4 – Threats in the software supply chain identified in the SLSA framework

With a trigger starting a build based on a compromised source repo, as seen in threat B, the effect can compound downstream: if builds run based on actions in a compromised repo, multiple follow-on threats come into play. By minimizing the permissions these builds have, we reduce the scope of impact a compromised source repo can have. This walkthrough specifically looks at minimizing the effects of a compromised package repo, threat G. In the example we are building out, if the source repo is compromised, only packages in the specific Artifact Registry repository we created will be affected, because the service account associated with the trigger only has permissions on that one repository.

Creating a trigger that runs builds with a bespoke service account requires only one additional parameter; when using gcloud, for example, you would specify the --service-account parameter as follows:

```shell
gcloud beta builds triggers create github \
  --name=team-a-build \
  --region=${REGION} \
  --repo-name=${TEAM_A_REPO} \
  --repo-owner=${TEAM_A_REPO_OWNER} \
  --pull-request-pattern=main \
  --build-config=cloudbuild.yaml \
  --service-account=projects/${PROJECT_ID}/serviceAccounts/${TEAM_A_SA}@${PROJECT_ID}.iam.gserviceaccount.com
```

TEAM_A_REPO is the GitHub repository you created and connected to Cloud Build earlier, TEAM_A_REPO_OWNER is the GitHub username of the repository owner, and TEAM_A_SA is the service account we created earlier. Aside from that, all you'll need is a cloudbuild.yaml manifest in that repository, and your trigger will be set! With this trigger in place, you can test the scope of permissions that builds run from it have, verifying that they only have permission to work with TEAM_A_REPOSITORY in Artifact Registry.

In conclusion

Configuring minimal permissions for build triggers is only one part of the bigger picture, but it's a great step to take no matter where you are in your journey of securing your software supply chain. To learn more, we recommend taking a deeper dive into the SLSA security framework and Software Delivery Shield, Google Cloud's fully managed, end-to-end solution that enhances software supply chain security across the entire software development life cycle, from development, supply, and CI/CD to runtimes. Or, if you're just getting started, check out this tutorial on Cloud Build and this tutorial on Artifact Registry!

Related Article: Introducing Cloud Build private pools: Secure CI/CD for private networks – With new private pools, you can use Google Cloud's hosted Cloud Build CI/CD service on resources in your private network or in other clouds.
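As a follow-up to the walkthrough above, one way to verify the scoped permissions from the command line is to inspect the repository's IAM policy. This is a standard gcloud command, using the same placeholder variables as earlier:

```shell
# List the IAM bindings on Team A's repository. The output should show the
# bespoke service account with roles/artifactregistry.writer and nothing broader.
gcloud artifacts repositories get-iam-policy ${TEAM_A_REPOSITORY} \
  --location=${REGION}
```

If any project-level principals appear here that you don't expect, that's a sign the default service account still has broader access than intended.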
Source: Google Cloud Platform
There’s no doubt that WebAssembly (AKA Wasm) is having a moment on the development stage. And while it may seem like a flash in the pan to some, we believe Wasm has a key role in continued containerized development. Docker and Wasm can be complementary technologies.
In the past, we’ve explored how Docker could successfully run Wasm modules alongside Linux or Windows containers. Nearly five months later, we’ve taken another big step forward with the Docker+Wasm Technical Preview. Developers need exceptional performance, portability, and runtime isolation more than ever before.
Chris Crone, a Director of Engineering at Docker, and Michael Yuan, founder and CEO of Second State, addressed these sticking points at the CNCF's Wasm Day 2022. Here's their full session, but feel free to stick around for our condensed breakdown:
You don’t need to learn new processes to develop successfully with Docker and Wasm. Popular Docker CLI commands can tackle this for you. Docker can even manage the WebAssembly runtime thanks to our collaboration with WasmEdge. We’ll dive into why we’re handling this new project and the technical mechanisms that make it possible.
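As a sketch of what that looks like in practice: in the Docker+Wasm Technical Preview, a Wasm workload is started with an otherwise ordinary docker run, plus a runtime and a platform flag. The image name below is illustrative, and the exact flag values reflect the preview and may change:

```shell
# Start a Wasm module via the WasmEdge-backed containerd shim.
# --runtime selects the Wasm shim; --platform marks the image as wasi/wasm32
# rather than a Linux container image.
docker run -dp 8080:8080 \
  --name=wasm-demo \
  --runtime=io.containerd.wasm.v1 \
  --platform=wasi/wasm32 \
  example/wasm-demo-app   # illustrative image name
```

Everything else about the command (ports, names, detach mode) behaves exactly as it does for containers.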
Why WebAssembly and Docker?
How workloads and code are isolated has a major impact on how quickly we can deliver software to users. Chris highlights this by explaining how developers value:
- Easy reuse of components and defined interfaces across projects that help build value quicker
- Maximization of shared compute resources while maintaining safe, sturdy boundaries between workloads, lowering the cost of application delivery
- Seamless application delivery to users, in seconds, through convenient packaging mechanisms like container images, so users see value quicker
We know that workload isolation plays a role in these things, yet there are numerous ways to achieve it — like air gapping, hardware virtualization, stack virtualization (Wasm or JVM), containerization, and so on. Since each has unique advantages and disadvantages, choosing the best solution can be tricky.
Finding the right tools can also be enormously difficult. The CNCF tooling landscape alone is saturated, and while we’re thankful these tools exist, the variety is overwhelming for many developers.
Chris believes that specialized tooling can conquer the task at hand. It’s also our responsibility at Docker to guide these tooling decisions. This builds upon our continued mission to help developers build, share, and run their applications as quickly as possible.
That’s where WasmEdge — and Michael Yuan — come in.
Exciting opportunities with Docker and WasmEdge
Michael showed there’s some overlap between container and WebAssembly use cases. For example, developers from both camps might want to ship microservice applications. Wasm can enable quicker startup times and code-level security, which are beneficial in many cases.
However, WebAssembly doesn't fit every use case due to threading, garbage collection, and binary packaging limitations. Running applications with Wasm also currently requires extra tooling.
WasmEdge in action: TensorFlow interface
Michael then kicked off a TensorFlow ML application demo to show what WasmEdge can do. This application wouldn’t work with other WASI-compatible runtimes:
A few things made this demo possible:
- Rust: a safe and fast programming language with first-class support for the Wasm compile target.
- Tokio: a popular asynchronous runtime that can handle multiple, parallel HTTP requests without multithreading.
- WasmEdge's TensorFlow plug-in: compatible with the WASI-NN spec. Besides TensorFlow, PyTorch and OpenVINO are also supported in WasmEdge.
Note: Tokio and TensorFlow support are WasmEdge features that aren’t available on other WASI-compliant runtimes.
With Rust’s cargo build command, we can compile the program into a Wasm module using the wasm32-wasi target platform. The WasmEdge runtime can execute the resulting .wasm file. Once the application is running, we can perform HTTP queries to run some pretty cool image recognition tasks.
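The build-and-run flow just described can be sketched in a few commands; the crate name demo-app is hypothetical, and this assumes the Rust toolchain and the WasmEdge runtime are already installed:

```shell
# Add the WASI target once, compile the crate to a Wasm module,
# then execute the resulting .wasm file directly with WasmEdge.
rustup target add wasm32-wasi
cargo build --release --target wasm32-wasi
wasmedge target/wasm32-wasi/release/demo-app.wasm
```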
This example exemplifies the draw of WasmEdge as a WASI-compatible runtime. According to its maintainers, “WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime for cloud native, edge, and decentralized applications. It powers serverless apps, embedded functions, microservices, smart contracts, and IoT devices.”
Making Wasm accessible with Docker
Docker has two magical features. First, Docker and containers work on any machine and anywhere in production. Second, Docker makes it easy to build, share, and reuse components from any project. Container images and other OCI artifacts are easy to consume (and share). Isolation is baked in. Millions of developers are also very familiar with many Docker workflows like docker compose up.
Chris described how standardization and open ecosystems make Docker and container tooling available everywhere. The OCI specifications are crucial here and let us form new solutions that’ll work for nearly anyone and any supported technology (like Wasm).
On the other hand, setting up cross-platform Wasm developer environments is tricky. You also have to learn new tools and workflows — hampering productivity while creating frustration. We believe we can help developers overcome these challenges, and we’re excited to leverage our own platform to make Wasm more accessible.
Demoing Docker+WasmEdge
How does Wasm support look in practice? Chris fired up a demo using a preview of Docker Desktop, complete with WASI support. He created a Docker Compose file with three services:
- A frontend static JavaScript client using the NGINX Docker Official Image
- A Rust server compiled to wasi/wasm32
- A MariaDB database
That Rust server runs as a Wasm Module, while the NGINX and MariaDB servers run in Linux containers. Chris built this Rust server using a Dockerfile that compiled from his local platform to a wasm32-wasi target. He also ran WasmEdge’s proprietary AOT compiler to optimize the built Wasm module. However, this step is optional and optimized modules require the WasmEdge runtime.
We’ll leave the nitty gritty to Chris (see 19:43 for the demo) for now. However, know that you can run a Compose build and come away with a wasi/wasm32 platform image. Running docker compose up launches your application which you can then interact with through your Web browser. This is one way to seamlessly run containers and Wasm side by side.
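A minimal Compose file for a stack like this might look as follows. Service names and image tags are illustrative, and the runtime and platform keys on the Wasm service are assumptions based on the technical preview, so they may change:

```yaml
services:
  frontend:
    image: nginx:alpine               # static JavaScript client served by NGINX
    ports:
      - "8080:80"
  api:
    image: example/rust-wasm-server   # illustrative wasi/wasm32 image
    platform: wasi/wasm32
    runtime: io.containerd.wasm.v1    # WasmEdge-backed shim (preview)
  db:
    image: mariadb:10
    environment:
      MARIADB_ROOT_PASSWORD: example
```

The point of the sketch is that the Wasm service sits alongside ordinary Linux containers in the same file and is managed with the same docker compose commands.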
From the Docker CLI, you’ll see the Wasm microservice is less than 2MB. It contains a high-performance HTTP server and a MySQL database client. The NGINX and MariaDB servers are 10MB and 120MB, respectively. Alternatively, your Rust microservice would be tens of megabytes after building it into a Linux binary and running it in a Linux container. This underscores how lightweight Wasm images are.
Since the output is an OCI image, you can store or share it using an OCI-compliant registry like Docker Hub. You don’t have to learn complex new workflows. And while Chris and Michael centered on WasmEdge, Docker should support any WASI runtime.
The approach is interoperable with containers and has early support within Docker Desktop. Although Wasm might initially seem unfamiliar, integration with the Docker ecosystem immediately levels that learning curve.
The future of Docker and Wasm
As Chris mentioned, we’re invested in making Docker and Wasm work well together. Our recent Docker+Wasm Technical Preview is a major step towards boosting interoperability. However, we’re also thrilled to explore how Docker tooling can improve the lives of Wasm-hungry developers — no matter their goals.
Docker wants to get involved with the Wasm community to better understand how developers like you are building your WebAssembly applications. Your use cases and obstacles matter. By sharing our experiences with the container ecosystem with the community, we hope to accelerate Wasm’s growth and help you tackle that next big project.
Get started and learn more
Want to test run Docker and Wasm? Check out Chris’ GitHub page for links to special Wasm-compatible Docker Desktop builds, demo repos, and more. We’d also love to hear your feedback as we continue bolstering Docker+Wasm support!
Finally, don’t miss the chance to learn more about WebAssembly and microservices — alongside experts and fellow developers — at an upcoming meetup.
Source: https://blog.docker.com/feed/
Aurora MySQL 2.11, compatible with MySQL 5.7, is now generally available. Aurora MySQL 2.11 includes security updates and also supports R6i instances powered by third-generation Xeon Scalable processors.
Source: aws.amazon.com
Starting today, Amazon EC2 high-memory instances with 24 TB of memory (u-24tb1.112xlarge) are available in the Asia Pacific (Seoul) Region, and high-memory instances with 18 TB of memory (u-18tb1.112xlarge) are available in the US East (N. Virginia) and US West (Oregon) Regions. These instances give customers more flexibility in how they use and procure capacity: they can now be purchased with the On-Demand, Reserved Instance, and Savings Plan options. With u-24tb1 and u-18tb1, customers can choose between 24 TB and 18 TB of memory respectively; both offer 448 vCPUs, 100 Gbps of network bandwidth, and 38 Gbps of EBS bandwidth.
Source: aws.amazon.com
Amazon SageMaker is expanding access to eight new Graviton2- and Graviton3-based instance families for machine learning (ML), giving customers more options to optimize cost and performance when deploying their ML models on SageMaker. Customers can now use ml.c7g, ml.m6g, ml.m6gd, ml.c6g, ml.c6gd, ml.c6gn, ml.r6g, and ml.r6gd for deploying real-time and asynchronous inference models.
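As a sketch of what adopting one of these families looks like, the instance type is simply selected in the endpoint configuration. The names below (demo-model, demo-graviton-config) are illustrative, and the model must already exist and be packaged for Arm64:

```shell
# Create an endpoint configuration that places the model on a Graviton3-based
# ml.c7g instance, then stand up an endpoint from that configuration.
aws sagemaker create-endpoint-config \
  --endpoint-config-name demo-graviton-config \
  --production-variants '[{"VariantName":"primary","ModelName":"demo-model","InstanceType":"ml.c7g.xlarge","InitialInstanceCount":1}]'
aws sagemaker create-endpoint \
  --endpoint-name demo-graviton-endpoint \
  --endpoint-config-name demo-graviton-config
```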
Source: aws.amazon.com
CDK For Kubernetes Plus (CDK8s+) is a multi-language class library for defining Kubernetes applications using common, intent-based constructs. Customers who define Kubernetes applications report that the maintainability of Kubernetes manifests is a challenge. CDK8s+ aims to lower the barrier to entry and improve the maintainability of Kubernetes manifests by offering a dedicated construct for each core Kubernetes object, exposing a richer API with less complexity. With this launch, CDK8s+ is now generally available and stable to use. This means the API remains unchanged and fully supported (no breaking changes), at least until the next major version. CDK8s+ is offered as a separate library for each Kubernetes spec version. All libraries are now generally available and stable.
Source: aws.amazon.com