Battle Royale: Fortnite allows players a new life

The community, including star streamer Ninja, is still upset about recent balance changes, and now Epic Games is introducing the next major change to Fortnite Battle Royale: in the future, players will be able to revive their teammates at a van – a system reminiscent of Apex Legends. (Fortnite, Epic Games)
Source: Golem

Feature Friday: A Chat With Security Experts

DockerCon brings industry leaders and experts of the container world to one event where they share their knowledge, experience and guidance. This year is no different. For the next few weeks, we’re going to highlight a few of our amazing speakers and the talks they will be leading.

In this second highlight, we have several industry experts on container and application security that we’re excited to have sharing their knowledge at DockerCon. We’re going to have sessions covering network security, a dissection of a real world Kubernetes vulnerability (and what to do about it), encrypted containers, and the new AWS Firecracker “micro-VM” for containers, just to name a few.
In case you missed it, you can also see our first speaker highlight here, featuring storage, service mesh and networking experts.
 
Zero Trust Networks Come to Docker Enterprise Kubernetes
More on their session here.
 

Spike Curtis 
Tigera Software Developer

Brent Salisbury 
Docker Technical Alliances

What is your breakout about?
Brent: Docker Enterprise with Calico for networking being used in conjunction with Istio is an exciting intersection of securing various layers of networking – all from a single policy interface.
Spike: The Docker-Calico-Istio combination gives you some amazing tools out of the box for securing your application’s network connectivity. The breakout is about showing you how to use them!
Why should people go to your session?
Spike: Networks are super important to every application, but getting the security right is intimidating, so people often leave it until it’s too late. This talk should lower that intimidation factor and give some concrete steps to take to get your network secured from the get-go.
What is your favorite DockerCon moment?
Spike: I think it was standing at the Calico booth at the very first DockerCon. The booth was in the hallway to the main auditorium, so literally everyone had to walk past, and you could just see the crowds and feel the excitement. We knew the opportunity was huge.
What are you looking forward to the most at this DockerCon?
Spike: I always really enjoy talks by Michelle Noorali, so looking forward to hearing her speak.
Brent: I am very excited about the opportunity of cross-platform capabilities, particularly the coming parity between Windows and Linux.

Crafty Requests: Deep Dive into a Kubernetes CVE
More on Ian’s session here.

Ian Coldwater
Heroku Platform Security Engineer (& Kubernetes Breaker)

What is your breakout about?
I’m going to be doing a deep dive into one of the most serious Kubernetes security vulnerabilities discovered thus far (CVE-2018-1002105), which was all over the news and affected countless clusters. I’ll be diving into how this vulnerability works, which also helps explain the inner workings of Kubernetes itself, and then I’ll talk about how to use this knowledge to secure Kubernetes and mitigate future security risks.
Why should people go to your session?
This will be a good session for people with different kinds of expertise. For someone who knows a good bit about security but maybe less about containers and Kubernetes, this session will give them a good idea of how Kubernetes works on the back end and how flaws like this can happen. For people who are more familiar with Docker and Kubernetes, this will give them a better understanding of how they can protect their clusters against this vulnerability and others like it.
I think this vulnerability is fascinating and instructive, and we can all learn something from it. Also, live exploit demos are fun.
What are you looking forward to the most at this DockerCon?
This is my first DockerCon so I’m really excited to go this year! I’m looking forward to connecting with and learning from other people who have the same interests.

Enabling High Assurance/Sensitive Container Workloads with Encrypted Images
More on Justin’s session here.

Justin Cormack
Docker Sr. Software Engineer – Security

What is your breakout about?
I am doing a talk with Brandon Lum from IBM about encrypting container images, a project we have been working on for a while now. For many use cases, keeping containers behind access control in the registry is fine, but there are other use cases where you want containers encrypted from build to when they are run. We will demo the integration into containerd, which will later make its way into Docker and Kubernetes. This is a great example of the community working together to add new features.
Also I will be involved in the open source security summit, where we have sessions on supply chain security, bug bounties in the container ecosystem and policy management.
What is your favorite DockerCon moment?
I love it when people launch their new products at DockerCon – in 2014 Google launched some little project called “Kubernetes” there…
What are you looking forward to the most at this DockerCon?
We have a big open source track again this year, which is great, there are lots of exciting community projects. Also excited for some of the announcements!

Deep Dive into Firecracker-Containerd
Learn more about Samuel’s session here.

Samuel Karp
Amazon Web Services Sr. Software Development Engineer

What is your breakout about?
I’ll be talking about how we’re integrating the Firecracker virtual machine manager (VMM), which is optimized for lightweight, container-like “micro”-VMs, with containerd to make it easier to run containers with the isolation provided by a hypervisor.
Why should people go to your session?
I’m hoping that anyone interested in hypervisor-mediated isolation and using containers will find my session interesting! This session dives deep into the architecture of the firecracker-containerd project, which aims to allow portability between standard OCI container images and the larger container ecosystem with Firecracker micro-VMs.
What are you looking forward to the most at this DockerCon?
I’m looking forward to connecting with people who have use cases for hypervisor isolation or who are interested in working with us on bringing the project along. I’m also interested in talking to anyone who uses containers on AWS about their journey and what they’d like to see from AWS in the future.

Thank you to all of our presenters, and see you at DockerCon!


For more information

Register for DockerCon 2019, April 29 – May 2 in San Francisco – Save $250 by registering before April 16!
Sign up and attend these additional events, running in conjunction with DockerCon:

Women@DockerCon Summit, Monday, April 29th
Open Source Summit, Thursday, May 2nd
Official Docker Training and Certification
Workshops

The post Feature Friday: A Chat With Security Experts appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

A quick hop across the pond: Supercharging the Dunant subsea cable with SDM technology

In 1858, Queen Victoria sent the first transatlantic telegram to U.S. President James Buchanan, sending a message in Morse code at a rate of one word per minute. In Q3 of 2020, when we turn on our private Dunant undersea cable that connects the U.S.A. and France, it will transmit 250 terabits of data per second—enough to transmit the entire digitized Library of Congress three times every second.

To achieve this record-breaking capacity, Dunant will be the first cable in the water to use space-division multiplexing (SDM) technology. SDM increases cable capacity in a cost-effective manner with additional fiber pairs (twelve, rather than the six or eight in traditional subsea cables) and power-optimized repeater designs. These advancements were created in partnership with SubCom, a global partner for undersea data transport, which will engineer, manufacture and install the Dunant system using its SDM technology and equipment.

Traditional subsea cables are powered from the shore end and rely on a dedicated set of pump lasers to amplify the optical signal for each fiber pair as data traverses the length of the cable. SDM technology now allows pump lasers and associated optical components to be shared among multiple fiber pairs, while still working within the unique power constraints of the ocean floor. In this way, the 6,400 km-long Dunant will add dedicated capacity, diversity and resilience to our global network, and will enable interconnection to other network infrastructure in the region.

First announced in 2018, the Dunant cable is named in honor of Swiss businessman and social activist Henry Dunant, the founder of the Red Cross and the first recipient of the Nobel Peace Prize. It joins the Curie cable, named for renowned scientist Marie Curie, as our second private international cable.

Demand for online content has exploded in recent years, driven by more internet users, increased engagement with rich content like video, and new demand for cloud services.
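As a quick back-of-envelope check on the figures above (the per-fiber-pair split and the implied Library of Congress size are derived here from the announcement's numbers, not stated by it):

```python
# Rough figures implied by the Dunant announcement: 250 Tb/s total capacity,
# 12 fiber pairs, and "three Library of Congress copies per second".

cable_tbps = 250   # total capacity, terabits per second
fiber_pairs = 12   # SDM design, vs. 6-8 pairs in traditional subsea cables

# Average capacity carried per fiber pair
per_pair_tbps = cable_tbps / fiber_pairs
print(f"~{per_pair_tbps:.1f} Tb/s per fiber pair")

# Implied size of the digitized Library of Congress (3 copies/second)
loc_terabits = cable_tbps / 3
loc_terabytes = loc_terabits / 8   # 8 bits per byte
print(f"~{loc_terabytes:.1f} TB implied Library of Congress size")
```

That works out to roughly 20.8 Tb/s per pair and an implied corpus of about 10.4 terabytes.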
When it comes online next year, it’s our hope that Dunant and these advances in submarine cable technology will help users access online content quickly from wherever they may be.
Source: Google Cloud Platform

Want repeatable scale? Adopt infrastructure as code on GCP

Imagine you provision a virtual machine for your dev environment, work through the kinks, and then decide you need to create one just like it for your test environment. Are you confident that you can recreate the same configuration? What about all of the tweaks you made to get things running in dev—did you track them? Can you validate that the two environments are configured identically once provisioned? What happens when you need to scale that same configuration to thousands of machines to support production?

If you answered ‘no’ to any of these questions, then you should really be thinking about infrastructure as code (IaC), which lets you make changes to your environment in a way that can be tested, automatically applied and audited, according to your change management processes. The good news is that if you run on Google Cloud Platform (GCP), it’s already tightly integrated with popular IaC tools. Better yet, adopting IaC principles sets the stage for being able to handle massive growth in demand for your applications.

Understanding infrastructure as code

At a high level, infrastructure as code is a process that allows you to treat your infrastructure provisioning and configuration in the same manner you handle application code. Your provisioning and configuration logic is stored in source control and can take advantage of continuous integration and continuous deployment (CI/CD) pipelines, so that it’s visible and discoverable across your organization.

You may be wondering whether you can use IaC processes with different kinds of infrastructure, such as:

Virtual machines (or bare metal systems) set up with a configuration automation product like Puppet, Chef, Salt or Ansible? Check!
Containers deployed in a Kubernetes cluster? Why not!
A pleasant mix of the above? Done!

What changes are the actual artifacts that you version in your source control system. Those artifacts can be YAML descriptors, Dockerfiles, shell scripts and their dependencies, among others.
It does not matter what tools you are using to provision and configure your infrastructure—the important thing is to capture the process in a way that can be repeated and automated!

By implementing an IaC process in your projects, you increase the level of control you have over the design and implementation of the infrastructure that supports your applications. This is due to continuous versioning and review of the descriptors that define your infrastructure. Want the development team to review the changes? Check! The Ops technical manager demands periodic audits? Check!

Then, by extending your CI/CD pipeline beyond your application, you can have changes applied to your test and production infrastructures within minutes of committing them to the code repository. You can get even fancier, too, by applying a test-driven development model to your infrastructure. Why not—your infrastructure deserves tests too! Open source tools like InSpec let you develop a platform-agnostic compliance test suite that checks the correctness of all the moving parts of the infrastructure.

Infrastructure as code gotchas

Be on the lookout for pitfalls, though! Just like implementing DevOps for your application stack, infrastructure as code automation requires process and governance changes. For one, system administrators who may have traditionally made configuration changes manually need to adopt a developer mindset, complete with checking their configuration changes into source control and implementing a managed test and promotion process. Otherwise, manual changes (which should not be allowed by your change management process!) made outside of the IaC pipeline will be lost in subsequent releases. Implementing IaC could also bring unnecessary overhead if the change management process you adopt is too heavy. Rule of thumb: if you feel that it’s taking too much time to apply a change, it probably is!
Finally, you may also need to train your Ops colleagues who don’t have experience with IaC tools and concepts.

How GCP simplifies IaC

GCP supports IaC processes by letting you build environments with repeatable and automated processes. These environments include not only the runtime environments, but also networking and related services, Cloud Identity and Access Management (Cloud IAM), as well as DevOps-inspired build/deploy pipelines.

Because GCP is built on open standards and open-source projects, you can re-use your existing expertise to build your next-gen infrastructure in the cloud. Tools like Deployment Manager and first-class support for Terraform help your team fully exploit all the resources that GCP has to offer. Don’t worry about starting from scratch, as we have ready-made templates that follow Google’s best practices! Read more about the available tools.

GCP’s approach to IaC doesn’t have a steep learning curve or a complex interface to master: deploy your whole environment with one command and keep it updated automatically! It also gives you the flexibility of an incremental migration approach, where you can lift-and-shift your workloads to GCP and gradually optimize them for the cloud by managing changes via IaC processes.

IaC also gives you the chance of achieving linear Ops-team growth even in the event of exponential workload growth. With IaC, it doesn’t matter whether you are managing an environment with ten containers or one million (apart from the obvious scalability issues to tackle). Want to know more? Read Chapter 18 of the Site Reliability Engineering book.

Putting IaC Best Practices to Work

In this blog post we presented a high-level description of what IaC is and why you may want to use it to manage the infrastructure supporting your GCP projects—namely, to have more control over your resources, and to be sure your infrastructure will stand up to increased demand. Click here to learn more about Infrastructure as Code on GCP.
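To make the idea concrete, here is a minimal sketch of a Deployment Manager configuration that could live in source control and be deployed repeatably. The resource name, zone, machine type and image are illustrative assumptions, not values from the post:

```yaml
# dev-vm.yaml - a hypothetical, versionable environment definition.
# Deploy with: gcloud deployment-manager deployments create dev-env --config dev-vm.yaml
resources:
- name: dev-vm                        # illustrative instance name
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
```

Because the file is declarative, recreating an identical machine for test or production is a matter of deploying the same configuration again rather than replaying manual tweaks.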
Source: Google Cloud Platform

Introducing Lustre file system Cloud Deployment Manager scripts

Data is core to high performance computing (HPC), especially for workloads such as those in life sciences, oil and gas, financial services, and media rendering. Accessing large amounts of data at extremely high speeds and low latencies is essential to HPC, but has always been a key challenge in running HPC workloads.

The HPC community has long met this need using storage technologies like the Lustre open-source parallel file system, which is commonly used in supercomputers today. The nearly unlimited scale of the cloud unlocks powerful capabilities for users, while also increasing the demand for fast parallel storage. Unfortunately, the configuration of the Lustre parallel file system is typically a technically challenging and time-consuming task, and can require an expert to implement correctly.

In order to simplify the complex process of building and configuring a Lustre cluster for our users, the engineers at Google Cloud Platform (GCP) have developed a set of scripts to easily deploy a Lustre storage cluster on Google Compute Engine using the Google Cloud Deployment Manager. The scripts are available here in the GCP GitHub repository, under the community directory. We’ve worked to make this as simple as possible, even if you don’t have a lot of Lustre experience. We’ll briefly walk you through how to use the scripts here.

1. Create a Lustre cluster

Though it’s challenging in an on-premises environment, the process to deploy a ready-to-use Lustre cluster in GCP is very simple.
First, create a project to contain the Lustre cluster, and ensure that you have GCP quota available to support your expected cluster.

Next, clone the git repository to a local device or Cloud Shell with access to gcloud and your project, and change to the lustre directory by running these commands:

Once the Lustre deployment manager scripts are downloaded, review the lustre-template.yaml, which has descriptions of each field and example valid input, as well as the description of the YAML fields in the Configuration section of README.md, to understand what each field configures. Then open the lustre.yaml file with your favorite editor (vi, nano, etc.) and edit the configuration fields to satisfy your requirements. At a minimum, ensure that the following fields are complete and valid in your environment:

cluster_name
zone
cidr
external_ips
mdt_disk_type
mdt_disk_size_gb
ost_disk_type
ost_disk_size_gb

Note: The rest of this blog post assumes you use the default values populated in the lustre.yaml file for the fields cluster_name and fs_name. If you change these values, make sure to continue your changes throughout the following instructions.

This YAML file defines the configuration for a Lustre cluster.
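As a rough illustration of the fields listed above, a filled-in lustre.yaml might look like the following sketch. All values (and the template name) are placeholders for illustration; consult lustre-template.yaml and README.md for the authoritative field descriptions and defaults:

```yaml
# lustre.yaml - hypothetical example values for the minimum required fields.
imports:
- path: lustre.jinja              # assumed template name, illustrative only
resources:
- name: lustre
  type: lustre.jinja
  properties:
    cluster_name: lustre          # default name assumed by the rest of this post
    zone: us-central1-a           # placeholder zone
    cidr: 10.20.0.0/16            # placeholder subnet range for Lustre traffic
    external_ips: true            # if false, a Cloud NAT handles internet access
    mdt_disk_type: pd-ssd         # metadata target (MDT) disk type
    mdt_disk_size_gb: 1000
    ost_disk_type: pd-standard    # object storage target (OST) disk type
    ost_disk_size_gb: 5000
```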
When the configuration is deployed, it will create a Lustre cluster with the Lustre file system ready to use, including these components:

VPC network—Network to host Lustre traffic, unless an existing VPC network such as a Shared VPC is provided.
VPC subnet—Subnet to route Lustre traffic, unless an existing VPC subnet is provided.
Cloud NAT—NAT device to route traffic to the internet if external IPs are disabled.
Firewall rules—Firewall rules will be created to allow inter-node communication and SSH into the Lustre cluster.
Lustre VMs—A set of Lustre virtual machines will be created and configured to host the various roles immediately as part of the deployment:
MDS—The Lustre metadata and management server, which serves the independent metadata and Lustre management functionality.
OSS—The object storage server, which serves the file data in a distributed manner.

2. Deploy the Lustre cluster

Once the fields are configured to match your preferences, you can deploy and configure the entire Lustre cluster with a single command:

gcloud deployment-manager deployments create lustre --config lustre.yaml

You can monitor the progress of the deployment through the command line, or in the Deployment Manager interface.

Once the deployment has completed successfully, you will see output showing that a VPC network, subnet, firewall rules, and VM instances have been created according to the configuration.

Next, SSH into the lustre-mds1 instance using either gcloud or the console SSH button. Once you log in, you may see a message indicating that installation is still in progress. If you do see this message, wait until the installation is complete. (If you do not see this message, then the installation has already completed.)
3. Log in and test Lustre

Once the installation is complete, the message of the day shown when logging into an instance in the cluster indicates that the Lustre cluster is installed, and that the Lustre file system is mounted and available. You can now mount Lustre clients that have the Lustre client software installed. For example, you can test a mount from the lustre-mds1 node to verify that the Lustre file system is online. The mount command should return quickly with no output. If you experience an issue with this step, check out the Troubleshooting section of our README.md file.

You can confirm that Lustre is mounted on your client in multiple ways. One way is to check that an entry exists in the mount command’s output, like this:

mount | grep lustre

You should see output that includes a line similar to:

10.20.0.2@tcp:/lustre on /mnt/lustre type lustre (rw)

You can also check the output of the Lustre configuration utility to ensure that the entire Lustre file system is mounted and available, using this command:

sudo lfs df

You should see output showing the Lustre metadata target(s) (MDT), the Lustre object storage target(s) (OST), the mount point, the total file system size, and the used and available storage.

Your Lustre file system is now mounted. You can test writing a file to the file system; you should see that your new file, testfile, has been created. Change the permissions for /mnt/lustre to allow non-root users to access the file system, or enable authentication in Lustre (the Lustre User/Group Upcall is disabled in these Lustre deployment manager scripts by default, which causes Lustre to fall back to OS authentication).

Exploring further with Lustre

Your Lustre cluster is now online and ready to host your scratch and HPC data to solve your hardest performance problems.
Check out the README.md for even more detail and to learn how to expand your Lustre cluster by adding new OSS nodes.

Visit the Google HPC Solutions page to read about other solutions, and try combining your Lustre cluster with some of our other solutions to begin running your HPC workloads in Google Cloud. For example, combine Lustre and Slurm on GCP to create an auto-scaling cluster with access to a powerful Lustre file system. You can also learn more about HPC in the cloud during this Next ‘19 session.

Get in touch with questions and feedback

To ask questions or post customizations to the community, use the Google Cloud Lustre Google discussion group. To request features, provide feedback, or report bugs, use this form.
Source: Google Cloud Platform