Amazon Lumberyard Beta 1.11 Now Available, Adds New Animation and Visual Scripting Tools, New Cloud Gems, and More

We’re excited to announce the availability of Lumberyard Beta 1.11. With over 412 improvements, fixes, and features, this release introduces two new tools to help game teams create high-quality, complex gameplay with fewer engineering resources. Additionally, Lumberyard Beta 1.11 adds new AWS integrations so game developers can add text-to-speech and speech recognition to their game experiences. Some highlights are covered in the full announcement at the source below.
Source: aws.amazon.com

Introducing the Docker Global Professional Certification Program

Docker is excited to announce the first and only official professional certification program for the Docker Enterprise Edition (EE) platform.
The new Docker Certified Associate (DCA) certification, launching at DockerCon Europe on October 16, 2017, serves as a foundational benchmark for real-world container technology expertise with Docker Enterprise Edition. In today’s job market, container technology skills are highly sought after, and this certification sets the bar for well-qualified professionals. Professionals who earn the certification will set themselves apart as uniquely qualified to run enterprise workloads at scale with Docker Enterprise Edition, and will be able to display the certification logo on resumes and social media profiles.
The DCA is the first in a comprehensive, multi-tiered certification program, and the exam was created by top practitioners using a rigorous development process. It consists of 55 questions to be completed over 80 minutes, covering essential skills on Docker Enterprise Edition. The exam can be taken anywhere in the world at any time and is delivered using remote proctoring technology to ensure exam security while creating a simple and streamlined test-taking experience for candidates.
Be among the first to earn the DCA designation and gain recognition for your enterprise container skills.
Get Started now
 
Be Among the First to Get Certified at DockerCon Europe
Be one of the first to get your DCA on-site at DockerCon Europe. If you’ve reviewed the study guide and think you’ve got what it takes, join us in Copenhagen and take the exam. Testing is offered Tuesday through Thursday and we’ve got some special gifts to hand out to our first Docker Certified Associates.


Learn more:

Docker Professional Certification program
Docker Trainings

Source: https://blog.docker.com/feed/

Kubernetes 1.8 release integrates with containerd 1.0 Beta

Intent of containerd effort
When containerd was first developed, it had two goals: to solve the upgrade problem for running containers, and to provide a codebase where OCI runtimes, like runc, could be integrated into Docker. However, as needs changed in the container space and after speaking with various members of the community at the beginning of this year, we decided to expand the scope of containerd and make it a fully functional container daemon with storage, image distribution and runtime.
containerd fully supports the OCI Runtime and Image specifications that are part of the recently released 1.0 specifications. Additionally, it was important to build a stable runtime for users and platform builders. We wanted containerd to be fully functional, but it also needed to retain a small core codebase so that it is easy to maintain and support in the long run, with an LTS release receiving backported patches on a stable API.
To demonstrate the progress made on the project, Stephen Day presented the current status of containerd 1.0 alpha at the Moby Summit in LA two weeks ago:

Check out the getting started guide to get your feet wet with containerd if you want to integrate it into your own container-based system.

Introduction of the cri-containerd effort
Docker and Kubernetes both have similar requirements when it comes to a container runtime. They need something small, stable, and easy to maintain. They also need an API that abstracts away platform- and system-specific details so that they can build a feature set for users without being slowed down by the messy syscalls and varied driver support required to execute containers on a variety of operating systems.
In order to have Kubernetes consume containerd as its container runtime, we needed to implement the CRI interface. CRI stands for “Container Runtime Interface” and is responsible for distribution and the lifecycle of pods and containers running on a cluster.
At Docker, we have a full-time engineer working on the cri-containerd project along with the other maintainers to finish the cri-containerd integration and get Kubernetes running on containerd. Here is a presentation Lantao Liu from Google gave two weeks ago at Moby Summit LA about the status of cri-containerd:

Kubernetes CRI containerd integration by Lantao Liu (Google)
Moby Summit LA allowed the various teams from different companies involved in these projects to meet and demo the latest on containerd, cri-containerd, bucketbench, and the libnetwork CNI implementation. You can find a recap of the summit on the Moby blog, and get the latest updates from the teams at Moby Summit Copenhagen in a few weeks.

#MobySummit @APrativadi doing the first public demo of cri-containerd, @kubernetesio + @containerd + libnetwork drivers  used as CNI plugins pic.twitter.com/sMSWlS9ANM
— chanezon (@chanezon) September 14, 2017


Learn more:

Getting started with containerd
Getting started guide for CRI-containerd
Kubernetes OS images with LinuxKit

Source: https://blog.docker.com/feed/

Introducing Network Policy support for Google Container Engine, with Project Calico and Tigera

By Andy Randall, Co-founder and VP Product, Partners & Customer Success, Tigera

[Editor’s Note: Today we announced the beta of Kubernetes Network Policy in Google Container Engine, a feature we implemented in close collaboration with our partner Tigera, the company behind Project Calico. Read on for more details from Tigera co-founder and vice president of product, Andy Randall.]

When it comes to network policy, a lot has changed. Back in the day, we protected enterprise data centers with a big expensive network appliance called a firewall that allowed you to define rules about what traffic to allow in and out of the data center. In the cloud world, virtual firewalls provide similar functionality for virtual machines. For example, Google Compute Engine allows you to configure firewall rules on VPC networks.

In a containerized microservices environment such as Google Container Engine, network policy is particularly challenging. Traditional firewalls provide great perimeter security against intrusion from outside the cluster (i.e., “north-south” traffic), but aren’t designed for finer-grained “east-west” traffic within the cluster. And because Container Engine automates the creation and destruction of containers (each with its own IP address), not only do you have many more IP endpoints than you used to, but the automated create-run-destroy lifecycle of a container can result in churn up to 250x that of virtual machines.

Traditional firewall rules are no longer sufficient for containerized environments; we need a more dynamic, automated approach that is integrated with the orchestrator. (For those interested in why we can’t just continue with traditional virtual network / firewall approaches, see Christopher Liljenstolpe’s blog post, Micro-segmentation in the Cloud Native World.)

We think the Kubernetes Network Policy API and the Project Calico implementation present a solution to this challenge. Given Google’s leadership role in the community, and its commitment to running Container Engine on the latest Kubernetes release, it’s only natural that they would be the first to include this capability in their production hosted Kubernetes service, and we at Tigera are delighted to have helped support this effort.

Kubernetes Network Policy 1.0
What exactly does Kubernetes Network Policy let you do? It allows you to easily specify the connectivity allowed within your cluster, and what should be blocked. (It is a stable API as of Kubernetes v1.7.)

You can find the full API definition in the Kubernetes documentation, but the key points are as follows:

Network policies are defined by the NetworkPolicy resource type. These are applied  to the Kubernetes API server like any other resource (e.g., kubectl apply -f my-network-policy.yaml).
By default, all pods in a namespace allow unrestricted access. That is, they can accept incoming network connections from any source.
A NetworkPolicy object contains a selector expression (“podSelector”) that selects a set of pods to which the policy applies, and the rules about which incoming connections will be allowed (“ingress” rules). Ingress rules can be quite flexible, including their own namespace selector or pod selector expressions.
Policies apply to a namespace. Every pod in that namespace selected by the policy’s podSelector will have the ingress rules applied, so any connection attempts that are not explicitly allowed are rejected. Calico enforces this policy extremely efficiently using iptables rules programmed into the underlying host’s kernel. 

Here is an example NetworkPolicy resource to give you a sense of how this all fits together:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379

In this case, the policy is called “test-network-policy” and applies to the default namespace. It restricts inbound connections to every pod in the default namespace that has the label “role: db” applied. The selectors are disjunctive, i.e., either can be true. That means that connections can come from any pod in a namespace with label “project: myproject”, OR any pod in the default namespace with the label “role: frontend”. Further, these connections must be on TCP port 6379 (the standard port for Redis).
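
In practice, a policy like the one above is often paired with a namespace-wide “default deny” policy, since pods that are not selected by any NetworkPolicy continue to accept all traffic. Here is a minimal sketch of that pattern; the policy name default-deny is an arbitrary choice:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  # Hypothetical name; the policy applies to whatever namespace it is created in
  name: default-deny
spec:
  # An empty podSelector matches every pod in the namespace;
  # with no ingress rules listed, all inbound connections to those pods are rejected.
  podSelector: {}

With this in place, only traffic explicitly allowed by other policies (such as test-network-policy above) is admitted.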

As you can see, Network Policy is an intent-based policy model, i.e., you specify your desired end state and let the system ensure it happens. As pods are created and destroyed, or a label (such as “role: db”) is applied to or deleted from existing pods, you don’t need to update anything: Calico automatically takes care of things behind the scenes and ensures that every pod on every host has the right access rules applied.

As you can imagine, that’s quite a computational challenge at scale, and Calico’s policy engine contains a lot of smarts to meet Container Engine’s production performance demands. The good news is that you don’t need to worry about that. Just apply your policies and Calico takes care of the rest.

Enabling Network Policy in Container Engine

For new and existing clusters running at least Kubernetes v1.7.6, you can enable network policy on Container Engine via the UI, CLI or API. For new clusters, simply set the flag (or check the box in the UI) when creating the cluster. For existing clusters there is a two-step process:

Enable the network policy add-on on the master.
Enable network policy for the entire cluster’s node-pools.

Here’s how to do that during cluster creation:

# Create a cluster with Network Policy enabled
gcloud beta container clusters create <CLUSTER> --project=<PROJECT_ID> \
  --zone=<ZONE> --enable-network-policy --cluster-version=1.7.6

Here’s how to do it for existing clusters:

# Enable the network policy add-on on the master
gcloud beta container clusters update <CLUSTER> --project=<PROJECT_ID> \
  --zone=<ZONE> --update-addons=NetworkPolicy=ENABLED

# Enable network policy on the nodes (this re-creates the node pools)
gcloud beta container clusters update <CLUSTER> --project=<PROJECT_ID> \
  --zone=<ZONE> --enable-network-policy

Looking ahead 
Environments running Kubernetes 1.7 can use the NetworkPolicy API capabilities that we discussed above, essentially ingress rules defined by selector expressions. However, you can imagine wanting to do more, such as:

Applying egress rules (restricting which outbound connections a pod can make) 
Referring to IP addresses or ranges within rules 

The good news is that Kubernetes 1.8 includes these capabilities, and Google and Tigera are working together to make them available in Container Engine; a sketch of what such a policy could look like follows below. And, beyond that, we are working on even more advanced policy capabilities. Watch this space!
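
As a rough illustration of those Kubernetes 1.8 additions, here is a minimal sketch of a NetworkPolicy that combines an egress rule with a CIDR-based ipBlock rule. The pod labels, CIDR ranges, and port are hypothetical placeholders:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-db-egress
spec:
  # Applies to pods labeled role: db in this namespace
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    # Allow outbound connections only to this example address range,
    # excluding one subnet within it
    - ipBlock:
        cidr: 10.16.0.0/16
        except:
        - 10.16.1.0/24
    ports:
    - protocol: TCP
      port: 5432

Because the policy lists only Egress in policyTypes, ingress to the selected pods is left unchanged, while all outbound connections from them other than the ones listed are blocked.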

Attend our joint webinar! 
Want to learn more? Google Product Manager Matt DeLio will join Casey Davenport, the Kubernetes Networking SIG leader and a software engineer at Tigera, to talk about best practices and design patterns for securing your applications with network policy. Register here for the October 5th webinar.

Source: Google Cloud Platform

Google Container Engine – Kubernetes 1.8 takes advantage of the cloud built for containers

By Dan Paik, Product Manager, Container Engine

Next week, we will roll out Kubernetes 1.8 to Google Container Engine for early access customers. In addition, we are advancing significant new functionality in Google Cloud to give Container Engine customers a great experience across Kubernetes releases. As a result, Container Engine customers get new features that are only available on Google Cloud Platform, for example, highly available clusters, cluster auto-scaling and auto-repair, GPU hardware support, container-native networking, and more.

Since we founded Kubernetes back in 2014, Google Cloud has been the leading contributor to the Kubernetes open source project in every release, including 1.8. We test, stage, and roll out Kubernetes on Google Cloud, and the same team that writes it supports it, ensuring you receive the latest innovations faster without risk of compatibility breaks or support hassles.

Let’s take a look at the new Google Cloud enhancements that make Kubernetes run so well.

Speed and automation 
Earlier this week we announced that Google Compute Engine, Container Engine and many other GCP services have moved from per-minute to per-second billing. We also lowered the minimum run charge to one minute from 10 minutes, giving you even finer granularity so you only pay for what you use.

Many of you appreciate how quickly you can spin up a cluster on Container Engine. We’ve made it even faster, improving cluster startup time by 45%, so you’re up and running sooner and better able to take advantage of the one-minute minimum charge. These improvements also apply to scaling your existing node pools.

A long-standing ask has been high availability masters for Container Engine. We are pleased to announce early access support for high availability, multi-master Container Engine clusters, which increase our SLO to 99.99% uptime. You can elect to run your Kubernetes masters and nodes in up to three zones within a region for additional protection from zonal failures. Container Engine seamlessly shifts load away from failed masters and nodes when needed. Sign up here to try out high availability clusters.

In addition to speed and simplicity, Container Engine automates Kubernetes in production, giving developers choice, and giving operators peace of mind. We offer several powerful Container Engine automation features:

Node Auto-Repair is in beta and opt-in. Container Engine can automatically repair your nodes, using the Kubernetes Node Problem Detector to find common problems and proactively repair nodes and clusters. 
Node Auto-Upgrade is generally available and opt-in. Cluster upgrades are a critical Day 2 task and to give you automation with full control, we now offer Maintenance Windows (beta) to specify when you want Container Engine to auto-upgrade your masters and nodes. 
Custom metrics on the Horizontal Pod Autoscaler will soon be in beta so you can scale your pods on metrics other than CPU utilization (see the sketch after this list). 
Cluster Autoscaling is generally available, with performance improvements enabling up to 1,000 nodes (with up to 30 pods in each node), as well as letting you specify a minimum and maximum number of nodes for your cluster. This automatically grows or shrinks your cluster depending on workload demands. 
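
For a sense of what autoscaling on a custom metric might look like, here is a minimal sketch assuming the autoscaling/v2beta1 HorizontalPodAutoscaler API that ships alongside Kubernetes 1.8; the Deployment name and the requests-per-second metric are hypothetical placeholders, and the metric names actually available depend on your monitoring setup:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: frontend               # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  # Scale on a per-pod custom metric instead of CPU utilization
  - type: Pods
    pods:
      metricName: http_requests_per_second
      targetAverageValue: "100"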

Container-native networking – Container Engine exclusive! – only on GCP
Container Engine now takes better advantage of GCP’s unique, software-defined network with first-class Pod IPs and multi-cluster load balancing.

Aliased IP support is in beta. With aliased IP support, you can take advantage of several network enhancements and features, including support for connecting Container Engine clusters over a Peered VPC. Aliased IPs are available for new clusters only; support for migrating existing clusters will be added in an upcoming release. 
Multi-cluster ingress will soon be in alpha. You will be able to construct highly available, globally distributed services by easily setting up Google Cloud Load Balancing to serve your end users from the closest Container Engine cluster. To apply for access, please fill out this form. 
Shared VPC support will soon be in alpha. You will be able to create Container Engine clusters on a VPC shared by multiple projects in your cloud organization. To apply for access, please fill out this form. 

Machine learning and hardware acceleration
Machine learning, data analytics and Kubernetes work especially well together on Google Cloud. Container Engine with GPUs turbocharges compute-intensive applications like machine learning, image processing, artificial intelligence and financial modeling. This release brings you managed CUDA-as-a-Service in containers. Big data is also better on Container Engine with new features that make GCP storage accessible from Spark on Kubernetes.

NVIDIA Tesla P100 GPUs are available in alpha clusters. In addition to the NVIDIA Tesla K80, you can now create a node with up to 4 NVIDIA P100 GPUs. P100 GPUs can accelerate your workloads by up to 10x compared to the K80! If you are interested in alpha testing your CUDA models in Container Engine, please sign up for the GPU alpha. 
Cloud Storage is now accessible from Spark. Spark on Kubernetes can now communicate with Google BigQuery and Google Cloud Storage as data sources and sinks from Spark using bigdata-interop connectors. 
CronJobs are now in beta, so you can schedule jobs such as data processing pipelines to run on a given schedule in your production clusters! 
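
As a rough sketch of what that looks like, here is a minimal CronJob manifest assuming the batch/v1beta1 API in Kubernetes 1.8; the name, image, and schedule are hypothetical placeholders:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report             # hypothetical job name
spec:
  schedule: "0 2 * * *"            # standard cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: gcr.io/my-project/report-generator:1.0   # hypothetical image
          restartPolicy: OnFailure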

Extensibility 
As more enterprises use Container Engine, we are actively improving extensibility so you can match Container Engine to your environment and standards.

Ubuntu node image is now generally available. To offer more flexibility and choice, you can now select Ubuntu as your node image when creating a node. Container Optimized OS (COS) remains our default node image. 
Custom Resource Definition (CRD) is a lightweight way to create new API resource types in Kubernetes and makes it easy to interact with custom controllers via kubectl. Please note that CRD replaces the now-deprecated Third Party Resource (TPR) object. 
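
To give a sense of how lightweight this is, here is a minimal CRD sketch using the apiextensions.k8s.io/v1beta1 API; the group, kind, and names are hypothetical placeholders:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # The name must be <plural>.<group>
  name: backups.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
    shortNames:
    - bk

Once applied, kubectl get backups works just like it does for built-in resource types, and a custom controller can watch for Backup objects and act on them.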

Security and reliability 
We designed Container Engine with enterprise security and reliability in mind. This release adds several new enhancements.

Role Based Access Control (RBAC) is now generally available. This feature allows a cluster administrator to specify fine-grained policies describing which users, groups, and service accounts are allowed to perform which operations on which API resources (see the example after this list). 
Network Policy Enforcement using Calico is in beta. Starting from Kubernetes 1.7.6, you can help secure your Container Engine cluster with network policy pod-to-pod ingress rules. Kubernetes 1.8 adds additional support for CIDR-based rules, allowing you to whitelist access to resources outside of your Kubernetes cluster (e.g., VMs, hosted services, and even public services), so that you can integrate your Kubernetes application with other IT services and investments. Additionally, you can now specify pod-to-pod egress rules, providing the tighter controls needed to ensure service integrity. Learn more here. 
Node Allocatable is generally available. Container Engine includes the Kubernetes Node Allocatable feature for more accurate resource management, providing higher node stability and reliability by protecting node components from out-of-resource issues. 
Priority / Preemption is in alpha clusters. Container Engine implements Kubernetes Priority and Preemption so you can assign pods to priority levels and preempt lower-priority pods to make room for higher-priority ones when you have more workloads ready to run on the cluster than there are resources available. 
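
As an illustration of the RBAC feature mentioned above, here is a minimal sketch of a Role and RoleBinding using the rbac.authorization.k8s.io/v1 API that reached general availability with Kubernetes 1.8; the role name, namespace, and user are hypothetical placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader                 # hypothetical role name
rules:
# Allow read-only access to pods in the default namespace
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
# Hypothetical user who is granted the role
- kind: User
  name: jane@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io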

Enterprise-ready container operations – monitoring and management designed for Kubernetes 
In Kubernetes 1.7, we added view-only workload, networking, and storage views to the Container Engine user interface. In 1.8, we display even more information, enable more operational and development tasks without having to leave the UI, and improve integration with Stackdriver and Cloud Shell.

The following features are all generally available:

Easier configurations: You can now view and edit your YAML files directly in the UI. We also added easy-to-use shortcuts for the most common user actions, like rolling updates or scaling a deployment.
Node information details: Cluster view now shows details such as node health status, relevant logs, and pod information so you can easily troubleshoot your clusters and nodes. 
Stackdriver Monitoring integration: Your workload views now include charts showing CPU, memory, and disk usage. We also link to corresponding Stackdriver pages for a more in-depth look. 
Cloud Shell integration: You can now generate and execute exact kubectl commands directly in the browser. No need to manually switch context between multiple clusters and namespaces or copy and paste. Just hit enter! 
Cluster recommendations: Recommendations in cluster views suggest ways that you can improve your cluster, for example, turning on autoscaling for underutilized clusters or upgrading nodes for version alignment. 

In addition, Audit Logging is available to early access customers. This feature enables you to view your admin activity and data access as part of Cloud Audit Logging. Please complete this form to take part in the Audit Logging early access program.

Container Engine everywhere 
Container Engine customers are global. To keep up with demand, we’ve expanded our global capacity to include our latest GCP regions: Frankfurt (europe-west3), Northern Virginia (us-east4) and São Paulo (southamerica-east1). With these new regions, Container Engine is now available in a dozen locations around the world, from Oregon to Belgium to Sydney.

Customers of all sizes have been benefiting from containerizing their applications and running them on Container Engine. Here are a couple of recent examples:

Mixpanel, a popular product analytics company, processes 5 trillion data points every year. To keep performance high, Mixpanel uses Container Engine to automatically scale resources.

“All of our applications and our primary database now run on Google Container Engine. Container Engine gives us elasticity and scalable performance for our Kubernetes clusters. It’s fully supported and managed by Google, which makes it more attractive to us than elastic container services from other cloud providers,” says Arya Asemanfar, Engineering Manager at Mixpanel. 

RealMassive, a provider of real-time commercial real estate information, was able to cut its cloud hosting costs in half by moving to microservices on Container Engine.

“What it comes down to for us is speed-to-market and cost. With Google Cloud Platform, we can confidently release services multiple times a day and launch new markets in a day. We’ve also reduced our cloud hosting costs by 50% by moving to microservices on Google Container Engine,” says Jason Vertrees, CTO at RealMassive. 

Bitnami, an application package and deployment platform, shows you how to use Container Engine networking features to create a private Kubernetes cluster that enforces service privacy so that your services are available internally but not to the outside world.

Try it today! 
In a few days, all Container Engine customers will have access to Kubernetes 1.8 in alpha clusters. These new updates will help even more businesses run Kubernetes in production to get the most from their infrastructure and application delivery. If you want to be among the first to access Kubernetes 1.8 on your production clusters, please join our early access program.

You can find the complete list of new features in the Container Engine release notes. For more information, visit our website or sign up for our free trial.
Source: Google Cloud Platform