KQueen: The Open Source Multi-cloud Kubernetes Cluster Manager

Kubernetes got a lot of traction in 2017, and I’m one of those people who believe that in two years the Kubernetes API could become the standard interface for consuming infrastructure. In other words, the same way that today we get an IP address and SSH key for our virtual machine on AWS, we might get a Kubernetes API endpoint and kubeconfig for our cluster. With the recent AWS announcement of EKS bringing this reality even closer, let me give you my perspective on k8s installation and the trends we see at Mirantis.
Existing tools
Recently I did some personal research, and I discovered the following numbers around the Kubernetes community:

~22 k8s distribution support providers
~10 k8s community deployment tools
~20 CaaS vendors

There are a lot of companies that provide Kubernetes installation and management, including Stackpoint.io, Kubermatic (Loodse), AppsCode, Giant Swarm, Huawei Container Engine, CloudStack Container Service, Eldarion Cloud (Eldarion), Google Container Engine, Hasura Platform, Hypernetes (HyperHQ), KCluster, VMware Photon Platform, OpenShift (Red Hat), Platform9 Managed Kubernetes, and so on.
All of those vendor solutions focus more or less on “their way” of k8s cluster deployment, which usually means a specific deployment procedure for defined packages, binaries, and artifacts. Moreover, while some of these installers are available as open source packages, they’re not intended to be modified, and when delivered by a vendor, there’s often no opportunity for customer-centric modification.
There are reasons, however, why this approach is not enough for the enterprise customer use case. Let me go through them.
Customization: Our real Kubernetes deployments and operations have demonstrated that we cannot just deploy a cluster on a custom OS with binaries. Enterprise customers have various security policies, configuration management databases, and specific tools, all of which are required to be installed on the OS. A very good example of this situation is a customer from the financial sector: the first time they started their golden OS image at AWS, it took 45 minutes to boot. This makes it impossible for some customers to run the native managed k8s offerings at public cloud providers.
Multi-cloud: Most existing vendors don’t solve the question of how to manage clusters in multiple regions, let alone at multiple providers. Enterprise customers want to run distributed workloads in private and public clouds. Even in the case of on-premise baremetal deployment, people don’t build a single huge cluster for the whole company. Instead, they separate resources into QA/testing/production, application-specific, or team-specific clusters, which often causes complications with existing solutions. For example, OpenShift manages a single Kubernetes cluster instance. One of our customers wound up with an official design where they planned to run 5 independent OpenShift instances without central visibility or any way to manage deployment. Another good example is CoreOS Tectonic, which provides a great UI for RBAC management and cluster workload, but has the same problem — it only manages a single cluster, and as I said, nobody stays with a single cluster.
Most existing vendors do not solve the question of how to manage clusters in multiple locations.
“My k8s cluster is better than yours” syndrome: In the OpenStack world, where we originally came from, we’re used to complexity. OpenStack was very complex, and Mirantis was very successful because we could install it the most quickly, easily, and correctly. Contrast this with the current Kubernetes world: with multiple vendors, it is very difficult to differentiate on k8s installation. My view is that k8s provisioning is a commodity with very low added value, which makes k8s installation more of a vendor checkbox feature than a decision-making point or unique capability. For the moment, though, let me borrow my favourite statement from a Kubernetes community leader: “Yes, there are lot of k8s installers, but very few deploy k8s 100% correctly.”
Moreover, all public cloud providers will eventually offer their own managed k8s offerings, which will put various k8s SaaS providers out of business. After all, there is no point paying a third-party company for managed k8s on AWS if AWS provides EKS.
K8s provisioning is a commodity, with very low added value.
Visibility & Audit: Lastly, but most importantly, deployment is just the beginning. Users need to have visibility, with information on what runs where and in what setup. It’s not just about central monitoring, logging, and alerting; it’s also about audit. Users need audit features such as “all docker images used in all k8s clusters” or “versions of all k8s binaries”. Today, if you do find such a tool, it usually has gaps at the multi-cluster level, providing information only for single clusters.
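To give a concrete idea of what such an audit means in practice, here is a minimal sketch (not part of KQueen itself) that uses the official Kubernetes Python client to collect every container image running in every cluster defined in a local kubeconfig; the context handling and output format are purely illustrative:

```python
# Minimal multi-cluster audit sketch: list every container image used in every
# cluster (kubeconfig context) available locally. Illustrative only; a central
# manager like KQueen would do this kind of audit across registered clusters.
from collections import defaultdict

from kubernetes import client, config


def images_per_cluster():
    contexts, _ = config.list_kube_config_contexts()
    report = defaultdict(set)
    for ctx in contexts:
        name = ctx["name"]
        # Build an API client bound to this specific cluster/context.
        api = client.CoreV1Api(api_client=config.new_client_from_config(context=name))
        for pod in api.list_pod_for_all_namespaces(watch=False).items:
            for container in pod.spec.containers:
                report[name].add(container.image)
    return report


if __name__ == "__main__":
    for cluster, images in images_per_cluster().items():
        print(cluster)
        for image in sorted(images):
            print(f"  {image}")
```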
To summarize, I don’t currently see any existing Kubernetes tool that provides all of those features.
KQueen as Open Cluster Manager
Based on all of these points, we at Mirantis decided to build a provisioner-agnostic Kubernetes cluster manager to deploy, manage, and operate Kubernetes clusters on various public and private cloud providers. Internally, we have called the project KQueen, and it follows several design principles:

Kubernetes as a Service environment deployment: Provide a multi-tenant self-service portal for k8s cluster provisioning.
Operations: Focus on the audit, visibility, and security of Kubernetes clusters, in addition to actual operations.
Update and Upgrade: Automate updating and upgrading of clusters through specific provisioners.
Multi-Cloud Orchestration: Support the same abstraction layer for any public, private, or bare metal provider.
Platform Agnostic Deployment (of any Kubernetes cluster): Enable provisioning of a Kubernetes cluster by various community installers/provisioners, including those with customizations, rather than a black box with a strict installation procedure.
Open, Zero Lock-in Deployment: Provide a pure-play open source solution without any closed source.
Easy integration: Provide a documented REST API for managing Kubernetes clusters and integrating this management interface into existing systems.

We have one central backend service called queen. This service listens for user requests (via the API) and can orchestrate and operate clusters.
KQueen supplies the backend API for provider-agnostic cluster management. It enables access from the UI, CLI, or API, and manages provisioning of Kubernetes clusters. It uses the following workflow (see the sketch of a client call after this list):

Trigger deployment on the provisioner, enabling KQueen to use various provisioners (AKS, GKE, Jenkins) for Kubernetes clusters. For example, you can use the Jenkins provisioner to trigger installation of Kubernetes based on a particular job.
The provisioner installs the Kubernetes cluster using the specific provider.
The provisioner returns the Kubernetes kubeconfig and API endpoint. This config is stored in the KQueen backend (etcd).
KQueen manages, operates, monitors, and audits the Kubernetes clusters. It reads all information from the API and displays it as a simple overview visualization. KQueen can also be extended by adding other audit components.
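As an illustration of the “easy integration” goal, here is a hedged sketch of what triggering a deployment through the KQueen REST API could look like from Python. The base URL, endpoint path, payload fields, and auth scheme below are assumptions made for the sake of the example; the documented KQueen API is the source of truth:

```python
# Hypothetical client sketch: trigger cluster provisioning through the KQueen
# REST API. Endpoint path, payload fields, and auth scheme are assumptions.
import requests

KQUEEN_URL = "http://kqueen.example.com:5000/api/v1"  # assumed base URL
TOKEN = "..."  # bearer token obtained from the KQueen auth endpoint


def create_cluster(name: str, provisioner_id: str) -> dict:
    response = requests.post(
        f"{KQUEEN_URL}/clusters",  # assumed endpoint
        json={"name": name, "provisioner": provisioner_id},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    # Expected to include the cluster id, its state, and a kubeconfig reference.
    return response.json()


if __name__ == "__main__":
    cluster = create_cluster("demo-cluster", provisioner_id="jenkins-provisioner")
    print(cluster)
```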

KQueen in action
The KQueen project can help define enterprise-scale Kubernetes offerings across departments and give them freedom in specific customizations. If you’d like to see it in action, you can watch a generic KQueen demo showing the architecture design and managing clusters from a single place, as well as a demo based on Azure AKS. In addition, watch this space for a tutorial on how to set up and use KQueen for yourself. We’d love your feedback!
Quelle: Mirantis

Twitter Says No, Hundreds Of Twitter Employees Are Not Reading Your DMs

On Monday, conservative activist and filmmaker James O'Keefe published undercover footage of Twitter engineers alleging the social network has hundreds of employees reading “everything you post online” — including direct messages.

But according to Twitter, these claims are factually incorrect and misleadingly portrayed by O'Keefe's media organization Project Veritas.

The undercover video shows Twitter engineer Clay Haynes telling a Project Veritas activist that hundreds of employees have seemingly unrestricted access to Twitter users' private data. “There’s teams dedicated to it.. at least, three or four hundred people,” Haynes is heard saying on camera. “They’re paid to look at d**k pics.”

Twitter disputes these claims. “We do not proactively review DMs. Period,” a company spokesperson told BuzzFeed News. “A limited number of employees have access to such information, for legitimate work purposes, and we enforce strict access protocols for those employees.”

Twitter did not answer questions about the number of employees who have such access or the specifics of precautions it takes to protect sensitive user data.

“… technically accurate to a degree, but exaggerated for effect by drunk idiots.”

A former senior Twitter employee echoed the company's comment, observing that the claims in Monday's Project Veritas video were “technically accurate to a degree, but exaggerated for effect by drunk idiots.” The former employee noted that the group of engineers able to view such sensitive data is “pretty small” and that it is only permitted access to it “in response to a report (i.e. 'so and such is harassing me via DM').”

Two former senior Twitter employees also noted that the bulk of moderation work is conducted algorithmically — in part due to the number of reports Twitter processes daily. In the Project Veritas video, a Twitter engineer seems to confirm this.

Monday's undercover video was the latest in a series of Project Veritas sting videos aimed at investigating Twitter. Last week, O'Keefe released footage of Twitter engineers alleging the company is “more than happy to help the Department of Justice in their little investigation” by handing over Trump tweets and direct messages. Twitter pushed back hard on the claims last week, noting in a statement that it “only responds to valid legal requests, and does not share any user information with law enforcement without such a request.” In the same statement, a Twitter spokesperson condemned Project Veritas' videos for tricking employees into talking under false pretenses.

“We deplore the deceptive and underhanded tactics by which this footage was obtained and selectively edited to fit a pre-determined narrative,” Twitter said last week. “Twitter is committed to enforcing our rules without bias and empowering every voice on our platform, in accordance with the Twitter Rules.”

This isn't the first time Twitter has come under scrutiny for how it manages employee access to user data. Last November, the company faced criticism after a contract employee temporarily suspended President Trump's account. The suspension prompted questions as to whether employees could access user information and take action to suspend or ban users at will. Since the November incident, Twitter has apparently taken steps to better secure employee permissions, according to an individual familiar with the decision-making process.

Quelle: BuzzFeed

[Podcast] PodCTL #21 – Effective RBAC for Kubernetes

One of the strongest signals we heard coming out of KubeCon was the breadth of “Enterprise” companies deploying Kubernetes into production. As more containerized applications are placed in secure, often regulated, environments, having proper authorization is a critical element of providing defense-in-depth. In this week’s show, we looked at the “Effective RBAC” talk from KubeCon […]
Quelle: OpenShift

Azure Analysis Services now available in East US, West US 2, and more

Since its general availability in April 2017, Azure Analysis Services has quickly become the clear choice for enterprise organizations delivering corporate business intelligence (BI) in the cloud. The success of any modern data-driven organization requires that information be available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions.

Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required, including finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics, before they can explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users, so that all they need to do is connect to the model from any BI tool and immediately explore the data to gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

We are excited to share that Azure Analysis Services is now available in four additional regions: East US, West US 2, USGov-Arizona, and USGov-Texas. This means that Azure Analysis Services is available in the following regions: Australia Southeast, Brazil South, Canada Central, East US, East US 2, Japan East, North Central US, North Europe, South Central US, Southeast Asia, UK South, West Central US, West Europe, West India, West US, and West US 2.

New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
Quelle: Azure

Core Workloads API GA

DaemonSet, Deployment, ReplicaSet, and StatefulSet are GA
Editor’s Note: We’re happy to announce that the Core Workloads API is GA in Kubernetes 1.9! This blog post from Kenneth Owens reviews how Core Workloads got to GA from its origins, reveals the changes in 1.9, and talks about what you can expect going forward.
In the Beginning…
There were Pods, tightly coupled containers that share resource requirements, networking, storage, and a lifecycle. Pods were useful, but, as it turns out, users wanted to seamlessly, reproducibly, and automatically create many identical replicas of the same Pod, so we created ReplicationController.
Replication was a step forward, but what users really needed was higher-level orchestration of their replicated Pods. They wanted rolling updates, roll backs, and roll overs. So the OpenShift team created DeploymentConfig. DeploymentConfigs were also useful, and OpenShift users were happy. In order to allow all OSS Kubernetes users to share in the elation, and to take advantage of set-based label selectors, ReplicaSet and Deployment were added to the extensions/v1beta1 group version, providing rolling updates, roll backs, and roll overs for all Kubernetes users.
That mostly solved the problem of orchestrating containerized 12-factor apps on Kubernetes, so the community turned its attention to a different problem. Replicating a Pod <n> times isn’t the right hammer for every nail in your cluster. Sometimes you need to run a Pod on every Node, or on a subset of Nodes (for example, shared sidecars like log shippers and metrics collectors, Kubernetes add-ons, and distributed file systems). The state of the art was Pods combined with NodeSelectors, or static Pods, but this is unwieldy. After having grown used to the ease of automation provided by Deployments, users demanded the same features for this category of application, so DaemonSet was added to extensions/v1beta1 as well.
For a time, users were content, until they decided that Kubernetes needed to be able to orchestrate more than just 12-factor apps and cluster infrastructure. Whether your architecture is N-tier, service oriented, or micro-service oriented, your 12-factor apps depend on stateful workloads (for example, RDBMSs, distributed key-value stores, and messaging queues) to provide services to end users and other applications. These stateful workloads can have availability and durability requirements that can only be achieved by distributed systems, and users were ready to use Kubernetes to orchestrate the entire stack.
While Deployments are great for stateless workloads, they don’t provide the right guarantees for the orchestration of distributed systems. These applications can require stable network identities; ordered, sequential deployment, updates, and deletion; and stable, durable storage. PetSet was added to the apps/v1beta1 group version to address this category of application. Unfortunately, we were less than thoughtful with its naming, and, as we always strive to be an inclusive community, we renamed the kind to StatefulSet.
Finally, we were done… or were we?
Kubernetes 1.8 and apps/v1beta2
Pod, ReplicationController, ReplicaSet, Deployment, DaemonSet, and StatefulSet came to collectively be known as the core workloads API. We could finally orchestrate all of the things, but the API surface was spread across three groups, had many inconsistencies, and left users wondering about the stability of each of the core workloads kinds.
It was time to stop adding new features and focus on consistency and stability.
Pod and ReplicationController were at GA stability, and even though you can run a workload in a Pod, it’s a nucleus primitive that belongs in core. As Deployments are the recommended way to manage your stateless apps, moving ReplicationController would serve no purpose. In Kubernetes 1.8, we moved all the other core workloads API kinds (Deployment, DaemonSet, ReplicaSet, and StatefulSet) to the apps/v1beta2 group version. This had the benefit of providing a better aggregation across the API surface and allowing us to break backward compatibility to fix inconsistencies. Our plan was to promote this new surface to GA, wholesale and as is, when we were satisfied with its completeness. The modifications in this release, which are also implemented in apps/v1, are described below.
Selector Defaulting Deprecated
In prior versions of the apps and extensions groups, label selectors of the core workloads API kinds were, when left unspecified, defaulted to a label selector generated from the kind’s template’s labels. This was completely incompatible with strategic merge patch and kubectl apply. Moreover, we’ve found that defaulting the value of a field from the value of another field of the same object is an anti-pattern in general, and particularly dangerous for the API objects used to orchestrate workloads.
Immutable Selectors
Selector mutation, while allowing for some use cases like promotable Deployment canaries, is not handled gracefully by our workload controllers, and we have always strongly cautioned users against it. To provide a consistent, usable, and stable API, selectors were made immutable for all kinds in the workloads API. We believe that there are better ways to support features like promotable canaries and orchestrated Pod relabeling, but, if restricted selector mutation is a necessary feature for our users, we can relax immutability in the future without breaking backward compatibility. The development of features like promotable canaries, orchestrated Pod relabeling, and restricted selector mutability is driven by demand signals from our users. If you are currently modifying the selectors of your core workloads API objects, please tell us about your use case via a GitHub issue, or by participating in SIG Apps.
Default Rolling Updates
Prior to apps/v1beta2, some kinds defaulted their update strategy to something other than RollingUpdate (e.g. apps/v1beta1/StatefulSet uses OnDelete by default). We wanted to be confident that RollingUpdate worked well prior to making it the default update strategy, and we couldn’t change the default behavior in released versions without breaking our promise with respect to backward compatibility. In apps/v1beta2 we enabled RollingUpdate for all core workloads kinds by default.
CreatedBy Annotation Deprecated
The “kubernetes.io/created-by” annotation was a legacy holdover from the days before garbage collection. Users should use an object’s ControllerRef from its ownerReferences to determine object ownership. We deprecated this feature in 1.8 and removed it in 1.9.
Scale Subresources
A scale subresource was added to all of the applicable kinds in apps/v1beta2 (DaemonSet scales based on its node selector).
Kubernetes 1.9 and apps/v1
In Kubernetes 1.9, as planned, we promoted the entire core workloads API surface to GA in the apps/v1 group version. We made a few more changes to make the API consistent, but apps/v1 is mostly identical to apps/v1beta2.
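To make the selector changes concrete, here is a minimal sketch using the official Kubernetes Python client: in apps/v1 the selector is required, must match the Pod template labels, and cannot be mutated afterwards. The names, namespace, and image below are placeholders.

```python
# Minimal apps/v1 Deployment sketch: selectors are no longer defaulted from the
# template labels, so spec.selector must be set explicitly and must match
# spec.template.metadata.labels. Names and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context

labels = {"app": "demo"}

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),  # required, immutable
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="demo", image="nginx:1.13")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```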
The reality is that most users have been treating the beta versions of the core workloads API as GA for some time now. Anyone who is still using ReplicationControllers and shying away from DaemonSets, Deployments, and StatefulSets due to a perceived lack of stability should plan to migrate their workloads (where applicable) to apps/v1. The minor changes that were made during promotion are described below.
Garbage Collection Defaults to Delete
Prior to apps/v1, the default garbage collection policy for Pods in a DaemonSet, Deployment, ReplicaSet, or StatefulSet was to orphan the Pods. That is, if you deleted one of these kinds, the Pods that they owned would not be deleted automatically unless cascading deletion was explicitly specified. If you use kubectl, you probably didn’t notice this, as these kinds are scaled to zero prior to deletion. In apps/v1, all core workloads API objects will now, by default, be deleted when their owner is deleted. For most users, this change is transparent.
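As a small illustration of the new default, here is a sketch with the official Kubernetes Python client showing a cascading delete of an apps/v1 Deployment and, for comparison, how to explicitly request the old orphaning behavior. The Deployment names and namespace are placeholders.

```python
# Deletion sketch for the apps/v1 garbage collection default: owned Pods are
# now removed with their owner unless orphaning is requested explicitly.
# Deployment names and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Default behavior in apps/v1: the Deployment's ReplicaSets and Pods are
# garbage collected along with it.
apps.delete_namespaced_deployment(name="demo", namespace="default")

# Pre-apps/v1 behavior, requested explicitly: keep (orphan) the dependents.
apps.delete_namespaced_deployment(
    name="demo-legacy",
    namespace="default",
    body=client.V1DeleteOptions(propagation_policy="Orphan"),
)
```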
Status Conditions
Prior to apps/v1, only Deployment and ReplicaSet had Conditions in their Status objects. For consistency’s sake, either all of the objects or none of them should have conditions. After some debate, we decided that Conditions are useful, and we added Conditions to StatefulSetStatus and DaemonSetStatus. The StatefulSet and DaemonSet controllers currently don’t populate them, but we may choose to communicate conditions to clients, via this mechanism, in the future.
Scale Subresource Migrated to autoscaling/v1
We originally added a scale subresource to the apps group. This was the wrong direction for integration with autoscaling, and, at some point, we would like to use custom metrics to autoscale StatefulSets. So the apps/v1 group version uses the autoscaling/v1 scale subresource.
Migration and Deprecation
The question most of you are probably asking now is, “What’s my migration path onto apps/v1 and how soon should I plan on migrating?” All of the group versions prior to apps/v1 are deprecated as of Kubernetes 1.9, and all new code should be developed against apps/v1, but, as discussed above, many of our users treat extensions/v1beta1 as if it were GA. We realize this, and the minimum support timelines in our deprecation policy are just that, minimums.
In future releases, before completely removing any of the group versions, we will disable them by default in the API Server. At this point, you will still be able to use the group version, but you will have to explicitly enable it. We will also provide utilities to upgrade the storage version of the API objects to apps/v1. Remember, all of the versions of the core workloads kinds are bidirectionally convertible. If you want to manually update your core workloads API objects now, you can use kubectl convert to convert manifests between group versions.
What’s Next?
The core workloads API surface is stable, but it’s still software, and software is never complete. We often add features to stable APIs to support new use cases, and we will likely do so for the core workloads API as well. GA stability means that any new features that we do add will be strictly backward compatible with the existing API surface. From this point forward, nothing we do will break our backward compatibility guarantees.
If you’re looking to participate in the evolution of this portion of the API, please feel free to get involved on GitHub or to participate in SIG Apps.
– Kenneth Owens, Software Engineer, Google
Quelle: kubernetes

Win 2: GPD unveils new Windows 10 handheld

Following the gaming handheld GPD Win, the Chinese manufacturer has unveiled its successor, the Win 2, which is set to come with a larger display, a larger battery, and a Core M processor. The device is being financed through crowdfunding; the funding goal was reached within just a few hours. (Games, USB 3.0)
Quelle: Golem