Trump Begins Tweeting From @POTUS After Obama Hands It Over

President Donald Trump didn't waste any time putting the @POTUS Twitter account into action. Trump gained access to the account after taking the oath of office Friday, and used it shortly afterward to tweet a link to the text of his inaugural address.

The @POTUS account, along with a number of other social media accounts created under the Obama administration, was peacefully transferred to the Trump administration in the first social media handover of its kind. To carry out the transition, Twitter added a "44" to the end of the Obama administration's accounts, and spawned new @POTUS, @FLOTUS, @VP, @WhiteHouse and @PressSec accounts, duplicating their followers.

Twitter says the Trump administration's accounts will retain all of the Obama administration accounts' followers. But the migration process takes some time. As of this writing, the new @POTUS account has around 7 million followers, while @POTUS44 has 14 million.

A prolific tweeter, Trump hasn't discarded his @realDonaldTrump account. He's tweeted 10 times from it as president. Meanwhile, he has posted just a single tweet from @POTUS. Prior to taking office, Trump regularly tweeted criticism of the media, political opponents and public figures who took stands against him. Whether he continues this pattern will be a main point of intrigue in the early days of an administration that promises to have plenty of them.

As for Barack Obama? He's back on Twitter using his old @BarackObama handle with its 80 million followers.

The transition of presidential Twitter accounts was not an entirely seamless one. Early in the day the background photo featured on Trump's @POTUS account was a Getty shot of Barack Obama's 2009 inauguration. It was not a vestige of the Obama-era account, and was replaced later in the day.

And, for a short time, Vice President Mike Pence's tweets from @VP were protected.

All the issues now appear to have been resolved, and the peaceful transition of social media power is complete.

Source: BuzzFeed

How we run Kubernetes in Kubernetes aka Kubeception

Editor's note: Today's post is by the team at Giant Swarm, showing how they run Kubernetes in Kubernetes.

Giant Swarm's container infrastructure started out with the goal of being an easy way for developers to deploy containerized microservices. Our first generation used fleet extensively as a base layer for our infrastructure components as well as for scheduling user containers.

In order to give our users a more powerful way to manage their containers, we introduced Kubernetes into our stack in early 2016. However, as we needed a quick way to flexibly spin up and manage different users' Kubernetes clusters resiliently, we kept the underlying fleet layer.

As we insist on running all our underlying infrastructure components in containers, fleet gave us the flexibility of using systemd unit files to define our infrastructure components declaratively. Our self-developed deployment tooling allowed us to deploy and manage the infrastructure without the need for imperative configuration management tools.

However, fleet is just a distributed init and not a complete scheduling and orchestration system. Next to a lot of work on our tooling, it required significant improvements in terms of communication between peers, its reconciliation loop, and stability that we had to work on. Also, the uptake in Kubernetes usage would ensure that issues are found and fixed faster.

As we had made good experiences with introducing Kubernetes on the user side, and with recent developments like rktnetes and stackanetes, it felt like time for us to also move our base layer to Kubernetes.

Why Kubernetes in Kubernetes

Now, you could ask, why would anyone want to run multiple Kubernetes clusters inside of a Kubernetes cluster? Are we crazy? The answer is advanced multi-tenancy use cases as well as operability and automation thereof.

Kubernetes comes with its own growing feature set for multi-tenancy use cases. However, we had the goal of offering our users a fully-managed Kubernetes without any limitations to the functionality they would get in any vanilla Kubernetes environment, including privileged access to the nodes. Further, in bigger enterprise scenarios a single Kubernetes cluster with its inbuilt isolation mechanisms is often not sufficient to satisfy compliance and security requirements. More advanced (firewalled) zoning or layered security concepts are tough to reproduce with a single installation. With namespace isolation, both privileged access and firewalled zones can hardly be implemented without sidestepping security measures.

Now you could go and set up multiple completely separate (and federated) installations of Kubernetes. However, automating the deployment and management of these clusters would need additional tooling and complex monitoring setups. Further, we wanted to be able to spin clusters up and down on demand, scale them, update them, keep track of which clusters are available, and be able to assign them to organizations and teams flexibly. In fact, this setup can be combined with a federation control plane to federate deployments to the clusters over one API endpoint.

And wouldn't it be nice to have an API and frontend for that?

Enter Giantnetes

Based on the above requirements we set out to build what we call Giantnetes – or, if you're into movies, Kubeception. At the most basic abstraction it is an outer Kubernetes cluster (the actual Giantnetes), which is used to run and manage multiple completely isolated user Kubernetes clusters.

The physical machines are bootstrapped using our CoreOS bootstrapping tool, Mayu. The Giantnetes components themselves are self-hosted, i.e. a kubelet is in charge of automatically bootstrapping the components that reside in a manifests folder. You could call this the first level of Kubeception.

Once the Giantnetes cluster is running we use it to schedule the user Kubernetes clusters as well as our tooling for managing and securing them. We chose Calico as the Giantnetes network plugin to ensure security, isolation, and the right performance for all the applications running on top of Giantnetes.

Then, to create the inner Kubernetes clusters, we initiate a few pods, which configure the network bridge, create certificates and tokens, and launch virtual machines for the future cluster. To do so, we use lightweight technologies such as KVM and qemu to provision CoreOS VMs that become the nodes of an inner Kubernetes cluster. You could call this the second level of Kubeception.

Currently this means we are starting Pods with Docker containers that in turn start VMs with KVM and qemu. However, we are looking into doing this with rkt qemu-kvm, which would result in using a rktnetes setup for our Giantnetes.

The networking solution for the inner Kubernetes clusters has two levels. It is based on a combination of flannel's server/client architecture model and Calico BGP. While a flannel client is used to create the network bridge between the VMs of each virtualized inner Kubernetes cluster, Calico is running inside the virtual machines to connect the different Kubernetes nodes and create a single network for the inner Kubernetes. By using Calico, we mimic the Giantnetes networking solution inside of each Kubernetes cluster and provide the primitives to secure and isolate workloads through the Kubernetes network policy API.

Regarding security, we aim for separating privileges as much as possible and making things auditable. Currently this means we use certificates to secure access to the clusters and encrypt communication between all the components that form a cluster (i.e. VM to VM, Kubernetes components to each other, etcd master to Calico workers, etc.). For this we create a PKI backend per cluster and then issue certificates per service in Vault on demand. Every component uses a different certificate, thus avoiding exposure of the whole cluster if any of the components or nodes gets compromised. We further rotate the certificates on a regular basis.

For ensuring access to the API and to services of each inner Kubernetes cluster from the outside, we run a multi-level HAProxy ingress controller setup in the Giantnetes that connects the Kubernetes VMs to hardware load balancers.

Looking into Giantnetes with kubectl

Let's have a look at a minimal sample deployment of Giantnetes. In such a deployment you see a user Kubernetes cluster `customera` running in VM-containers on top of Giantnetes. We currently use Jobs for the network and certificate setups. Peeking inside the user cluster, you see the DNS pods and a helloworld running. Each one of these user clusters can be scheduled and used independently. They can be spun up and down on demand.

Conclusion

To sum up, we could show how Kubernetes is able not only to self-host easily but also to flexibly schedule a multitude of inner Kubernetes clusters while ensuring higher isolation and security. A highlight in this setup is the composability and automation of the installation and the robust coordination between the Kubernetes components. This allows us to easily create, destroy, and reschedule clusters on demand without affecting users or compromising the security of the infrastructure. It further allows us to spin up clusters with varying sizes and configurations or even versions by just changing some arguments at cluster creation.

This setup is still in its early days and our roadmap is planning for improvements in many areas such as transparent upgrades, dynamic reconfiguration and scaling of clusters, performance improvements, and (even more) security. Furthermore, we are looking forward to improving our setup by making use of the ever advancing state of Kubernetes operations tooling and upcoming features, such as Init Containers, Scheduled Jobs, Pod and Node affinity and anti-affinity, etc.

Most importantly, we are working on making the inner Kubernetes clusters a third party resource that can then be managed by a custom controller. The result would be much like the Operator concept by CoreOS. And to ensure that the community at large can benefit from this project, we will be open sourcing this in the near future.

– Hector Fernandez, Software Engineer & Puja Abbassi, Developer Advocate, Giant Swarm
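The screenshots from the original post are not reproduced here. As a rough stand-in for the kubectl view described above, here is a minimal sketch using the official Kubernetes Python client; the namespace name and the assumption that a kubeconfig for the inner cluster is at hand are illustrative, not part of Giant Swarm's actual tooling.

    # Minimal sketch, not Giant Swarm's tooling: peek into an inner user cluster
    # with the official Kubernetes Python client, much like the kubectl examples
    # above. Assumes a kubeconfig for that inner cluster and that the pods of
    # interest (DNS, helloworld) live in the given namespace.
    from kubernetes import client, config


    def list_user_cluster_pods(namespace="default"):
        config.load_kube_config()  # reads ~/.kube/config by default
        core = client.CoreV1Api()
        for pod in core.list_namespaced_pod(namespace).items:
            print(pod.metadata.name, pod.status.phase)


    if __name__ == "__main__":
        list_user_cluster_pods()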
Source: kubernetes

OpenStack Developer Mailing List Digest January 14-20

SuccessBot Says

stevemar 1 : number of open keystone bugs < 100!
morgan 2 : Good policy meeting, provided history and background that cleared up a lot of confusion
Tell us yours via OpenStack IRC channels with message “#success <message>”
All

FIPS Compliance

Previous threads 3 have been discussing enabling Federal Information Processing Standards (FIPS).
Various OpenStack projects make md5 calls. Not for security purposes, just hash generation, but even that blocks enabling FIPS.
A patch has been proposed for newer versions of Python that lets callers indicate whether these calls are used for security purposes or not 4 .

Won’t land until next versions of Python, but in place for current RHEL and CentOS versions.
We will create a wrapper around md5 that checks the signature of hashlib.md5 and passes usedforsecurity=False when that parameter is available (see the sketch after this list).

Steps forward:

Create the wrapper
Replace all md5 calls in OpenStack projects with the wrapper.

Unfortunately the patch 4 has made no progress since 2013. We should get that merged first.

Even if this did land, it would be a while before it was adopted, since it would land in Python 3.7.
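A minimal sketch of what such a wrapper could look like, assuming the patched hashlib exposes a usedforsecurity parameter; the names and fallback logic here are illustrative, not the actual OpenStack implementation:

    # Illustrative sketch only, not the actual OpenStack wrapper: forward
    # usedforsecurity=False to hashlib.md5 when the patched parameter exists,
    # otherwise fall back to the plain call.
    import hashlib
    import inspect


    def _md5_supports_usedforsecurity():
        # Some interpreters raise for built-in signatures; treat that as "no".
        try:
            return "usedforsecurity" in inspect.signature(hashlib.md5).parameters
        except (TypeError, ValueError):
            return False


    def md5(data=b"", usedforsecurity=True):
        """Wrapper around hashlib.md5 that passes usedforsecurity when supported."""
        if _md5_supports_usedforsecurity():
            return hashlib.md5(data, usedforsecurity=usedforsecurity)
        return hashlib.md5(data)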

Full thread

Refreshing and Revalidating API Compatibility Guidelines

In the last TC meeting 5 , a tag was in review for supporting API compatibility 6 .
The tag evaluates projects by using the API guideline which is out of date 7.

A review has been posted to refresh these guidelines 8 .
API compatibility over time is a fundamental aspect of OpenStack interoperability. Not only do we need to get it right, we need to make sure we understand it.

Full thread

Base Services

In OpenStack, all components can assume that a number of external services will be present and available (e.g. a message queue, a database).
The Architecture working group has started this effort 9 .
This proposal 10 is a prerequisite in order for us to have more strategic discussions about adding base services.
Review the proposal and/or join the Architecture working group meeting 11
Once solidified, the Technical Committee will have a final discussion and approval.
Full thread

Improving Vendor Discoverability

In previous Technical Committee meetings, it was agreed that vendor discoverability needs to be improved.
This is done today with the OpenStack Foundation marketplace 12 .

This is powered by the community-driven project called DriverLog, which is a big JSON file 13.

Various people in the community did not know how the marketplace worked and were unhappy that the projects themselves weren't owning it.
The goal of this discussion is to have this process be more community driven than it is today.
Suggestion: split DriverLog into smaller JSON files that live inside each project for it to maintain (see the sketch after this list).

Projects will set how they validate vendors into this list.
There's a trend today toward third-party CIs being a choice of validation 14.
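A hypothetical sketch of that suggestion, assuming each project keeps a small drivers.json in its own tree and a script merges them into the single marketplace feed; the file layout and schema here are assumptions, not the actual DriverLog format:

    # Hypothetical sketch: merge per-project driver JSON files into one feed.
    # The glob pattern and the {"drivers": [...]} schema are assumptions.
    import glob
    import json


    def merge_driver_logs(pattern="*/etc/drivers.json"):
        merged = {"drivers": []}
        for path in sorted(glob.glob(pattern)):
            with open(path) as handle:
                merged["drivers"].extend(json.load(handle).get("drivers", []))
        return merged


    if __name__ == "__main__":
        print(json.dumps(merge_driver_logs(), indent=2))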

Full thread

Nominations for OpenStack PTLs Are Now Open!

Will remain open until January 29, 2017 23:45 UTC.
Candidates must submit a text file to the openstack/election repository 15

Filename convention is $cyclename/$projectname/$ircname.txt.
To be eligible, you need to have contributed an accepted patch to one of the corresponding program's projects 16 during the Newton-Ocata timeframe (April 11, 2016 00:00 UTC to January 23, 2017 23:59 UTC).

Additional information about the nomination process 17
Approved candidates will be listed 18.
Electorate should confirm their email address in Gerrit 19 in Settings -> Contact Information -> Preferred Email prior to Jan 25, 2017 23:59 UTC.
Full thread

The Process of Creating stable/ocata branches

As previously mentioned 20, it's possible for teams to set up stable branches when ready.
The release team will not be automatically setting up branches this cycle.

The release liaison within each team will need to inform the release team when ready.
The PTL or release liaison may request a new branch by submitting a patch to the openstack/releases repository specifying the tagged version to be used as the base of the branch.

Guidelines for when projects should branch:

Projects using the cycle-with-milestone release model should include the request for their stable branch along with the RC1 tag request (target week is R-3 week, so use Feb 2 as the deadline)
Library projects should be branched with, or shortly after, their final release this week (use Jan 19 as the deadline)
I will branch the requirements repository shortly after all of the cycle-with-milestone projects have branched. After the requirements repository is branched and the master requirements list is opened, projects that have not branched will be tested with requirements as the requirements master branch advances and stable/ocata stays stable. Waiting too long to create the stable/ocata branch may result in broken CI jobs in either stable/ocata or master. Don't delay any further than necessary.
Projects using the cycle-trailing release model should branch by R-0 (23 Feb). The remaining two weeks before the trailing deadline should be used for last-minute fixes, which will need to be backported into the branch to create the final release.
Other projects, including cycle-with-intermediary and independent  projects that create branches, should request their stable branch when they are ready to declare a final version and start working on Pike-related changes. This must be completed before the final release week, use 16 Feb as the deadline.

See the README.rst file in openstack/releases for more details about how to format branch specifications.
Full thread

Why Are Projects Trying To Avoid Barbican, Still?

Some projects want to implement their own secret storage to avoid Barbican or to avoid adding a dependency on it.

Some developers are doing this to make operators' lives simpler.

Barbican Positives:

Barbican has been around for a few years and deployed by several companies that have probably been audited for security purposes.
Most of the technology involved in Barbican is proven to be secure. This has been analyzed by OpenStack's own security group.
Doesn’t have a requirement on hardware TPM, so no hardware cost.
Several services provide the option of using Barbican, but not a hard requirement.

Feedback of problems with Barbican:

Relying on something that cannot be guaranteed to be present in a deployment.

The base service 9 proposal could help with this.

It is an OpenStack-specific solution. Some companies are using solutions that integrate with other things:

Keywhiz 21 to work with Kubernetes and their existing systems.

The DevStack plugin just sets up Barbican; it doesn't actually configure any existing services to use it.
No fixed key manager for testing. The Barbican team pushed back on maintaining this because it’s not secure.
API stability: version 2 -> 3 changes were made without a deprecation path or guarantees.
Tokens are open ended for users. Keystone and Barbican need to be much closer.

Castellan provides an abstraction for key management, but today it only supports Barbican.
Rackspace recently made Barbican available. Maybe it’s easier now to perform an HA deployment.
Full thread

POST /api-wg/news

New guidelines:

Accurate status code vs backwards compatibility 22
Fix no sample file in browser 23

Guidelines proposed for freeze:

Add guidelines on usage of state vs. status 24
Clarify the status values in versions 25
Add guideline for invalid query parameters 26

Under review:

Add guidelines for boolean names 27
Define pagination guidelines 28
Add API capabilities discovery guideline 29

Full thread

Release Countdown for Week R-4 Jan 23-27

Focus:

This week begins feature freeze for all milestone-based projects.
No feature patches should be landed after this point.
PTLs may grant exceptions
Soft string freeze begins.

Review teams should reject any modifications to user-facing strings.

Requirement freeze begins.

Only critical requirements and constraints changes will be allowed.

Release Tasks:

Prepare final release and branch requests for all client libraries.
Review stable branches for unreleased changes and prepare those releases.
Milestone-based projects should ensure that membership of $project-release gerrit groups is up to date with the team who will finalize the project release.

General Notes:

RC1 target week in R-3 is only one week after freeze.

Important Dates:

Ocata 3 Milestone, with Feature and Requirements Freezes: 26 Jan
Ocata RC1 target: 2 Feb
Ocata Final Release candidate deadline: 16 Feb
Ocata release schedule 30

Full thread

 
[1] – http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-01-18.log.html
[2] – http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-01-18.log.html
[3] – http://lists.openstack.org/pipermail/openstack-dev/2016-November/107035.html
[4] – http://bugs.python.org/issue9216
[5] – http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-01-17-20.00.log.html
[6] – https://review.openstack.org/#/c/418010/
[7] – http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html
[8] – https://review.openstack.org/#/c/421846/
[9] – https://review.openstack.org/421956
[10] – https://review.openstack.org/421957
[11] – http://eavesdrop.openstack.org/
[12] – https://www.openstack.org/marketplace/drivers/
[13] – http://git.openstack.org/cgit/openstack/driverlog/tree/etc/default_data.json
[14] – https://etherpad.openstack.org/p/driverlog-validation
[15] – http://governance.openstack.org/election/how-to-submit-your-candidacy
[16] – http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
[17] – https://governance.openstack.org/election/
[18] – https://governance.openstack.org/election/pike-ptl-candidates
[19] – https://review.openstack.org
[20] – http://lists.openstack.org/pipermail/openstack-dev/2016-December/108923.html
[21] – https://github.com/square/keywhiz
[22] – https://review.openstack.org/#/c/422264/
[23] – https://review.openstack.org/#/c/421084/
[24] – https://review.openstack.org/#/c/411528/
[25] – https://review.openstack.org/#/c/411849/
[26] – https://review.openstack.org/417441
[27] – https://review.openstack.org/#/c/411529/
[28] – https://review.openstack.org/#/c/390973/
[29] – https://review.openstack.org/#/c/386555/
[30] – http://releases.openstack.org/ocata/schedule.html
Source: openstack.org

OpenShift for Developers: Set Up a Full Cluster in Under 30 Minutes

After you play around with OpenShift locally, you will come to the realization that you would enjoy having a 24/7 install of OpenShift that you can publicly host your projects on. This is where a lot of developers stumble, because they aren't system administrators. For that reason, I took some time to create a video that shows how to install OpenShift Origin 1.4 from start to finish. This means that I create a bare virtual machine, install the operating system, install dependencies (like Docker), and then use Ansible to install OpenShift. After the install, I then show how to set up wildcard DNS for a public hostname. All in under 30 minutes.
Source: OpenShift

The New White House Website Says Almost Nothing About Tech

Prior to today's presidential inauguration, the official website of the president, Whitehouse.gov, was reset to reflect the new occupant of 1600 Pennsylvania Ave.: Donald Trump. And while the largely barebones new site features language related to the new administration's energy, defense, trade, and job growth policies, it features scant information related to technology, a sector that directly affects all four.

There are only two direct mentions of issues germane to the internet or technology: One, in the “Making Our Military Strong Again” subsection, is some boilerplate about the importance of cyberwarfare:

Cyberwarfare is an emerging battlefield, and we must take every measure to safeguard our national security secrets and systems. We will make it a priority to develop defensive and offensive cyber capabilities at our U.S. Cyber Command, and recruit the best and brightest Americans to serve in this crucial area.

The second comes at the end of the biography page of first lady Melania Trump, and concerns her campaign against cyberbullying:

Mrs. Trump cares deeply about issues impacting women and children, and she has focused her platform as First Lady on the problem of cyber bullying among our youth.

That cyberwarfare and cyberbullying are the only two mentions of the internet or technology on the White House page is somewhat ironic, given revelations about Russian interference in the presidential election, and the scores of pro-Trump trolls that plagued social media during the campaign.

There is no mention of automation — acknowledged by economists as a major threat to American jobs — or specific policies important to Silicon Valley, including the status of visas for high-tech workers, and a proposed one-time tax holiday on repatriation of foreign income to encourage big tech firms to bring money back into the US.

Source: BuzzFeed

CPU Management in Docker 1.13

Resource management for containers is a huge requirement for production users. Being able to run multiple containers on a single host and ensure that one container does not starve the others in terms of cpu, memory, io, or networking in an efficient way is why I like working with containers. However, cpu management for containers is still not as straightforward as I would like. There are many different options when it comes to dealing with restricting the cpu usage for a container. With things like memory, it is very easy for people to think that --memory 512m gives the container up to 512mb. With CPU, it's hard for people to understand a container's limit with the current options.
In 1.13 we added a --cpus flag, which is the best tech for limiting cpu usage of a container with a sane UX that the majority of users can understand. Let's take a look at a couple of the options in 1.12 to show why this is necessary.
There are various ways to set a cpu limit for a container. Cpu shares, cpuset, cfs quota and period are the three most common ways. We can just go ahead and say that using cpu shares is the most confusing and worst functionality out of all the options we have. The numbers don’t make sense. For example, is 5 a large number or is 512 half of my system’s resources if there is a max of 1024 shares?  Is 100 shares significant when I only have one container; however, if I add two more containers each with 100 shares, what does that mean?  We could go in depth on cpu shares but you have to remember that cpu shares are relative to everything else on the system.
Cpuset is a viable alternative but it takes much more thought and planning to use it correctly and use it in the correct circumstances. The cfs scheduler along with quota and period are some of the best options for limiting container cpu usage but they come with bad user interfaces. Specifying cpu usage in nanoseconds for a user is sometimes hard to determine when you want to do simple tasks such as limiting a container to one core.
In 1.13 though, if you want a container to be limited to one cpu then you can just add --cpus 1.0 to your Docker run/create command line. If you would like two and a half cpus as the limit of the container then just add --cpus 2.5. In Docker we are using the CFS quota and period to limit the container's cpu usage to what you want and doing the calculations for you.
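As a rough illustration of the calculation Docker does for you (assuming the default CFS period of 100000 microseconds), a --cpus value maps onto a quota/period pair like this:

    # Rough illustration only: how a --cpus value maps onto CFS quota/period.
    # Assumes Docker's default CFS period of 100000 microseconds.
    CFS_PERIOD_US = 100000


    def cpus_to_cfs(cpus):
        """Return the cpu-period/cpu-quota pair equivalent to --cpus <cpus>."""
        return {"cpu-period": CFS_PERIOD_US, "cpu-quota": int(cpus * CFS_PERIOD_US)}


    print(cpus_to_cfs(1.0))  # {'cpu-period': 100000, 'cpu-quota': 100000}
    print(cpus_to_cfs(2.5))  # {'cpu-period': 100000, 'cpu-quota': 250000}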
If you are limiting cpu usage for your containers, look into using this new flag and API to handle your needs. This flag will work on both Linux and Windows when using Docker.  
For more information on the feature you can look at the docs https://docs.docker.com/engine/admin/resource_constraints/
For more information on Docker 1.13 in general, check out these links:

Read the product documentation
Learn more about the latest Docker 1.13 release
Get started and install Docker
Attend the next Docker Online Meetup on Wed 1/25 at 10am PST


Source: https://blog.docker.com/feed/

People Are Protesting Outside Uber HQ Because Travis Kalanick Met With Trump

Travis Kalanick, Uber's chief executive


People are blocking off the entrance to Uber's headquarters in San Francisco on the morning of Donald Trump's inauguration in protest of the company's collaboration with the incoming president. Travis Kalanick, Uber's chief executive, joined Trump's roster of advisers on business and tech last month.

When Kalanick was named as one of Trump's advisers, he told BuzzFeed News in a statement that "I look forward to engaging with our incoming president and this group on issues that affect our riders, drivers and the 450+ cities where we operate."

"As a company we're committed to working with government on issues that affect riders, drivers and the cities where we operate. Just as we worked with the Obama Administration, we'll work with the Trump Administration, too," Uber said in a statement.

Other companies have faced similar protests leading up to Trump's inauguration. About 60 people protested outside Palantir Technologies, whose board member Peter Thiel is a top adviser to Trump, on Thursday to pressure the company to be more transparent about how it would use its databases to potentially help the Trump administration. Many tech companies, including Uber, have said they would not aid in building a "Muslim registry" – something Trump signaled support for during his campaign.

Uber confirmed that it told staff to work from home or other offices until the entrance to HQ is no longer blocked by protesters.

Quelle: <a href="People Are Protesting Outside Uber HQ Because Travis Kalanick Met With Trump“>BuzzFeed