14 Executives Who Have Left Uber This Year

The world’s most highly valued private company has had a challenging 2017.


Uber, the ride-hailing giant with a valuation of $70 billion, has been rocked by an unrelenting series of scandals and staff departures since the beginning of the year.

It started on January 19, when the Federal Trade Commission hit Uber with a $20 million fine for misleading drivers about pay. Only a week later, a #DeleteUber campaign went viral after Uber turned off surge pricing at New York's JFK airport during a taxi workers' strike against President Trump's travel ban. People were also angered that CEO Travis Kalanick had joined one of Trump's advisory boards; within a few days, he quit. But by early February, the New York Times reported that nearly 200,000 people had removed Uber's app from their smartphones.

Then, on February 19, former Uber engineer Susan Fowler published an explosive blog post in which she alleged a systemic culture of sexual harassment and gender bias at the company. In response to Fowler's post, Uber launched a harassment and discrimination investigation on February 20, led by former attorney general Eric Holder and Uber board member Arianna Huffington. The investigation has already led to 20 firings.

Amidst all this, Uber has been facing a contentious lawsuit from its self-driving car rival. On February 23, Alphabet's self-driving car company Waymo sued Uber for theft of trade secrets and patent infringement. At the center of the suit is Anthony Levandowski, an engineer who worked for almost a decade on Alphabet's self-driving car efforts until he left to launch a self-driving truck startup called Otto. Not long after, Uber acquired Otto, and Levandowski became the head of its self-driving car program. In its suit, Alphabet alleges that Otto itself was a ruse designed to steal its self-driving car technology.

Throughout Uber's series of public crises, critics have questioned whether Kalanick is fit to continue leading the company. And on June 11, sources told BuzzFeed News that Kalanick is considering taking a leave of absence from Uber.

Holder's report on the investigation into the claims of sexual harassment at the company is expected to be released on June 13.



Quelle: BuzzFeed

Writing VNFs for OPNFV

The post Writing VNFs for OPNFV appeared first on Mirantis | Pure Play Open Cloud.

[NOTE:  This article is an excerpt from Understanding OPNFV, by Amar Kapadia. You can also download an ebook version of the book here.]
The entire OPNFV stack, ultimately, serves one purpose: to run virtual network functions (VNFs) that in turn constitute network services. We will look at two major considerations: how to write VNFs, and how to onboard them. We’ll conclude by analyzing how a vIMS VNF, Clearwater, has been onboarded by OPNFV.
Writing VNFs
We looked at three types of VNF architectures in Chapter 2: cloud hosted, cloud optimized, and cloud native. As a VNF creator or a buyer, your first consideration is to pick the architecture.
Physical network functions that are simply converted into a VNF without any optimizations are likely to be cloud hosted. Cloud hosted applications are monolithic and generally stateful. These VNFs require a large team that may or may not be using an agile development methodology. These applications are also dependent on the underlying infrastructure to provide high availability, and typically cannot be scaled out or in. In some cases, these VNFs may also need manual configuration.
Some developers refactor cloud hosted VNFs to make them more cloud friendly, or “cloud optimized”. A non-disruptive way to approach this effort is to take easily separable aspects of the monolithic application and convert them into services accessible via REST APIs. The VNF state may then be moved to a dedicated service, so the rest of the app becomes stateless. Making these changes allows for greater velocity in software development and the ability to perform cloud-centric operations such as scale-out, scale-in and self-healing.
While converting an existing VNF to be fully cloud native may be overly burdensome, all new VNFs should be written exclusively as cloud native if possible. (We have already covered cloud native application patterns in Chapter 2.) By using a cloud native architecture, developers and users can get much higher velocity in innovation and a high degree of flexibility in VNF orchestration and lifecycle management. In an enterprise end-user study conducted by Mirantis and Intel, the move to cloud native programming showed an average increase of iterations/year from 6 to 24 (4x increase) and a typical increase in the number of user stories/iteration of 20-60%. Enterprise cloud native apps are not the same as cloud native VNFs, but these benefits should generally apply to NFV as well.
Ultimately, there is no right or wrong architecture choice for existing VNFs (new VNFs should be designed as cloud native). The chart below shows VNF app architecture trade-offs.
Trade-offs between VNF Architectures

VNF Onboarding
The next major topic to consider when integrating VNFs into OPNFV scenarios is VNF onboarding. A VNF by itself is not very useful; the MANO layer needs associated metadata and descriptors to manage these VNFs. The VNF Package, which includes the VNF Descriptor (VNFD), describes what the VNF requires, how to configure the VNF, and how to manage its lifecycle. Along with this information, the VNF onboarding process may be viewed in four steps.
VNF Onboarding Steps

A detailed discussion of these steps is out of scope for this book; instead, we will focus on the VNF package.
For successful VNF onboarding, the following types of attributes need to be specified in the VNF package. This list is by no means comprehensive; it is meant to be a sample. The package may include:

Basic information such as:

Pricing
SLA
Licensing model
Provider
Version

VNF packaging format (tar, CSAR, etc.)
VNF configuration
NFVI requirements such as:

vCPU
Memory
Storage
Data plane acceleration
CPU architecture
Affinity/anti-affinity

VNF lifecycle management:

Start/stop
Scaling
Healing
Update
Upgrade
Termination
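
To make the sample above concrete, a fragment of a purely hypothetical VNF package descriptor covering a few of these attributes might look like the following. None of these field names are standardized; as noted below, each MANO and NFV vendor currently uses its own format:

```yaml
# Hypothetical VNF package fragment; field names are illustrative only,
# since no standard packaging format exists yet.
vnf_package:
  provider: ExampleVendor        # assumed vendor name
  version: 1.0.0
  licensing_model: per-instance
  nfvi_requirements:
    vcpu: 4
    memory_gb: 8
    storage_gb: 40
    data_plane_acceleration: dpdk
    anti_affinity_group: example-group
  lifecycle:                     # hypothetical management hooks
    start: scripts/start.sh
    stop: scripts/stop.sh
    scale: scripts/scale.sh
```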

Currently, the industry lacks standards in the areas of VNF packaging and descriptors. Each MANO vendor or MANO project and each NFV vendor has its own format. By the time you add VIM-specific considerations, you get an unmanageably large development and interop matrix. It could easily take months of manual work for a user to onboard VNFs to their specific MANO+VIM choice because the formats have to be adapted and then tested. Both users and VNF providers find this process less than ideal, and both sides are left wondering which models to support and which components to proactively test against.
The VNF manager (VNFM) might further complicate the situation. For simple VNFs, a generic VNFM might be adequate. For more complex VNFs such as VoLTE (voice over LTE), a custom (read: proprietary) VNFM might be needed, and would be provided by the VNF vendor. Needless to say, the already complex interop matrix becomes even more complex in this case.
In addition to manual work and wasted time, there are other issues exposed by the lack of standards. For example, there is no way for a VNF to be sure it will be provided resources that match its requirements. There may also be gaps in security, isolation, scaling, self-healing and other lifecycle management phases.
OPNFV recognizes the importance of standardizing the VNF onboarding process. The MANO working group, along with the Models project (see Chapter 5), is working on standardizing VNF onboarding for OPNFV. These projects address multiple issues, including VNF package development, VNF package import, VNF validation/testing (basic and in-service), VNF import into a catalog, service blueprint creation, and VNFD models. The three main modeling languages being considered are UML, the TOSCA-NFV simple profile, and YANG:

UML: The Unified Modeling Language (UML) is standardized by the Object Management Group (OMG) and can be used for a variety of use cases. ETSI is using UML for standardizing their VNFD specification. At a high level, UML could be considered an application-centric language.
TOSCA-NFV simple profile: TOSCA is a cloud-centric modeling language. A TOSCA blueprint describes a graph of node templates, along with their connectivity. Next, workflows specify how a series of actions occur, which can get complex when considering various dependencies. Finally, TOSCA also allows for policies that trigger workflows based on events. The TOSCA-NFV simple profile specification covers an NFV-specific data model using the TOSCA language.
YANG: YANG is a modeling language standardized by IETF. Unlike TOSCA or UML, YANG is a network-centric modeling language. YANG models the state data and configurations of network elements. YANG describes a tree of nodes and relationships between them.
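
As a flavor of the second option, a minimal TOSCA-NFV simple profile node template might look like the sketch below. The VDU node type comes from the specification, while the template name and property values are placeholders:

```yaml
# Sketch of a TOSCA-NFV simple profile VDU node template; values are placeholders
topology_template:
  node_templates:
    example_vdu:
      type: tosca.nodes.nfv.VDU
      capabilities:
        virtual_compute:
          properties:
            virtual_cpu:
              num_virtual_cpu: 2
            virtual_memory:
              virtual_mem_size: 4096 MB
```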

OPNFV is considering all three approaches, and in some cases hybrid approaches with multiple modeling languages, to solve the VNF onboarding problem. Given the importance of this issue, there is also considerable collaboration with outside organizations and projects such as ETSI, TMForum, OASIS, ON.Lab, and so on.
Clearwater vIMS on OPNFV
Clearwater is a virtual IP Multimedia Subsystem (vIMS) VNF, open-sourced by Metaswitch. It is a complex cloud native application with a number of interconnected virtual instances.
Clearwater vIMS

For OPNFV testing, TOSCA is used as the VNFD modeling language. The TOSCA blueprint first describes each of the nodes and their connectivity. A snippet of this code is shown below:
VNF Descriptor for Homestead HSS Mirror
homestead_host:
    type: clearwater.nodes.MonitoredServer
    capabilities:
      scalable:
        properties:
          min_instances: 1
    relationships:
      - target: base_security_group
        type: cloudify.openstack.server_connected_to_security_group
      - target: homestead_security_group
        type: cloudify.openstack.server_connected_to_security_group

homestead:
    type: clearwater.nodes.homestead
    properties:
      private_domain: clearwater.local
      release: { get_input: release }
    relationships:
      - type: cloudify.relationships.contained_in
        target: homestead_host
      - type: app_connected_to_bind
        target: bind
Next, the TOSCA blueprint describes a number of workflows. The workflows cover full lifecycle management of Clearwater. Finally, the blueprint describes policies that trigger workflows based on events.
Clearwater TOSCA Workflows

The TOSCA code fragment below shows a scale-up policy that, when a CPU threshold is crossed, triggers a workflow to scale the Sprout SIP router from the initial 1 instance up to a maximum of 5.
TOSCA Scaleup Policy and Workflow for Sprout SIP
  policies:
    up_scale_policy:
      type: cloudify.policies.types.threshold
      properties:
        service: cpu.total.user
        threshold: 25
        stability_time: 60
      triggers:
        scale_trigger:
          type: cloudify.policies.triggers.execute_workflow
          parameters:
            workflow: scale
            workflow_parameters:
              scalable_entity_name: sprout
              delta: 1
              scale_compute: true
              max_instances: 5

Initial Deployment of the Clearwater VNF

Once the blueprint is complete, an orchestrator needs to interpret and act upon the TOSCA blueprint. For purposes of testing Clearwater, OPNFV uses Cloudify, a MANO product from GigaSpaces available in both commercial and open source flavors. Cloudify orchestrates each of the workflows described in the above blueprint. Specifically, the workflow to deploy the VNF looks like this:
Running this entire series of steps in an automated fashion in Functest requires the following:
Step 1: Deploy VIM, SDN controller, NFVI
Step 2: Deploy the MANO software (could be Heat, Open-O or Cloudify, which is the current choice). For testing purposes, it is possible to use the full MANO stack (NFVO + VNFM) or just the VNFM.
Step 3: Test the VNF. For project Clearwater, Functest runs more than 100 default signaling tests covering most vIMS test cases (calls, registration, redirection, busy, and so on).
We have talked about a specific VNF, but this approach is general enough to be applied to other VNFs, open source or proprietary. Using OPNFV as a standard way to onboard VNFs brings great value to the industry because of the complexity of the VNF onboarding landscape. No one vendor or user has the resources or time to perform testing against a full interop matrix, but as a community, this is eminently possible.
At this point, it is worth taking a bit of a detour to illustrate the power of open source. The initial project Clearwater testing work was done by an intern at Orange. The work became quite popular, and has been adopted by numerous vendors, influenced the OPNFV MANO working group, and even convinced some operators to use OPNFV as a VNF onboarding vehicle.
In summary, we saw how VNFs can target different application architectures, what is involved in onboarding VNFs, and a concrete example of how the Clearwater vIMS VNF has been onboarded by OPNFV for testing purposes. In the next chapter, we will discuss how you can benefit from and get involved with the OPNFV project.
Quelle: Mirantis

The Case For Interviewing Alex Jones

Last Sunday night, as the second episode of her new NBC news magazine show was wrapping up, Megyn Kelly ran a short teaser for next week’s highly anticipated interview: a sit-down with America's best-known conspiracy theorist, Alex Jones.

The backlash began almost immediately. On Twitter, a #ShameOnNBC hashtag circulated widely within the first few hours. Parents of children who were lost in the Sandy Hook massacre — which Jones has argued repeatedly is a hoax — took to Twitter alongside advocacy groups to condemn not just Jones, but Kelly and NBC as well. The liberal media watchdog site Media Matters suggested that “Megyn Kelly turned to Alex Jones because her struggling show needs a viral moment.” Monday morning, Chelsea Clinton took to Twitter to voice her displeasure with her former employer, NBC. “I hope no parent, no person watches this,” Clinton tweeted. Later in the afternoon, the Wall Street Journal reported that JP Morgan was dropping its local and digital ads around Kelly’s program and all NBC content until after the interview airs.

The argument behind the outrage suggests that featuring Jones on a primetime network television interview show is an irresponsible use of a powerful news platform. To sit Jones across from one of America's most recognizable (and highest paid) news personalities is to legitimize a man with fringe views that many find abhorrent. Furthermore, they note, such exposure could theoretically extend Jones’ reach; what if malleable minds see something they like in Jones' interview and become fans or regular viewers?

It’s a valid argument, but one that misunderstands the media’s role in the Trump era — not to mention Jones’ role inside the pro-Trump media ecosystem. Like it or not, Alex Jones is an architect of our current political moment, and as such, the mainstream media shouldn’t try to shield its audience from him or pretend he doesn’t exist — it should interrogate him.

Jones is a far-fringe personality, and a wildly popular one. While his more outlandish views suggest a man embraced only by the tinfoil hat community — he’s alleged that 9/11 was likely an inside job and that bombs engineered by the government to control the population have turned our frogs gay — Jones’ influence is real and widely felt. If you attended any Trump rally in the lead-up to the 2016 election you likely saw his ubiquitous navy “Hillary For Prison” t-shirts, which Jones hawked through his Infowars store (until they sold out, that is). At the Republican National Convention in Cleveland last summer, Jones was greeted like royalty.

Since Jones backed the Trump campaign in 2015, his influence has grown significantly, especially among young males. “So many people watch him now, he’s almost the mainstream,” one of the broadcaster’s young supporters told the New Republic last summer. That piece, which interviewed a number of newly minted Jones fans, described a similar pattern of conversion: young men intrigued by a viral Infowars video and subsequently won over by Jones’ charisma and message.

Audience measurement outfit Quantcast reports that Infowars.com pulled in 476 million views during 2016; Alexa suggests that Infowars.com currently receives 340,625 daily unique visitors. And that doesn’t begin to account for the scores of listeners Jones brings in over terrestrial radio or the millions of video views amassed on YouTube.

All 25 former and current Jones associates I spoke with this spring for a profile of him independently suggested that Jones’ influence on the outcome of the election was profound. “Alex doesn't have listeners — he has followers,” one said. “That rural vote for Trump nobody saw coming? It wasn’t only Alex, but I think you can ascribe a significant portion — many first-time voters — of those votes to him.” Another longtime associate of Jones went further. “He's like the Goebbels of 2016. He really won the election for Trump.”

Influence is difficult to quantify, but the money generated by Jones’ media empire provides a helpful gauge. According to one former employee, Jones bragged that the Infowars store grossed $18 million between 2012 and 2013 — though another source puts that number closer to $10 million. Another former employee claimed that Jones’ “moneybombs” — telethon-style fundraisers held to raise money for the ‘information war’ against the mainstream media — can easily pull down $100,000 in a day. Recent court filings show that Jones is paying $516,000 a year in alimony, which suggests an annual income well into the millions.

But more important is Jones’ perceptible impact on our modern political culture. Jones is, in many ways, the grandfather of the pro-Trump media, which operates as a mirror image of its mainstream counterpart, with its own audience and interpretation of truth. And it's no coincidence that conspiracy culture — be it Seth Rich, Pizzagate, or even elements of Trump/Russia — is ascendant across both fringe and mainstream media at the same time that Jones has become more famous than at any other point in his 20-plus year career.

It’s precisely this influence that makes Jones worthy of interrogation on a national news platform. To suggest otherwise is to fall back on an old, outdated idea of the mainstream media as gatekeepers. The media’s job now is not simply uncovering and sharing news, it's helping its audiences navigate the often treacherous sea of information and “alternative facts.” Jones, the pro-Trump media, and the #MAGAsphere are loud, influential voices with huge, active communities and ties to the White House. It is unwise and increasingly difficult to ignore their very real threat, both to their individual targets and to the mainstream media as a whole.

So an in-depth interview with someone like Jones in front of a big primetime audience is an opportunity, albeit a perilous one. Jones rarely gives sit-down interviews. The opportunity to force him to answer for his most abhorrent views on subjects like Sandy Hook is potentially valuable. At one moment in the teaser, Kelly cuts him off during an answer about Sandy Hook. “That’s a dodge,” she says. “That doesn’t excuse what you did and said about Newtown. You know it.”

Jones is a savvy media manipulator, but also volatile and prone to becoming flustered. Wouldn’t those who find him monstrous welcome the opportunity to see him tripped up, thrown off his game, or unable to respond to a pointed question? Or the opportunity to have Jones’ rhetoric picked apart, undercut, and forever memorialized on video? And what about the opportunity to expose Jones to a new audience who could very well unite around their shared hatred of Jones and Infowars?

To put an interview subject like Jones on his back foot is a tall order for any interviewer. And there’s precedent to suggest that individuals in Kelly’s position — traditional media figures, less in tune with the quirks and pitfalls of the pro-Trump media — might not be equipped to deal with a troll like Jones. CBS’ Scott Pelley, for example, was tripped up when interviewing New Right blogger and pro-Trump media personality Mike Cernovich. And there are hints that Sunday’s interview could veer into similar territory. Kelly’s description of Jones in a tweet previewing the interview as a “conservative” talk show host suggested that Kelly might have overlooked the political nuances of Jones, his show, and his followers. Similarly, the network’s decision to wait so long between the interview and its air date illustrates a misunderstanding of Jones’ ability to manipulate a news cycle in bad faith. (After the interview, Jones took to the air to demean Kelly, calling her “Not feminine — cold, robotic, dead,” noting, “I felt zero attraction to Megyn Kelly.”)

One thing is certain: Kelly’s handling of the Jones interview and Jones himself will spark outrage regardless of how the interview comes out. And her polarizing reputation — built on a long career at Fox News covering sometimes fraught subjects — will further infuse it with controversy. As will, unfortunately, her gender: Kelly has been the target of vitriolic, misogynist criticism from all sides over the years.

Even now, six days ahead of its Father's Day air date, the interview seems to be hurtling towards catastrophe. On Monday afternoon, Jones called upon NBC to kill the interview, alleging it's been unfairly edited. In doing so, he's commandeered a narrative that shouldn't have been his to control and put NBC in a no-win situation: pull the interview and cave to Jones; air it and invite an Infowars-driven barrage of “fake news” insults.

It’s the kind of devious manipulation that’s made Jones — and the pro-Trump media ecosystem he helped create — into an efficient and effective machine, capable of constructing compelling, spurious narratives. To bring a national spotlight on Jones and Infowars is to acknowledge the seriousness of the far-right’s information war. To ignore it, in the hope that it goes away, isn’t just naive, it’s dangerous.

Quelle: BuzzFeed

Using Ansible Validations With Red Hat OpenStack Platform – Part 2

In Part 1 we demonstrated how to set up a Red Hat OpenStack Ansible environment by creating a dynamic Ansible inventory file (check it out if you’ve not read it yet!).
Next, in Part 2 we demonstrate how to use that dynamic inventory with included, pre-written Ansible validation playbooks from the command line.

Time to Validate!
The openstack-tripleo-validations RPM provides all the validations. You can find them in /usr/share/openstack-tripleo-validations/validations/ on the director host. Here’s a quick look, but check them out on your deployment as well.

With Red Hat OpenStack Platform we ship over 20 playbooks to try out, and there are many more upstream. Check the community often, as the list of validations is always changing. Unsupported validations can be downloaded and included in the validations directory as required.
A good first validation to try is the ceilometerdb-size validation. This playbook ensures that the ceilometer configuration on the Undercloud doesn’t allow data to be retained indefinitely. It checks the metering_time_to_live and event_time_to_live parameters in /etc/ceilometer/ceilometer.conf to see if they are either unset or set to a negative value (representing infinite retention). Ceilometer data retention can lead to decreased performance on the director node and degraded abilities for third party tools which rely on this data.
Now, let’s run this validation using the command line in an environment where we have one of the values it checks set correctly and the other incorrectly. For example:
[stack@undercloud ansible]$ sudo awk '/^metering_time_to_live|^event_time_to_live/' /etc/ceilometer/ceilometer.conf

metering_time_to_live = -1

event_time_to_live=259200
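For reference, a configuration that would pass both checks caps each value; for example, retaining three days (259200 seconds) of data:

```ini
# /etc/ceilometer/ceilometer.conf: both TTLs capped at three days
[database]
metering_time_to_live = 259200
event_time_to_live = 259200
```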
Method 1: ansible-playbook
The easiest way is to run the validation using the standard ansible-playbook command:
$ ansible-playbook /usr/share/openstack-tripleo-validations/validations/ceilometerdb-size.yaml

So, what happened?
Ansible output is colored to help read it more easily. The green “OK” lines for the “setup” and “Get TTL setting values from ceilometer.conf” tasks represent Ansible successfully finding the metering and event values, as per this task:
 - name: Get TTL setting values from ceilometer.conf
   become: true
   ini: path=/etc/ceilometer/ceilometer.conf section=database key={{ item }} ignore_missing_file=True
   register: config_result
   with_items:
     - "{{ metering_ttl_check }}"
     - "{{ event_ttl_check }}"
And the red and blue outputs come from this task:
 - name: Check values
   fail: msg="Value of {{ item.item }} is set to {{ item.value or '-1' }}."
   when: item.value|int < 0 or item.value == None
   with_items: "{{ config_result.results }}"
Here, Ansible will issue a failed result (the red) if the “Check Values” task meets the conditional test (less than 0 or non-existent). So, in our case, since metering_time_to_live was set to -1 it met the condition and the task was run, resulting in the only possible outcome: failed.
With the blue output, Ansible is telling us it skipped the task. In this case this represents a good result. Consider that the event_time_to_live value is set to 259200. This value does not match the conditional in the task (item.value|int < 0 or item.value  == None). And since the task only runs when the conditional is met, and the task’s only output is to produce a failed result, it skips the task. So, a skip means we have passed for this value.
For even more detail you can run ansible-playbook in a verbose mode, by adding -vvv to the command:
$ ansible-playbook -vvv /usr/share/openstack-tripleo-validations/validations/ceilometerdb-size.yaml
You’ll find an excellent and interesting amount of information is returned and worth the time to review. Give it a try on your own environment. You may also want to learn more about Ansible playbooks by reviewing the full documentation.
Now that you’ve seen your first validation you can see how powerful they are. But the CLI is not the only way to run the validations.
Ready to go deeper with Ansible? Check out the latest collection of Ansible eBooks, including free samples from every title!
In the final part of the series we introduce validations with both the OpenStack workflow service, Mistral, and the director web UI. Check back soon!
The “Operationalizing OpenStack” series features real-world tips, advice and experiences from experts running and deploying OpenStack.
Quelle: RedHat Stack