Is This An Ad? The Treasury Secretary's Wife's Instagram

This is Louise Linton, actress, lawyer, and the wife of U.S. treasury secretary Steven Mnuchin.

Linton is a 36-year-old from Scotland who has appeared in stuff like CSI: NY and the Cabin Fever reboot. She married Mnuchin, 54, this summer.

And on Monday, she posted this photo on Instagram, complete with brand #tags that are typical of #spon.

[Screenshot of the since-deleted Instagram post]

And THEN she GOT INTO IT with a commenter named Jenni who called her out for (apparently) using taxpayer money for her #daytrip.

[Screenshot of the since-deleted Instagram comment exchange]

Sure, you could use this for a deep discussion of class war, social media, and the Trump administration.

But what I really want to know is: Was this an ad?

I mean, who tags brands in a post that's not #spon, right? Although it's not typical for the family members of high-level government officials to do fashion ads, Linton isn't a typical politician's spouse – for example, she recently acted in a movie starring Charlie Sheen.

Plus, she's no stranger to fashion advertising. According to her personal website, she's the “inaugural brand ambassador” for a line of handbags called the “Linton Collection” from a Scottish brand called Dunmore. (At publishing time, Dunmore had not replied to a request from BuzzFeed News to clarify whether she's still a brand ambassador, and whether promoting these other brands on her Instagram would be a conflict.)

So was this an #ad #spon #partner?

According to the New York Times, an administration spokesperson said that she was not compensated by the brands she tagged. But if there's one thing I know about the Instagram #spon game, it's that people and brands sometimes have different definitions of what “compensated” means, especially when brands engage in the common practice of gifting thousands of dollars' worth of merchandise or travel to a celebrity in the hopes that they'll post about it.

So BuzzFeed News reached out to the luxury brands to ask if there was any “gifted” merchandise or compensation. Both Tom Ford and Valentino confirmed that there was nothing of the sort — no loaned items, freebies, or anything. We will update as soon as we hear from Hermès and Roland Mouret, but I think it's safe to call it at this point: it's not an ad.

Louise Linton, you beautiful sassy creature, keep living your wild life and continue to pay for all your own stuff.

Quelle: BuzzFeed

Prelude To Operational Simplicity – A Two Act Play

MCP heralded the coming of continuous innovation for cloud infrastructure by including DriveTrain, a lifecycle management system capable of consuming incremental technology. Instead of shipping large integrated releases after each OpenStack Foundation release, as before, with MCP Mirantis embarked upon continuous delivery on the order of every few weeks. DriveTrain could then methodically consume some or all of the latest innovation, bring it through a CI/CD pipeline, validate it in a staging environment, and promote it into production. Thus, the difficulties historically associated with the lifecycle management of OpenStack have been replaced by an automated, repeatable process that can be run with little or no downtime.
Even better, this innovation is not limited to OpenStack, but applies to all the open cloud components of MCP, including containers with Kubernetes, SDN services with Mirantis OpenContrail, and more.
Act 1 – OpenStack Upgrades Made “Doubly” Simple
Initially, because no major new releases of OpenStack or other components were yet available to upgrade to, the innovation DriveTrain consumed consisted of new MCP features, updates, and fixes. Now, with support for the latest OpenStack release, Ocata, that has changed.
A detailed description and demo of DriveTrain completing an OpenStack upgrade from MCP's initial supported release, Mitaka, to Ocata (skipping Newton with a “double upgrade”) can be seen here. As you can see from the overview of this process by one of our best and brightest engineering directors, Jakub Pavlik, what used to take days of careful planning followed by days of downtime for an OpenStack upgrade and validation can now be automated by DriveTrain within a few hours, with zero workload downtime!
Here is a summary of the highlights (an illustrative command-level sketch follows the list):

Preparation and testing of one Ocata VM takes about 40 minutes, during which the production Mitaka cloud stays live.
After validation, the upgrade to an Ocata-based highly available production control plane takes about 42 minutes with zero downtime for running workloads while the Mitaka control plane is offline.
Next, the Mitaka compute nodes can connect to the Ocata control plane and cascade through upgrades to Ocata at a later time.  
Jakub also demonstrated a rollback to Mitaka, which requires about 9 minutes based on restoring the original Mitaka database from the first step in the process.  
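
To make those steps concrete, here is a rough command-level sketch of the same sequence. This is illustrative only: it uses stock MySQL and OpenStack client commands with assumed names, and it is not DriveTrain's actual pipeline, which automates all of these stages end to end.

    # 1. Create the rollback point: back up the Mitaka control-plane database.
    #    Restoring this backup is what makes the ~9-minute rollback possible.
    mysqldump --all-databases --single-transaction > mitaka-controlplane.sql

    # 2. Prepare and test one Ocata VM alongside the live Mitaka cloud
    #    (~40 minutes). Validation might sanity-check services, e.g.:
    openstack --os-cloud ocata-staging endpoint list
    openstack --os-cloud ocata-staging compute service list

    # 3. After validation, promote the highly available Ocata control plane
    #    (~42 minutes; workloads keep running while Mitaka goes offline).

    # 4. Later, reconnect the Mitaka compute nodes to the Ocata control
    #    plane and cascade through their upgrades.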

Amazing.  Simply amazing.
Act 2 – Now With OpenContrail Too
But it’s not just OpenStack that’s benefiting from DriveTrain, and not just in the lab. It’s also great to see Mirantis customers taking advantage of continuous delivery live in their Managed Open Clouds. 
Let's take another example, this time with Mirantis OpenContrail at one of our customers. This is no demo; we are talking about the SDN of a major OpenStack cloud running in production and serving thousands of users. Again, in the past this would have taken days of planning and many days, if not weeks, of downtime.
But not anymore. Overall, this upgrade required about 5 hours, including cascading through all the compute nodes. Highlights on this one include (a Salt-level sketch follows the list):

A major upgrade of Mirantis OpenContrail from 3.0.2 to 3.1.1
Over 1000 code changes executed by DriveTrain with zero issues
The pull from GitHub took about 10 minutes; installing the latest OpenContrail Salt formula required about 2 minutes; the DB backup took about 10 minutes; and the upgrade of the Mirantis OpenContrail control plane took 1 hour
80 compute nodes were upgraded in a cascading fashion at ~20/hour (or ~4 hours in total)
Zero downtime.
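
As a rough illustration of those stages, a Salt-managed environment might step through something like the following. The formula repository, pillar targets, and batch size are assumptions for the sketch; the post does not publish DriveTrain's actual commands.

    # 1. Pull the latest OpenContrail Salt formula (~10 minutes in the
    #    upgrade described above) and install it (~2 minutes). Repo assumed.
    git clone https://github.com/salt-formulas/salt-formula-opencontrail.git

    # 2. Back up the OpenContrail database (~10 minutes) before touching
    #    anything, so a rollback point exists.

    # 3. Upgrade the control plane by applying the formula to the control
    #    nodes (assumed pillar-based targeting; ~1 hour).
    salt -C 'I@opencontrail:control' state.sls opencontrail

    # 4. Cascade through the 80 compute nodes in small batches (~20/hour).
    salt -C 'I@opencontrail:compute' --batch-size 5 state.sls opencontrail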

Now that is continuous innovation and operational simplicity!
Curtain Drops
If you’d like to see a live demo of DriveTrain in action, join us on Thursday, September 14, for Get Control of Your Cloud Infrastructure, Upgrades and LCM with Mirantis DriveTrain.
The post Prelude To Operational Simplicity – A Two Act Play appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis

Video interviews at the Denver PTG (Sign up now!)

Earlier this year, at the PTG in Atlanta, I did video interviews with some of the Red Hat engineers who were there.

You can see these videos on the RDO YouTube channel.

Or you can see the teaser video here.

This year, I’ll be expanding that to everyone – not just Red Hat – to emphasize the awesome cooperation and collaboration that happens across projects, and across companies.

If you'll be at the PTG, please consider signing up to talk to me about your project. I'll be conducting interviews starting on Tuesday morning, and you can sign up here.

Please see the “planning for your interview” tab of that spreadsheet for the answers to all of your questions about the interviews. Or contact me directly at rbowen AT red hat DOT com if you have more questions.
Quelle: RDO

Azure Network Watcher introduces Connectivity Check (Preview)

Diagnosing network connectivity and performance issues in the cloud can be a challenge as your network grows in complexity. We are pleased to announce the preview of a new feature to check network connectivity in a variety of scenarios involving VMs.

The Azure Network Watcher Connectivity Check feature helps drastically reduce the time needed to detect and diagnose connectivity issues in your infrastructure. The results returned provide valuable insight into whether a connectivity issue is due to the platform or to user configuration. Network Watcher Connectivity Check can be used from the Azure portal, PowerShell, the Azure CLI, and the REST API.

Connectivity Check supports a variety of scenarios: VM to VM, VM to an external endpoint, and VM to an on-premises endpoint. Using a common network topology, the example below illustrates how Connectivity Check can help resolve network reachability issues from the Azure portal. A VNet hosts a multi-tier web application across four subnets, among them an application subnet and a database subnet.

Figure 1 – Multi-tier web application

In the Azure portal, navigate to Network Watcher and, under Network Diagnostic Tools, click Connectivity Check. From there, specify the source and destination VMs and click the “Check” button to begin the connectivity check.

A status of reachable or unreachable is returned once the connectivity check completes. The number of hops and the minimum, average, and maximum latency to the destination are also returned.

Figure 2 – Connectivity Check – access from portal

In this example, a connectivity check was run from the VM hosting the application tier to the VM hosting the database tier. The status is returned as unreachable, and notably, one of the hops shows a red status. Clicking on that hop reveals an NSG rule that is blocking all traffic, and with it end-to-end connectivity.

Figure 3 – Unreachable status

After the NSG rule misconfiguration was corrected, the connectivity check was repeated, as illustrated below; the results now indicate end-to-end connectivity. The network latency between source and destination, along with per-hop information, is also provided.

Figure 4 – Reachable status

The destination for Connectivity Check can be an IP address, an FQDN, or an ARM URI.
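
The same check can also be scripted. As a minimal sketch using the Azure CLI (the resource group and VM names below are hypothetical placeholders), the equivalent of the portal check above looks something like this:

    # Check connectivity from the app-tier VM to the database-tier VM.
    az network watcher test-connectivity \
      --resource-group MyResourceGroup \
      --source-resource AppTierVM \
      --dest-resource DbTierVM

    # The destination can also be an external endpoint by address and port:
    az network watcher test-connectivity \
      --resource-group MyResourceGroup \
      --source-resource AppTierVM \
      --dest-address www.bing.com \
      --dest-port 443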
 
We believe the Connectivity Check feature will give you deeper insight into network performance in Azure. We welcome you to reach out, as your feedback on Network Watcher is crucial to help steer product development and ecosystem growth.
Quelle: Azure

Test Drive Docker Enterprise Edition at VMworld 2017

Docker will be at VMworld 2017 next week (August 27-31) in Las Vegas to highlight new developments with Docker Enterprise Edition (EE), the only Container as a Service (CaaS) platform for managing and securing Windows, Linux and mainframe applications across any infrastructure, both on premises and in the cloud.
Stop by Booth #1206 to learn more about:

How VMs and containers work together for improved application lifecycle management
How containers and Docker EE can help IT with day-to-day maintenance and operations tasks
How IT can lead modernization efforts with Docker EE and become drivers of innovation in their organizations

Just as VMware vSphere simplified the management of VMs and made virtualization the de facto standard inside the data center, Docker is driving containerization of your entire application portfolio with Docker EE and helping organizations like yours to achieve their cloud and app modernization goals without requiring you to change how you operate.
Test Drive Docker EE in the Booth
Don't miss the chance to get hands-on experience with Docker in our in-booth labs. Led by Docker experts, these labs let you see for yourself how Docker brings all applications—traditional and cloud-native, Windows and Linux, on-prem and in the cloud—into a single experience for IT. Learn how standard IT tasks like patching and rolling updates are 10x easier with Docker EE, and see how you can centralize security and access control through Docker EE's management interface.
Pre-register here to sign up for one of the lab sessions and get a free Docker t-shirt when you show up!

Monday, August 28th @ 1:15pm
Tuesday, August 29th @ 1:15pm
Wednesday, August 30th @ 1:15pm

Congrats to 2017 vExperts!
We didn't forget you! Come by our booth to collect special vExpert swag or, better yet, sign up for our vExpert Challenge! Special prizes are in store for the winners.
Register here for special vExpert Challenge time slots and compete against fellow vExperts for some great Docker swag.

Monday, August 28th @ 3:10pm
Tuesday, August 29th @ 3:10pm

To learn more about Docker solutions for IT:

Visit IT Starts with Docker and sign up for ongoing alerts
Learn more about Docker Enterprise Edition
Start a hosted trial
Sign up for upcoming webinars

The post Test Drive Docker Enterprise Edition at VMworld 2017 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

New Offers in Azure Marketplace!

14 great new cloud offerings were published to Azure Marketplace last month. Check ‘em out!

– The Azure Marketplace Team

Denodo Platform: A leading data virtualization product that provides agile, high-performance data integration and data abstraction across the broadest range of enterprise, cloud, and big data sources, and exposes real-time data models for expedited use by other applications and business users. Learn more on Azure Marketplace.

Kinetica (BYOL): A GPU-accelerated, in-memory analytics database that delivers truly real-time responses to queries on large, complex, and streaming data sets: 100x faster performance than traditional databases. Learn more on Azure Marketplace.

Informatica Big Data Management 10.1.1 U2 (BYOL): Provides data management solutions to quickly and holistically integrate, govern, and secure big data for your business. Learn more on Azure Marketplace.

Lumify: Altamira LUMIFY is a powerful big data fusion, analysis, and visualization platform that supports the development of actionable intelligence. Learn more on Azure Marketplace.

AppGate: AppGate for Azure supports fine-grained, dynamic access control to Azure resources. Learn more on Azure Marketplace.

NetConnect: NetConnect secures data by locking it within the cloud environment, and enabling users to remotely interact with files and applications as if they were local to their device. Learn more on Azure Marketplace.

SQLstream Blaze: Enterprises like Amazon use SQLstream Blaze to easily build, test, deploy, and update streaming applications in minutes – applications that keep operations running at optimal efficiency, protect systems from security threats, and support real-time customer engagement. Learn more on Azure Marketplace.

CARTO Builder: Empowers business analysts to optimize operations and quickly deploy location applications with drag-and-drop analytics capabilities. Learn more on Azure Marketplace.

vSEC:CMS C-Series: The vSEC:CMS system is fully functional with minidriver-enabled smart cards, and it streamlines all aspects of a smart card management system by connecting to enterprise directories. Learn more on Azure Marketplace.

VU Application Server: Manage all your security with simplicity and administer your policies in a flexible platform. VU Application Server is the authentication server that allows companies, institutions, and organizations to deploy a robust authentication strategy for local and remote access to applications. Learn more on Azure Marketplace.

Identity Orchestration and Management Portal: Imagine collapsing multiple management portals for the many online services and on-premise applications into a single management interface in a browser. Learn more on Azure Marketplace.

Solution Templates

Viptela vEdge Cloud Router (3 NICs): Viptela vEdge Cloud is a software router that supports all of the capabilities available on Viptela's industry-leading SD-WAN platform. Learn more on Azure Marketplace.

Informatica Big Data Management 10.1.1 U2 (BYOL): Provides data management solutions to quickly and holistically integrate, govern, and secure big data for your business. Learn more on Azure Marketplace.

Teradata Server Management: Monitors Teradata Database instances and generates alerts related to database and OS errors and operational state changes. Learn more on Azure Marketplace.

Quelle: Azure

Kubernetes Meets High-Performance Computing

Editor's note: today's post is by Robert Lalonde, general manager at Univa, on supporting mixed HPC and containerized applications.

Anyone who has worked with Docker can appreciate the enormous gains in efficiency achievable with containers. While Kubernetes excels at orchestrating containers, high-performance computing (HPC) applications can be tricky to deploy on Kubernetes. In this post, I discuss some of the challenges of running HPC workloads with Kubernetes, explain how organizations approach these challenges today, and suggest an approach for supporting mixed workloads on a shared Kubernetes cluster. We will also provide information and links to a case study on a customer, IHME, showing how Kubernetes is extended to serve their HPC workloads seamlessly while retaining the scalability and interfaces familiar to HPC users.

HPC workloads' unique challenges

In Kubernetes, the base unit of scheduling is a Pod: one or more Docker containers scheduled to a cluster host. Kubernetes assumes that workloads are containers. While Kubernetes has the notion of Cron Jobs and Jobs that run to completion, applications deployed on Kubernetes are typically long-running services like web servers, load balancers, or data stores, and while these are highly dynamic, with pods coming and going, they differ greatly from HPC application patterns.

Traditional HPC applications often exhibit different characteristics:

In financial or engineering simulations, a job may be comprised of tens of thousands of short-running tasks, demanding low-latency and high-throughput scheduling to complete a simulation in an acceptable amount of time.
A computational fluid dynamics (CFD) problem may execute in parallel across many hundreds or even thousands of nodes using a message-passing library to synchronize state. This requires specialized scheduling and job management features to allocate and launch such jobs and then to checkpoint, suspend/resume, or backfill them.
Other HPC workloads may require specialized resources like GPUs, or access to limited software licenses. Organizations may enforce policies around which types of resources can be used by whom, to ensure projects are adequately resourced and deadlines are met.

HPC workload schedulers have evolved to support exactly these kinds of workloads. Examples include Univa Grid Engine, IBM Spectrum LSF, and Altair's PBS Professional. Sites managing HPC workloads have come to rely on capabilities like array jobs, configurable preemption, user, group, or project based quotas, and a variety of other features.

Blurring the lines between containers and HPC

HPC users believe containers are valuable for the same reasons other organizations do. Packaging logic in a container to make it portable, insulated from environmental dependencies, and easily exchanged with other containers clearly has value. However, making the switch to containers can be difficult.

HPC workloads are often integrated at the command line level. Rather than requiring coding, jobs are submitted to queues via the command line as binaries or simple shell scripts that act as wrappers. There are literally hundreds of engineering, scientific, and analytic applications used by HPC sites that take this approach and have mature, certified integrations with popular workload schedulers. While the notion of packaging a workload into a Docker container, publishing it to a registry, and submitting a YAML description of the workload is second nature to users of Kubernetes, this is foreign to most HPC users.
An analyst running models in R, MATLAB, or Stata simply wants to submit their simulation quickly, monitor its execution, and get a result as soon as possible.

Existing approaches

To deal with the challenges of migrating to containers, organizations running container and HPC workloads have several options:

Maintain separate infrastructures

For sites with sunk investments in HPC, this may be a preferred approach. Rather than disrupt existing environments, it may be easier to deploy new containerized applications on a separate cluster and leave the HPC environment alone. The challenge is that this comes at the cost of siloed clusters, increasing infrastructure and management cost.

Run containerized workloads under an existing HPC workload manager

For sites running traditional HPC workloads, another approach is to use existing job submission mechanisms to launch jobs that in turn instantiate Docker containers on one or more target hosts. Sites using this approach can introduce containerized workloads with minimal disruption to their environment. Leading HPC workload managers such as Univa Grid Engine Container Edition and IBM Spectrum LSF are adding native support for Docker containers, and Shifter and Singularity are important open source tools supporting this type of deployment as well. While this is a good solution for sites with simple requirements that want to stick with their HPC scheduler, they will not have access to native Kubernetes features, and this may constrain flexibility in managing long-running services where Kubernetes excels.

Use native job scheduling features in Kubernetes

Sites less invested in existing HPC applications can use the existing scheduling facilities in Kubernetes for jobs that run to completion. While this is an option, it may be impractical for many HPC users. HPC applications are often optimized either for massive throughput or for large-scale parallelism. In both cases, startup and teardown latencies have a discriminating impact. Latencies that appear acceptable for containerized microservices today would render such applications unable to scale to the required levels.

All of these solutions involve tradeoffs. The first option doesn't allow resources to be shared (increasing costs), and the second and third options require customers to pick a single scheduler, constraining future flexibility.

Mixed workloads on Kubernetes

A better approach is to support HPC and container workloads natively in the same shared environment. Ideally, users should see the environment appropriate to their workload or workflow type.

One approach to supporting mixed workloads is to allow Kubernetes and the HPC workload manager to co-exist on the same cluster, throttling resources to avoid conflicts. While simple, this means that neither workload manager can fully utilize the cluster. Another approach is to use a peer scheduler that coordinates with the Kubernetes scheduler. Navops Command by Univa is a solution that takes this peer-scheduling approach, augmenting the functionality of the Kubernetes scheduler. Navops Command provides its own web interface and CLI, and allows additional scheduling policies to be enabled on Kubernetes without impacting the operation of the Kubernetes scheduler or existing containerized applications.
Navops Command plugs into the Kubernetes architecture via the schedulerName attribute in the pod spec, acting as a peer scheduler that workloads can choose to use instead of the stock Kubernetes scheduler (a minimal manifest sketch appears at the end of this post).

With this approach, Kubernetes acts as a resource manager, making resources available to a separate HPC scheduler. Cluster administrators can use a visual interface to allocate resources based on policy, or simply drag sliders in a web UI to allocate different proportions of the Kubernetes environment to non-container (HPC) workloads and to native Kubernetes applications and services.

From a client perspective, the HPC scheduler runs as a service deployed in Kubernetes pods, operating just as it would on a bare-metal cluster. Navops Command provides additional scheduling features, including resource reservation, run-time quotas, workload preemption, and more. This environment works equally well for on-premise, cloud-based, or hybrid deployments.

Deploying mixed workloads at IHME

One client having success with mixed workloads is the Institute for Health Metrics & Evaluation (IHME), an independent health research center at the University of Washington. In support of their globally recognized Global Health Data Exchange (GHDx), IHME operates a significantly sized environment comprised of 500 nodes and 20,000 cores running a mix of analytic, HPC, and container-based applications on Kubernetes. This case study describes IHME's success hosting existing HPC workloads on a shared Kubernetes cluster using Navops Command.

For sites deploying new clusters that want access to the rich capabilities in Kubernetes but need the flexibility to run non-containerized workloads, this approach is worth a look. It offers sites the opportunity to share infrastructure between Kubernetes and HPC workloads without disrupting existing applications and business processes. It also allows them to migrate their HPC workloads to Docker containers at their own pace.
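
To make the schedulerName mechanism concrete, here is a minimal pod spec sketch that opts into a peer scheduler. The schedulerName field is standard Kubernetes; the scheduler name navops-command, the image, and the command are illustrative assumptions, since the post does not state the exact name Navops Command registers under.

    # A pod that asks to be placed by a peer scheduler rather than the
    # default Kubernetes scheduler.
    apiVersion: v1
    kind: Pod
    metadata:
      name: hpc-simulation
    spec:
      # Assumed scheduler name for illustration; omitting this field (or
      # setting "default-scheduler") keeps the stock scheduling behavior.
      schedulerName: navops-command
      restartPolicy: Never
      containers:
      - name: sim
        image: registry.example.com/models/sim:latest   # hypothetical image
        command: ["./run-simulation", "--input", "/data/scenario-1"]

If no scheduler is running under the given name, the pod simply remains Pending, which makes a misconfigured scheduler name easy to spot.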
Quelle: kubernetes