More than 60 Red Hat-led sessions confirmed for OpenStack Summit Boston

This spring’s 2017 OpenStack Summit in Boston should be another great and educational event. The OpenStack Foundation has posted the final session agenda detailing the entire week’s schedule of events. And once again Red Hat will be very busy during the four-day event, delivering more than 60 sessions, from technology overviews to deep dives into the OpenStack services for containers, storage, networking, compute, network functions virtualization (NFV), and much, much more.

As a Headline sponsor this spring, we also have a full-day breakout room on Monday, where we plan to present additional product and strategy sessions. And we will have two keynote presenters on stage: President and CEO Jim Whitehurst, and Vice President and Chief Technologist Chris Wright.
To learn more about Red Hat’s general sessions, look at the details below. We’ll add the agenda details of our breakout soon. Also, be sure to visit us at our booth in the center of the Marketplace to meet the team and check out our live demonstrations. Finally, we’ll have Red Hat engineers, product managers, consultants, and executives in attendance, so be sure to talk to your Red Hat representative to schedule an in-person meeting while there.
And in case you haven’t registered yet, visit our OpenStack Summit page for a discounted registration code to help get you to the event. We look forward to seeing you in Boston this May.
For more details on each session, click on the title below:
Monday sessions

Kuryr & Fuxi: delivering OpenStack networking and storage to Docker swarm containers
Antoni Segura Puimedon, Vikas Choudhary, and Hongbin Lu (Huawei)

Multi-cloud demo
Monty Taylor

Configure your cloud for recovery
Walter Bentley

Kubernetes and OpenStack at scale
Stephen Gordon

No longer considered an epic spell of transformation: in-place upgrade!
Krzysztof Janiszewski and Ken Holden

Fifty shades for enrollment: how to use Certmonger to win OpenStack
Ade Lee and Rob Crittenden

OpenStack and OVN – what’s new with OVS 2.7
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

Federation with Keycloak and FreeIPA
Martin Lopes, Rodrigo Duarte Sousa, and John Dennis

7 “must haves” for highly effective Telco NFV deployments
Anita Tragler and Greg Smith (Juniper Networks, Inc.)

Containerizing OpenStack deployments: lessons learned from TripleO
Flavio Percoco

Project update – Heat
Rabi Mishra, Zane Bitter, and Rico Lin (EasyStack)

Tuesday sessions

OpenStack Telemetry and the 10,000 instances
Julien Danjou and Alex Krzos

Mastering and troubleshooting NFV issues
Sadique Puthen and Jaison Raju

The Ceph power show – hands-on with Ceph: Episode 2 – ‘The Jewel Story’
Karan Singh, Daniel Messer, and Brent Compton

SmartNICs – paving the way for 25G/40G/100G speed NFV deployments in OpenStack
Anita Tragler and Edwin Peer (Netronome)

Scaling NFV – are containers the answer?
Azhar Sayeed

Free my organization to pursue cloud native infrastructure!
Dave Cain and Steve Black (East Carolina University)

Container networking using Kuryr – a hands-on lab
Sudhir Kethamakka and Amol Chobe (Ericsson)

Using software-defined WAN implementation to turn on advanced connectivity services in OpenStack
Ali Kafel and Pratik Roychowdhury (OpenContrail)

Don’t fail at scale: how to plan for, build, and operate a successful OpenStack cloud
David Costakos and Julio Villarreal Pelegrino

Red Hat OpenStack Certification Program
Allessandro Silva

OpenStack and OpenDaylight: an integrated IaaS for SDN and NFV
Nir Yechiel and Andre Fredette

Project update – Kuryr
Antoni Segura Puimedon and Irena Berezovsky (Huawei)

Barbican workshop – securing the cloud
Ade Lee, Fernando Diaz (IBM), Dave McCowan (Cisco Systems), Douglas Mendizabal (Rackspace), Kaitlin Farr (Johns Hopkins University)

Bridging the gap between deploying OpenStack as a cloud application and as a traditional application
James Slagle

Real time KVM and how it works
Eric Lajoie

Wednesday sessions

Project update – Sahara
Telles Nobrega and Elise Gafford

Project update – Mistral
Ryan Brady

Bite off more than you can chew, then chew it: OpenStack consumption models
Tyler Britten, Walter Bentley, and Jonathan Kelly (Metacloud/Cisco)

Hybrid messaging solutions for large scale OpenStack deployments
Kenneth Giusti and Andrew Smith

Project update – Nova
Dan Smith, Jay Pipes (Mirantis), and Matt Riedemann (Huawei)

Hands-on to configure your cloud to be able to charge your users using official OpenStack components
Julien Danjou, Christophe Sauthier (Objectif Libre), and Maxime Cottret (Objectif Libre)

To OpenStack or not OpenStack; that is the question
Frank Wu

Distributed monitoring and analysis for telecom requirements
Tomofumi Hayashi, Yuki Kasuya (KDDI Research), and Ryota Mibu (NEC)

OVN support for multiple gateways and IPv6
Russell Bryant and Numan Siddique

Kuryr-Kubernetes: the seamless path to adding pods to your datacenter networking
Antoni Segura Puimedon, Irena Berezovsky (Huawei), and Ilya Chukhnakov (Mirantis)

Unlocking the performance secrets of Ceph object storage
Karan Singh, Kyle Bader, and Brent Compton

OVN hands-on tutorial part 1: introduction
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

Kuberneterize your baremetal nodes in OpenStack!
Ken Savich and Darin Sorrentino

OVN hands-on tutorial part 2: advanced
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

The Amazon effect on open source cloud business models
Flavio Percoco, Monty Taylor, Nati Shalom (GigaSpaces), and Yaron Haviv (Iguazio)

Neutron port binding and impact of unbound ports on DVR routers with floatingIP
Brian Haley and Swaminathan Vasudevan (HPE)

Upstream contribution – give up or double down?
Assaf Muller

Hyper cool infrastructure
Randy Robbins

Strategic distributed and multisite OpenStack for business continuity and scalability use cases
Rob Young

Per API role-based access control
Adam Young and Kristi Nikolla (Massachusetts Open Cloud)

Logging work group BoF
Erno Kuvaja, Rochelle Grober, Hector Gonzalez Mendoza (Intel), Hieu LE (Fujitsu) and Andrew Ukasick (AT&T)

Performance and scale analysis of OpenStack using Browbeat
Alex Krzos, Sai Sindhur Malleni, and Joe Talerico

Scaling Nova: how CellsV2 affects your deployment
Dan Smith

Ambassador community report
Erwan Gallen, Lisa-Marie Namphy (OpenStack Ambassador), Akihiro Hasegawa (Equinix), Marton Kiss (Aptira), and Akira Yoshiyama (NEC)

Thursday sessions

Examining different ways to get involved: a look at open source
Rob Wilmoth

CephFS backed NFS share service for multi-tenant clouds
Victoria Martinez de la Cruz, Ramana Raja, and Tom Barron

Create your VM in an (almost) deterministic way – a hands-on lab
Sudhir Kethamakka and Geetika Batra

RDO’s continuous packaging platform
Matthieu Huin, Fabien Boucher, and Haikel Guemar (CentOS)

OpenDaylight Network Virtualization solution (NetVirt) with FD.io VPP data plane
Andre Fredette, Srikanth Vavilapalli (Ericsson), and Prem Sankar Gopanna (Ericsson)

Ceph snapshots for fun & profit
Gregory Farnum

Gnocchi and collectd for faster fault detection and maintenance
Julien Danjou and Emma Foley

Project update – TripleO
Emilien Macchi, Flavio Percoco, and Steven Hardy

Project update – Telemetry
Julien Danjou, Mehdi Abaakouk, and Gordon Chung (Huawei)

Turned up to 11: low latency Ceph block storage
Jason Dillaman, Yuan Zhou (INTC), and Tushar Gohad (Intel)

Who reads books anymore? Or writes them?
Michael Solberg and Ben Silverman (OnX Enterprise Solutions)

Pushing the boundaries of OpenStack – wait, what are they again?
Walter Bentley

Multi-site OpenStack – deployment option and challenges for a telco
Azhar Sayeed

Ceph project update
Sage Weil

 
Quelle: RedHat Stack


Mirantis Releases Kubernetes Distribution and Updated Mirantis OpenStack

Mirantis Cloud Platform 1.0 is a distribution of OpenStack and Kubernetes that can orchestrate VMs, Containers and Bare Metal

SUNNYVALE, CA – April 19, 2017 – Mirantis, the managed open cloud company, today announced availability of a commercially-supported distribution of OpenStack and Kubernetes, delivered in a single, integrated package, and with a unique build-operate-transfer delivery model.

“Today, infrastructure consumption patterns are defined by the public cloud, where everything is API driven, managed and continuously delivered. Mirantis OpenStack, which featured Fuel as an installer, was the easiest OpenStack distribution to deploy, but every new version required a forklift upgrade,” said Boris Renski, Mirantis co-founder and CMO. “Mirantis Cloud Platform departs from the traditional installer-centric architecture and towards an operations-centric architecture, continuously delivered by either Mirantis or the customers’ DevOps team with zero downtime. Updates no longer happen once every 6-12 months, but are introduced in minor increments on a weekly basis. In the next five to ten years, all vendors in the space will either find a way to adapt to this pattern or they will disappear.”

Along with launching Mirantis Cloud Platform (MCP) 1.0, Mirantis is also first to introduce a unique delivery model for the platform. Unlike traditional vendors that sell software subscriptions, Mirantis will onboard customers to MCP through a build-operate-transfer delivery model. The company will operate an open cloud platform for customers for a period of at least twelve months, with up to a four-nines SLA, before transferring the operational burden to the customer’s team, if desired. The delivery model ensures that not just the software, but also the customer’s team and processes, are aligned with DevOps best practices.

Unlike any other solution in the industry, customers onboarded to MCP have an option to completely transfer the platform under their own management. Everything in MCP is based on popular open standards with no lock-in, making it possible for customers to break ties with Mirantis and run the platform independently should they choose to do so.

“We are happy to see a growing number of vendors embrace Kubernetes and launch a commercially supported offering based on the technology,” said Allan Naim from the Kubernetes and Container Engine Product Team.

“As the industry embraces composable, open infrastructure, the ‘LAMP stack of cloud’ is emerging, made up of OpenStack, Kubernetes, and other key open technologies,” said Mark Collier, chief operating officer, OpenStack Foundation. “Mirantis Cloud Platform presents a new vision for the OpenStack distribution, one that embraces diverse compute, storage and networking technologies continuously rather than via major upgrades on six-month cycles.”

Specifically, Mirantis Cloud Platform 1.0 is:

Open Cloud Software — providing a single platform to orchestrate VMs, containers and bare metal compute resources by:

Expanding Mirantis OpenStack to include Kubernetes for container orchestration.
Complementing the virtual compute stacks with best-in-class open source software defined networking (SDN), specifically Mirantis OpenContrail for VMs and bare metal, and Calico for containers.
Featuring Ceph, the most popular open source software defined storage (SDS), for both Kubernetes and OpenStack.

DriveTrain — Mirantis DriveTrain sets the foundation for DevOps-style lifecycle management of the open cloud software stack by enabling continuous integration, continuous testing and continuous delivery through a CI/CD pipeline. DriveTrain enables:

Increased Day 1 flexibility to customize the reference architecture and configurations during initial software installation.
Greater ability to perform Day 2 operations such as post-deployment configuration, functionality and architecture changes.
Seamless version updates through an automated pipeline to a virtualized control plane to minimize downtime.

StackLight — enables strict compliance with availability SLAs by providing continuous monitoring of the open cloud software stacks through a unified set of software services and dashboards.

StackLight avoids lock-in by including best-in-breed open source software for log management, metrics and alerts.
It includes a comprehensive DevOps portal that displays information such as StackLight visualization and DriveTrain configuration settings.
The entire Mirantis StackLight toolchain is purpose-built for MCP to enable up to 99.99% uptime service level agreements with Mirantis Managed OpenStack.

With the release of MCP, Mirantis is also announcing end-of-life for Mirantis OpenStack (MOS) and Fuel by September 2019. Mirantis will be working with all customers currently using MOS on a tailored transition plan from MOS to MCP.

To learn more about MCP, watch an overview video and sign up for the introductory webinar at www.mirantis.com/mcp.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

###

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com
Quelle: Mirantis

Trust revolutions and the need for blockchain

In my previous post, I outlined what makes blockchain a transformative technology. It builds trust in data and business networks, which makes it the latest part of a long history of trust as the basis for economic transactions.
People trust each other based on personal knowledge. I trust you because I know you and what you have done.
That worked for small groups of people; tribes and ancient villages. If your roof leaks and the rains are coming, I will help you to fix it, because I trust that you will provide some reciprocal help at a future date. Sociologists say this type of favor-based personal trust breaks down once a community has more than about 150 people.
Therefore, throughout history, humanity had to invent new trust mechanisms to scale the economy.
The first real innovation in trust was coins, first minted around 640 BC in Lydia (modern Turkey). They are a universal mechanism of immediate asset exchange that enables someone to sell an olive crop, take the coins to the local market and buy clothes for their family. Coins have no inherent value, but each person trusts them because everyone else does and they are backed by a central trust authority, in the past a king or emperor, with power derived from the gods. As long as the king and his kingdom stood, the coin had value and citizens could trust it.
Coins enabled accumulation of wealth. There are only so many favors you can accumulate, but there is no limit to the number of coins you can have. This enabled villages to gather wealth and become cities, and it enabled cities to levy taxes to build temples, walls, roads and theaters.
Coins also enabled portability of wealth. A professional soldier in a field in England could take coins back to his family in Greece, for example.
This trust in coins, based on central trust authorities, underpinned the classical world, enabled trading networks such as that of the Phoenicians and helped to build the first great empires: Roman, Persian and Han. Human economic activity then stayed roughly the same for the next 2,000 years.
Around 1500 AD, the ‘new world’ was re-discovered by Europeans, creating a demand for exploration and investment. There was also a wave of religious reformation across Europe, which meant that it became acceptable to make money from money (this had been considered a sin).
To build a ship and sail it across the ocean was incredibly expensive and highly risky. Very few individuals could afford that, but groups of people could fund a share in a ship and take a share of the risk and profits, which was more acceptable. The concept of limited liability companies was born.
Insurers such as Lloyd’s of London offered to spread the risk on those ships and banks started to collect money from small investors, buy up the shares and pass on the returns in smaller (but more reliable) amounts.
The real innovation here was intangible money, money which you cannot see but which will come back to you in the future. You can hold a coin in your hand; you can’t hold a share. You can hold a piece of paper which says ‘share’, but you are ultimately trusting in the limited company, the insurers, the banks and the legislation which underpins that. Lawyers call these “legal fictions.”  They don’t actually exist, yet we put our trust in them anyway.
This concept of a monetary system saw an explosion in the amount of capital available for exploration and trade. It underpinned the great commercial empires like the East India Company and the Hudson’s Bay Company, funded the Industrial Revolution and paved the way for Bretton Woods and the modern economic system we have today.
All of that is based on central trust authorities. But we’ve moved on from kings and gods as the basis for currency, in favor of corporations, banks and governments. That’s why we need blockchain, which I’ll discuss in my next post.
Learn more about the IBM cloud-based blockchain platform.
Quelle: Thoughts on Cloud

Furious Indians Are Leaving Snapchat One-Star Reviews In The App Store Because They're Mad At The CEO

A former Snap Inc. employee has claimed that CEO Evan Spiegel allegedly said that he didn’t “want to expand into poor countries like India and Spain.”

A former Snap Inc. employee has claimed in a lawsuit that CEO Evan Spiegel said that Snapchat was “only for rich people”, and that he didn’t “want to expand into poor countries like India and Spain.”

The news was reported by Variety earlier this week.

In a statement provided to BuzzFeed News, a Snap Inc. spokesperson said: “This is ridiculous. Obviously Snapchat is for everyone! It’s available worldwide to download for free.”

Over the weekend, however, Indians battered the Snapchat app with angry reviews and poor ratings in the Indian App Store.

They called Spiegel “delusional”…


Quelle: BuzzFeed

How do you build 12-factor apps using Kubernetes?

It’s said that there are 12 factors that define a cloud-native application. It’s also said that Kubernetes is designed for cloud native computing. So how do you create a 12-factor application using Kubernetes? Let’s take a look at exactly what twelve-factor apps are and how they relate to Kubernetes.
What is a 12 factor application?
The Twelve-Factor App is a manifesto on architectures for Software as a Service created by Heroku. The idea is that in order to be really suited to SaaS and avoid problems with software erosion — where, over time, an application that’s not updated gets to be out of sync with the latest operating systems, security patches, and so on — an app should follow these 12 principles:

1. Codebase: One codebase tracked in revision control, many deploys
2. Dependencies: Explicitly declare and isolate dependencies
3. Config: Store config in the environment
4. Backing services: Treat backing services as attached resources
5. Build, release, run: Strictly separate build and run stages
6. Processes: Execute the app as one or more stateless processes
7. Port binding: Export services via port binding
8. Concurrency: Scale out via the process model
9. Disposability: Maximize robustness with fast startup and graceful shutdown
10. Dev/prod parity: Keep development, staging, and production as similar as possible
11. Logs: Treat logs as event streams
12. Admin processes: Run admin/management tasks as one-off processes

Let’s look at what all of this means in terms of Kubernetes applications.
Principle I. Codebase
Principle 1 of a 12 Factor App is “One codebase tracked in revision control, many deploys”.
For Kubernetes applications, this principle is actually embedded in the nature of container orchestration itself.  Typically, you create your code using a source control repository such as a git repo, then store specific versions of your images in the Docker Hub. When you define the containers to be orchestrated as part of a Kubernetes Pod, Deployment, or DaemonSet, you also specify a particular version of the image, as in:

spec:
  containers:
  - name: acct-app      # container names must be lowercase DNS-1123 labels, so AcctApp becomes acct-app
    image: acct-app:v3  # a specific, versioned image pulled from your registry

In this way, you might have multiple versions of your application running in different deployments.
Applications can also behave differently depending on the configuration information with which they run.
Principle II. Dependencies
Principle 2 of a 12 Factor App is “Explicitly declare and isolate dependencies”.
Making sure that an application’s dependencies are satisfied is something that is practically assumed. For a 12 factor app, that includes not just making sure that the application-specific libraries are available, but also not counting on, say, shelling out to the operating system and assuming system libraries such as curl will be there.  A 12 factor app must be self-contained.
That includes making sure that the application is isolated enough that it’s not affected by conflicting libraries that might be installed on the host machine.
Fortunately, if an application does have any specific or unusual system requirements, both of these requirements are handily satisfied by containers; the container includes all of the dependencies on which the application relies, and also provides a reasonably isolated environment in which the container runs.  (Contrary to popular belief, container environments are not completely isolated, but for most situations, they are Good Enough.)
For applications that are modularized and depend on other components, such as an HTTP service and a log fetcher, Kubernetes provides a way to combine all of these pieces into a single Pod, for an environment that encapsulates those pieces appropriately.
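To make this concrete, here is a minimal sketch of such a Pod; the image names and the shared log path are hypothetical, not from the original post:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-fetcher
spec:
  containers:
  - name: http-service               # the main application container
    image: example/http-service:1.0  # hypothetical image
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: logs
      mountPath: /var/log/app        # the app writes its logs here
  - name: log-fetcher                # sidecar that reads and ships those logs
    image: example/log-fetcher:1.0   # hypothetical image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                     # scratch volume shared by both containers

Both containers are scheduled together, share the volume, and can reach each other on localhost, which is exactly the kind of encapsulation this principle calls for.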
Principle III. Config
Principle 3 of a 12 Factor App is “Store config in the environment”.
The idea behind this principle is that an application should be completely independent from its configuration. In other words, you should be able to move it to another environment without having to touch the source code.
Some developers achieve this goal by creating configuration files of some sort, specifying details such as directories, hostnames, and database credentials. This is an improvement, but it does carry the risk that someone will check a config file into the source control repository.
Instead, 12 factor apps store their configurations as environment variables; these are, as the manifesto says, “unlikely to be checked into the repository by accident”, and they’re operating system independent.
Kubernetes enables you to specify environment variables in manifests via the Downward API, but as these manifests themselves do get checked into source control, that’s not a complete solution.
Instead, you can specify that environment variables should be populated by the contents of Kubernetes ConfigMaps or Secrets, which can be kept separate from the application.  For example, you might define a Pod as:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
    - name: CONFIG_VERSION
      valueFrom:
        configMapKeyRef:
          name: redis-app-config
          key: version.id
As you can see, this Pod receives three environment variables, SECRET_USERNAME, SECRET_PASSWORD, and CONFIG_VERSION, the first two from referenced Kubernetes Secrets, and the third from a Kubernetes ConfigMap.  This enables you to keep them out of configuration files.
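For reference, here is a hedged sketch of manifests that could create the mysecret and redis-app-config objects referenced above; the values are placeholders invented for the example:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:              # plain-text convenience field; stored base64-encoded
  username: admin        # placeholder
  password: replace-me   # placeholder
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-app-config
data:
  version.id: "1.14"     # placeholder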
Of course, there’s still a risk of someone mis-handling the files used to create these objects, but it’s easier to keep them together and institute secure handling policies than it is to weed out dozens of config files scattered around a deployment.
What’s more, there are those in the community who point out that even environment variables are not necessarily safe for their own reasons.  For example, if an app crashes, it may save all of the environment variables to a log or even transmit them to another service.  Diogo Mónica points to a tool called Keywhiz you can use with Kubernetes, creating secure secret storage.
Principle IV. Backing services
Principle 4 of the 12 Factor App is “Treat backing services as attached resources”.
In a 12 Factor app, any services that are not part of the core application, such as databases, external storage, or message queues, should be accessed as a service — via an HTTP or similar request — and specified in the configuration, so that the source of the service can be changed without affecting the core code of the application.
For example, if your application uses a message queuing system, you should be able to change from RabbitMQ to ZeroMQ (or ActiveMQ or even something else) without having to change anything but configuration information.
This requirement has two implications for a Kubernetes-based application.
First, it means that you must think about how your applications take in (and give out) information. For example, if you have a backing database, you wouldn’t want to have a local MySQL instance, even if you’re replicating it to other instances. Instead, you would want to have a separate container that handles database operations, and make those operations callable via an API. This way, if you needed to change to, say, PostgreSQL or a remotely hosted MySQL server, you could create a new container image, update the Pod definition, and restart the Pod (or more likely the Deployment or StatefulSet managing it).
Similarly, if you’re storing credentials or address information in environment variables backed by a ConfigMap, you can change that information and replace the Pod.
Note that both of these examples assume that though you’re not making any changes to the source code (or even the container image for the main application) you will need to replace the Pod; the ability to do this is actually another principle of a 12 Factor App.
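As a rough illustration of this pattern (the names and URL are assumptions for the example), the backing service’s address can live in a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: backing-services
data:
  # Swapping from one MySQL-compatible endpoint to another means editing
  # this value and replacing the Pod; the application image never changes.
  DATABASE_URL: "mysql://db.example.com:3306/accounts"

The container spec can then pull every key in as an environment variable:

    envFrom:
    - configMapRef:
        name: backing-services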
Principle V. Build, release, run
Principle 5 of the 12 Factor App is “Strictly separate build and run stages”.
These days it’s hard to imagine a situation where this is not true, but a twelve-factor app must have a separate build stage.  In other words, you should be able to build or compile the code, then combine that with specific configuration information to create a specific release, then deliberately run the release.
Releases should be identifiable.  You should be able to say, “This deployment is running Release 1.14 of this application” or something similar, the same way we say we’re running “the OpenStack Ocata release” or “Kubernetes 1.6”.  They should also be immutable; any changes should lead to a new release.  If this sounds daunting, remember that when we say “application” we’re no longer talking about large, monolithic releases.  Instead, we’re talking about very specific microservices, each of which has its own release, and which can bump releases without causing errors in consuming services.
All of this is so that when the app is running, that “run” process can be completely automated. Twelve factor apps need to be capable of running in an automated fashion because they need to be capable of restarting should there be a problem.
Translating this to the Kubernetes realm, we’ve already said that the application needs to be stored in source control, then built with all of its dependencies.  That’s your build process.  We talked about separating out the configuration information, so that’s what needs to be combined with the build to make a release. And the ability to automatically run the application — or multiple copies of the application — is precisely what Kubernetes constructs like Deployments, ReplicaSets, and DaemonSets do.
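Putting those pieces together, here is a minimal sketch, with hypothetical names, of a Deployment that pins an identifiable, immutable release and runs it automatically:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: acct-app
spec:
  replicas: 3                  # Kubernetes keeps three copies running unattended
  selector:
    matchLabels:
      app: acct-app
  template:
    metadata:
      labels:
        app: acct-app
    spec:
      containers:
      - name: acct-app
        image: acct-app:v1.14  # an identifiable, immutable release; any change means a new tag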
Principle VI. Processes
Principle 6 of the 12 Factor App is “Execute the app as one or more stateless processes”.
Stateless processes are a core idea behind cloud native applications. Every twelve-factor application needs to run in individual, share-nothing processes. That means that any time you need to persist information, it needs to be stored in a backing service such as a database.
If you’re new to cloud application programming, this might be deceptively simple; many developers are used to “sticky” sessions, storing information in the session with the confidence that the next request will come to the same server. In a cloud application, however, you must never make that assumption.
Instead, if you’re running a Kubernetes-based application that hosts multiple copies of a particular pod, you must assume that subsequent requests can go anywhere.  To solve this problem, you will want to use some sort of backing volume or database for persistence.
One caveat to this principle is that Kubernetes StatefulSets can enable you to create Pods with stable network identities, so that you can, theoretically, direct requests to a particular pod. Technically speaking, if the process doesn’t actually store state, and the pod can be deleted and recreated and still function properly, it satisfies this requirement — but that’s probably pushing it a bit.
Principle VII. Port binding
Principle 7 of the 12 Factor App is “Export services via port binding”.
In an environment where we’re assuming that different functionalities are handled by different processes, it’s easy to make the connection that these functionalities should be available via a protocol such as HTTP, so it’s common for applications to be run behind web servers such as Apache or Tomcat.  Twelve-factor apps, however, should not depend on an additional application in that way; remember, every function should be in its own process, isolated from everything else. Instead, the 12 Factor App manifesto recommends adding a web server library or something similar to the app itself, so that the app can await requests on a defined port, whether it’s using HTTP or another protocol.
In a Kubernetes-based application, this is done partly through the architecture of the application itself, and partly by making sure that the application has all of its dependencies as part of the creation of the base containers on which the application is built.
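A hedged sketch of how this surfaces in a manifest (names and ports are assumptions): the app binds its own port via containerPort, and a Service exports it to the rest of the cluster:

apiVersion: v1
kind: Service
metadata:
  name: acct-app
spec:
  selector:
    app: acct-app       # routes to Pods whose embedded server binds port 8080 itself
  ports:
  - port: 80            # port other services in the cluster connect to
    targetPort: 8080    # port the app's own server listens on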
Principle VIII. Concurrency
Principle 8 of the 12 Factor App is to “Scale out via the process model”.
When you’re writing a twelve-factor app, make sure that you’re designing it to be scaled out, rather than scaled up. That means that in order to add more capacity, you should be able to add more instances rather than more memory or CPU to the machine on which the app is running. Note that this specifically means being able to start additional processes on additional machines, which is, fortunately, a key capability of Kubernetes.
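As a sketch of scaling out in practice (the names are illustrative), you can raise the replicas count on a Deployment by hand, or let a HorizontalPodAutoscaler add and remove instances for you:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: acct-app
spec:
  scaleTargetRef:                      # the Deployment whose replica count is adjusted
    apiVersion: apps/v1
    kind: Deployment
    name: acct-app
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # add Pods when average CPU use crosses 80%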
Principle IX. Disposability
Principle 9 of the 12 Factor App is to “Maximize robustness with fast startup and graceful shutdown”.
It seems like this principle was tailor-made for containers and Kubernetes-based applications. The idea that processes should be disposable means that at any time, an application can die and the user won’t be affected, either because there are others to take its place, because it’ll start right up again, or both.
Containers are built on this principle, of course, and Kubernetes structures that manage multiple instances and maintain a certain level of availability even in the face of problems, such as ReplicaSets, complete the picture.
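Here is a hedged sketch of how a Pod spec expresses this; the /healthz endpoint and port are assumptions about the app:

spec:
  terminationGracePeriodSeconds: 30   # time to finish in-flight work after SIGTERM
  containers:
  - name: acct-app
    image: acct-app:v1.14
    readinessProbe:                   # receives no traffic until the app reports ready
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 2
    livenessProbe:                    # restarted if it stops responding
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10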
Principle X. Dev/prod parity
Principle 10 of the 12 Factor App is “Keep development, staging, and production as similar as possible”.
This is another principle that seems like it should be obvious, but is deeper than most people think. On the surface level, it does mean that you should have identical development, staging, and production environments, inasmuch as that is possible. One way to accomplish this is through the use of Kubernetes namespaces, enabling you to (theoretically) run code on the same actual cluster against the same actual systems while still keeping environments separate. In some situations, you can also use tools such as Minikube or kubeadm-dind-cluster to create near-clones of production systems.
At a deeper level, though, as the Twelve-Factor App manifesto puts it, it’s about three different types of “gaps”:

The time gap: A developer may work on code that takes days, weeks, or even months to go into production.

The personnel gap: Developers write code, ops engineers deploy it.

The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.

The goal here is to create a Continuous Integration/Continuous Deployment situation in which changes can go into production virtually immediately (after testing, of course!), deployed by the developers who wrote it so they can actually see it in production, using the same tools on which the code was actually written in order to minimize the possibility of compatibility errors between environments.
Some of these factors are outside of the realm of Kubernetes, of course; the personnel gap is a cultural issue, for example. The time and tools gaps, however, can be helped in two ways.
For the time gap, Kubernetes-based applications are, of course, based on containers, which themselves are based on images that are stored in version-control systems, so they lend themselves to CI/CD. They can also be updated via rolling updates that can be rolled back in case of problems, so they’re well-suited to this kind of environment.
As far as the tools gap is concerned, the architecture of Kubernetes-based applications make it easier to manage, both by making local dependencies simple to include in the various images, and by modularizing the application in such a way that external backing services can be standardized.
Principle XI. Logs
Principle 11 of the 12 Factor App is to “Treat logs as event streams”.
While most traditional applications store log information in a file, the Twelve-Factor app directs it, instead, to stdout as a stream of events; it’s the execution environment that’s responsible for collecting those events. That might be as simple as redirecting stdout to a file, but in most cases it involves using a log router such as Fluentd and saving the logs to Hadoop or a service such as Splunk.
In Kubernetes, you have at least two choices for automatic logging capture: Stackdriver Logging if you’re using Google Cloud, and Elasticsearch if you’re not.  You can find more information on setting Kubernetes logging destinations here.
Principle XII. Admin processes
Principle 12 of the 12 Factor App is “Run admin/management tasks as one-off processes”.
This principle involves separating admin tasks, such as migrating a database or inspecting records, from the rest of the application. Even though they’re separate, however, they must still be run in the same environment and against the same base code and configuration as the application, and their code must be shipped alongside the application to prevent drift.
You can implement this a number of different ways in Kubernetes-based applications, depending on the size and scale of the application itself. For example, for small tasks, you might use kubectl exec to operate on a specific container, or you can use the Kubernetes Job to run a self-contained application. For more complicated tasks that involve orchestration of changes, however, you can also use Kubernetes Helm charts.
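For instance, a one-off database migration might ship as a Kubernetes Job that runs the same release as the application itself; the image and command here are hypothetical:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 2                        # retry a couple of times on failure
  template:
    spec:
      restartPolicy: Never               # run to completion once; no restart on success
      containers:
      - name: migrate
        image: acct-app:v1.14            # same code and configuration as the running release
        command: ["./manage", "migrate"] # hypothetical admin entry point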
How many of these factors did you hit?
Unless you’re still building desktop applications, chances are you can feel good about hitting at least a few of these essential principles of twelve-factor apps. But chances are you also found at least one or two you can probably work a little harder on.
So we want to know: which of these factors are the biggest struggle for you? Where do you think you need to work harder, and what would make it easier for you?  Let us know in the comments.
Thanks to Jedrzej Nowak, Piotr Shamruk, Ivan Shvedunov, Tomasz Napierala and Lukasz Oles for reviewing this article!
Check out more Kubernetes resources.
Quelle: Mirantis

Operations Engineer (STO Team)

Mirantis is the leading global provider of software and services for OpenStack™, a massively scalable and feature-rich open source cloud operating system. OpenStack is used by hundreds of companies, including AT&T, Cisco, HP, Internap, NASA, Dell, GE, and many more. As a leading global provider, Mirantis offers the leading OpenStack technology platform coupled with a unique, cost-effective global services delivery model founded on years of deep software engineering experience for demanding Fortune 1000 companies.

Mirantis is inviting enthusiastic Operations engineers who will be extending OpenStack to support enterprise-grade private IaaS platforms for the company's customers. We need talented engineers who are willing to work at the intersection of IT and software engineering, are passionate about open source, and are not afraid of maintaining a huge codebase written by the best developers in the area.

Responsibilities:
System administration on Linux (Ubuntu, CentOS, etc.)
Technical support of OpenStack products for customers
Testing components of cloud applications using Python in case of alarm conditions
Troubleshooting OpenStack installations and fixing bugs in OpenStack components
Participating in public activities: user groups, conferences, and the company's blog, both in Russia and the USA

Requirements:
Excellent Linux system administration and troubleshooting skills
Good knowledge of Python
Good understanding of networking concepts and protocols

Nice to have:
Experience working with and maintaining large Python codebases
Experience working with virtualization solutions (KVM, Xen)
Understanding of NAS/SAN
Awareness of distributed file systems (Gluster, Ceph)
Experience with configuring and extending monitoring tools (Nagios, Ganglia, Zabbix)
Experience working with configuration management tools (Chef, Puppet, Cobbler)
Experience deploying and extending OpenStack is a plus
Fluent English

We offer:
Competitive salary (discussed at interview)
Career and professional growth
20 working days of paid vacation, 100% paid sick leave
Medical insurance
Benefit program
Quelle: Mirantis

Overloaded? Digital assistants to the rescue

Data is empowering us like never before. But there’s a flip side to having access to so much valuable data: information overload. That’s when data causes more pain than gain. It looks something like this:

It all feels like a bit too much.
The catch-22 of “knowledge work”
The term “knowledge workers” generally describes anyone with a desk job. That’s tens of millions of people across the globe. Each day, these workers must process, analyze and manage information to solve problems and innovate. Already, information overload is hurting workplace productivity.
IDC reports that digital data will surge to 163 zettabytes by 2025. That’s roughly 10 times the 16.1 zettabytes of data generated in 2016. On the other hand, the number of knowledge workers is shrinking. McKinsey Global Institute predicts a shortage of 80 million knowledge workers worldwide. So while workloads are increasing, the workforce is decreasing.
What’s the solution to this dilemma? Intelligent digital assistants.

Bring on the robots
Meet the next disruptor: the intelligent digital assistant. Enabled by artificial intelligence, these digital helpers can automate complex data work, helping employees do higher-value work. Sifting through mounds of data, prioritizing projects and managing tedious tasks are just a few of the many activities digital assistants can do.
Digital assistants in action: A usage scenario
How might a digital assistant make knowledge work easier? Let’s walk through a scenario.
Imagine Rob, a software account rep with more than 40 accounts, is having trouble staying on top of them. He’s overloaded with information and tools, including Salesforce, Gmail, Slack, Google Sheets, and LeadLander, among others. Rob needs to constantly check these disparate systems and synthesize information to get the insights he needs to effectively serve his customers.
A digital assistant could do a lot of this work for him. For example, Rob could have his assistant monitor product usage and Salesforce to detect customers who are up for renewals in the next three months, but haven’t been actively using the product. The assistant can send Rob notifications, enabling Rob to resolve any issues that may be occurring and increase the customer’s chances of renewing.
He could also have his assistant watch for new job postings from his clients on Indeed.com and LinkedIn. If the assistant finds a job ad from one of his clients, it can notify Rob that the customer may need additional software licenses for the new employees. The digital assistant can even proactively recommend additional tasks to offload to the assistant, enabling him to be more proactive. Collectively, these actions could add hours back to Rob’s work week while helping Rob better meet his goals.
Rob frequently works with Alice in customer support, who could also benefit from a digital assistant. To provide proactive service to customers, she could train her assistant to monitor new support tickets. If three or more customers report the same problem with the same product within a week, the assistant can automatically send the engineering team a high-priority ticket that includes a summary of the related tickets. The assistant can also send ongoing status updates to management, saving the support team significant time.
As these scenarios illustrate, there are three key capabilities that can make or break the effectiveness of a digital assistant app:
Usability
Many digital assistants require IT intervention because of complexity. That’s not exactly a motivator for adoption. Workers should look for a digital assistant that’s intuitive to set up and use—allowing them to easily create their own complex situations to detect and actions to take. The technology should also include a catalog of pre-built skills that can be personalized—to help users get started quickly.
Customization
Employees should look for a digital assistant that can work with the systems they use and accommodate their unique key performance indicators (KPIs) and work processes. In other words, the digital assistant should be as useful as a human assistant that they might train.
Intelligence
Detecting situations that matter, delivering context-driven notifications at the right times and automating actions are all intelligent capabilities. But after a while, the digital assistant should be able to learn from how workers use it, and make proactive recommendations. That’s what users would expect their human assistants to do. So employees should look for a digital assistant that they don’t have to micromanage.
To see how you can start optimizing your productivity with help from an intelligent digital assistant, check out this video.
Quelle: Thoughts on Cloud

Account Executive

We are transforming the industry, and you will be helping us lead the charge. As an account executive at Mirantis you will develop and execute a strategic and comprehensive business plan for your territory, including identifying core customers and mapping the benefits of OpenStack to customers' business requirements. You will take full responsibility for accurate forecasting, regular quarterly revenue delivery, and facilitation of sales enablement, and you will regulate the implementation of agreed account and business plans. Your overall focus areas will be prospecting, developing business, responding to RFPs, developing proposals for presentation to customers, and selling services and products. Cross-functional teams from Mirantis' Marketing, Solutions Engineering, Professional Services, and Product Development functions will provide support and tools for you to leverage to attain and exceed sales performance goals.

Primary Responsibilities
Pipeline generation: acquire new customers by calling into high levels within prospect organizations, networking, and working various customer account lists.
Participate in campaigns and conferences, work with the marketing team to understand new offers and leads in the assigned region, generate leads independently, and follow up appropriately.
Solution selling: consult with clients to determine their needs and work with application sales specialists to generate multi-product/service solutions. Take the initiative to learn new offers and products as they become available, and apply technology knowledge in business development efforts.
Proposal/presentation generation: incorporate an executive summary, ROI analysis, and solution design to develop customer-specific proposals and presentations.
Scope-of-work development: work with the customer and engineering team to define and document the project scope.
Relationship management: develop and manage relationships with current clients to develop additional business and ensure a high level of client satisfaction.
Accurate forecasting: capture activity information on a timely basis as client interactions occur to ensure accurate product and services forecasting.

Requirements
Advanced selling skills with a demonstrated track record of selling into complex organizations with multiple layers of decision makers.
10+ years of selling experience with telecom and other technology products and solutions such as Cisco, EMC (storage), VMware, NetApp, Oracle, and managed services.
Market knowledge (i.e., industry knowledge relevant to the geographic area) and technical knowledge are necessary; if assigned to vertical markets, knowledge of the public sector is required.
Business experience to analyze client business requirements and develop creative solutions, as well as the ability to utilize technical resources to complete an accurate and technically assured sales order.
Exceptional communication skills.
Ability to accept constructive criticism, and to maintain and develop positive team cohesiveness.
Ability to work constructively across cultural boundaries in a globally distributed organization.

What We Offer
Work in Silicon Valley with established leaders in their industry.
Work with exceptionally passionate, talented, and engaging colleagues.
Be a part of the cutting edge of open-source innovation since Linux.
High-energy atmosphere of a young company, competitive compensation package with a strong benefits plan and stock options.
Lots of freedom for creativity and personal growth.
Quelle: Mirantis

Here's How Accurate The Fitbit Alta HR Actually Is

We pitted Fitbit’s new ultra-slim wristband against a chest strap to see just how accurately it could measure beats per minute.

There are a lot of reasons why you’d want to get a fitness tracker. Maybe you’re trying to gather insights about your sleep, or get fitter, or lose weight. Whatever your goal, one thing is true: a fitness tracker is useless if it can’t accurately measure whatever it is you’re trying to track.

Fitbit recently debuted the Alta HR, an ultra-slim wristband with a new feature: heart rate tracking. But just how accurately can the wearable track your beats per minute? When we did a first impressions review of the Alta HR last month, our preliminary tests suggested that the tracker’s heart rate technology wasn’t always on point. So we spent the last two weeks working up a sweat while wearing the new band — and as we originally suspected, the Alta HR struggled to keep up with exercises with a lot of movement and high intensity bursts. It did, however, reliably measure resting heart rate.

Experts say most people don’t actually need to know their exact heart rate during workouts, so this may not matter to you. But accuracy does matter for some, namely people with heart conditions and endurance athletes. Fitbit also heavily markets the heart-rate tracking capability of its latest devices, like the Charge HR, Blaze, and Surge — and last year, it faced a class-action lawsuit over its allegedly inaccurate technology. (The company has called the allegations “baseless” and contested the lawsuit, as well as noting that the devices are “not intended to be scientific or medical devices.”)

With all that in mind, we set out to answer the question: Should you still consider the $150 Alta HR? Here’s what we found.

Unfortunately, we didn’t have access to an electrocardiograph, a medical device that’s considered the gold standard for heart-rate measuring. Instead we used the next best thing — a chest strap monitor, which multiple studies have shown to be significantly more accurate than wrist-based heart rate monitors, as our control. And we enlisted Open Lab Fellow/chart master Lam Vo to help us sort through all of the data.

For each run and bike ride, we wore a Polar H7 chest strap, in addition to the Fitbit Alta HR and Apple Watch. Nicole wore the devices on different wrists, while Stephanie wore the devices on the same wrist.

Nicole’s Long-ish Run: A Close Look at the Data

During my first-impressions workout, a quick 17-minute run with short, intense uphill sprints, the Alta HR had trouble measuring my heart rate. So, this time around, I went for a longer interval run, switching between a light jog and uphill sprints, for about 40 minutes. The terrain was a mix of trails and concrete on rolling hills.

After the run, I checked each device’s workout summaries. The Polar Beat app measured a 160 beats per minute (bpm) average. Compared to that measurement, the Apple Watch overestimated the average by 1 beat (161 bpm), and the Fitbit Alta HR underestimated it by 3 bpm, which is pretty impressive and consistent with Fitbit’s claim of an average absolute error of less than 6 bpm versus an EKG (electrocardiograph) chest strap.

I extracted the heart rate data from each device, and Lam created an interactive chart, so you can see exactly how the wearables performed throughout the duration of my run (try clicking the names of the devices and hovering your mouse over the graph!).


Quelle: BuzzFeed