Amazon ElastiCache Launches Enhanced Redis Backup and Restore with Cluster Resizing

We are excited to announce that Amazon ElastiCache now supports enhanced Redis Backup and Restore with Cluster Resizing. In October 2016, we launched support for Redis Cluster with Redis 3.2.4. In addition to letting you scale your Redis workloads across up to 15 shards with 3.5 TiB of data, that launch also allowed you to create cluster-level backups, which contain snapshots of each of the cluster’s shards. With this launch, we are adding the capability to restore a backup into a Redis Cluster with a different number of shards and a different slot distribution, allowing you to resize your Redis workload. ElastiCache will parse the Redis key space across the backup’s individual snapshots and redistribute the keys across the new cluster according to the requested number of shards and hash slots. Your new cluster can be either larger or smaller than the original, as long as the data fits in the selected configuration.
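For illustration, a resize-on-restore can be expressed as an ordinary replication-group creation that points at an existing backup. The boto3 sketch below is a minimal, hedged example; the cluster IDs, backup name, node type, shard count, and parameter group name are placeholders rather than values from this announcement.

    import boto3

    # Hedged sketch: restore a cluster-level backup into a new Redis Cluster with a
    # different shard count. All identifiers and sizes below are placeholders.
    elasticache = boto3.client("elasticache", region_name="us-east-1")

    elasticache.create_replication_group(
        ReplicationGroupId="orders-cache-resized",
        ReplicationGroupDescription="Restored from backup with a new shard count",
        SnapshotName="orders-cache-backup",        # the cluster-level backup to restore
        NumNodeGroups=10,                          # target number of shards in the new cluster
        ReplicasPerNodeGroup=1,
        CacheNodeType="cache.r3.large",
        Engine="redis",
        EngineVersion="3.2.4",
        CacheParameterGroupName="default.redis3.2.cluster.on",
    )

ElastiCache then redistributes the keys from the backup's snapshots across the ten shards requested here.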
Source: aws.amazon.com

Two Russian Spies Have Been Charged In The Massive Yahoo Email Hack

Dado Ruvic / Reuters

The Justice Department charged four men — two of whom are Russian Federal Security Service, or FSB, officers — Wednesday for stealing the personal information of at least 500 million Yahoo customers in a massive breach that rocked the company’s reputation and slashed hundreds of millions of dollars off its sale to Verizon.

The two non-FSB defendants were criminal hackers hired by the Russian officials to breach Yahoo’s network. The stolen account information was used to gain additional content from customers’ Yahoo accounts and accounts tied to other email providers, including Google.

Both Russian journalists and American diplomatic officials were then targeted using the data stolen in the hack. The charges for what was one of the largest computer intrusions in American history included conspiracy, economic espionage, wire fraud, and aggravated identity theft.

In a move that Acting Assistant Attorney General Mary McCord described as “beyond the pale,” the FSB officials behind the hack were members of a Russian unit that serves as the FBI’s liaison on cybercrime in Moscow. “These are the very people that we are supposed to work with cooperatively,” she said during a press conference Wednesday. “They turned against that type of work.”

One of the defendants, Alexsey Alexseyevich Belan, had been on the FBI’s most-wanted list for more than three years for cybercrime, McCord said. Another defendant, Karim Baratov, was arrested for the Yahoo breach yesterday in Canada. The US government will ask Russian law enforcement officials to extradite the remaining three defendants, who reside in Russia, said Paul Abbate, the executive assistant director of the FBI’s cyber branch.

“The indictment unequivocally shows the attacks on Yahoo were state-sponsored,” said Chris Madsen, Yahoo’s assistant general counsel and head of global law enforcement. “We’re committed to keeping our users and our platforms secure and will continue to engage with law enforcement to combat cybercrime.”

In December, Yahoo first revealed that hackers had stolen customer information from 1 billion Yahoo accounts in an attack dating back to 2013. The colossal breach was separate from the major intrusion that the Russian officials were charged with. That data breach was announced in September, when Yahoo said 500 million accounts had been compromised by a state-sponsored hacker in 2014. In both cases Yahoo said users’ email addresses, telephone numbers, dates of birth, and passwords were likely stolen.

News of the attacks came just months after Verizon announced plans to buy Yahoo for $4.83 billion last summer. The embarrassing disclosures prompted Verizon to seek a nearly 20% discount of Yahoo’s sale price, totaling $925 million. But the two companies instead agreed to slash $395 million off the deal price because of the damage from the breaches.

Following the company’s review of the 2014 breach, Yahoo said CEO Marissa Mayer would not receive her 2016 annual bonus. Mayer also said she would forgo her 2017 equity award. Together, the pay cut appears to amount to a personal loss of $14 million, but Mayer will still receive a $23 million “golden parachute” once Verizon’s purchase of Yahoo is completed later this year.


Source: BuzzFeed

Internal Metrics Show How Often Uber’s Self-Driving Cars Need Human Help

Jeff Swensen / Getty Images

Human drivers were forced to take control of Uber’s self-driving cars about once per mile driven in early March during testing in Arizona, according to an internal performance report obtained by BuzzFeed News. The report reveals for the first time how Uber’s self-driving car program is performing, using a key metric for evaluating progress toward fully autonomous vehicles.

Human drivers take manual control of autonomous vehicles during testing for a number of reasons, for example, to address a technical issue or avoid a traffic violation or collision. The self-driving car industry refers to such events as “disengagements,” though Uber uses the term “intervention” in the performance report reviewed by BuzzFeed News. During a series of autonomous tests the week of March 5, Uber saw disengagement rates greater than those publicly reported by some of its rivals in the self-driving car space.

When regulatory issues in December 2016 forced Uber to suspend a self-driving pilot program in San Francisco, the company sent some of its cars to Arizona. Since then, Uber has been testing its autonomous cars along two routes in the state. The first is a multi-lane street called Scottsdale Road — a straight, 24-mile stretch that runs through the city of the same name. According to Uber’s performance report on tests for the week of March 5, the company’s self-driving cars were able to travel an average of 0.67 miles on Scottsdale Road without human intervention and an average of 2 miles without a “bad experience” — Uber’s classification for incidents in which a car brakes too hard, jerks forcefully, or behaves in a way that might startle passengers. Uber described the overall passenger experience for this particular week as “not great,” but noted improvement compared to the prior week’s tests, which included one “harmful” incident — an event that might have caused human injury.

Uber has also been testing its autonomous vehicles on a “loop” at Arizona State University. According to the performance report reviewed by BuzzFeed News, self-driving cars used on the ASU loop saw “strong improvement” during the week of March 5, traveling a total of 449 miles in autonomous mode without a “critical” intervention (a case where the system kicked control back to the driver, or the driver regained control to prevent a likely collision). The vehicles were able to drive an average of 7 miles without a “bad experience” that might cause passenger discomfort (a 22% improvement over the week prior) and an average of 1.3 miles without any human intervention (a 15% improvement over the week prior). The cars made 128 trips with passengers, compared to 81 the prior week.

Uber told BuzzFeed its disengagements could also include instances in which the system kicks control back to a driver, and instances in which the car returns control to a human driver toward the end of a trip. The company declined to comment on the internal metrics obtained by BuzzFeed News or on how its disengagement rate compares to those of competitors. Uber also declined to say how many miles and hours the vehicles in Arizona drove in total the week of March 5.

“To take out the safety drivers, you would want far better performance than these numbers suggest.”

Bryant Walker Smith, a University of South Carolina law professor and a member of the US Department of Transportation’s Advisory Committee on Automation in Transportation, said it’s difficult to draw conclusions about the progress of Uber’s self-driving car program based on just one week of disengagement metrics, adding that the figures suggest safety drivers appear to intervene regularly out of caution – even in cases where an accident may not be imminent.

“To take out the safety drivers, you would want far better performance than these numbers suggest, and you’d want that to be consistently better performance,” Walker Smith said. “If these are actual bad experiences for someone inside the vehicle, then that probably doesn’t compare very favorably to human driving. How often do people go 10 miles or 10 minutes and have a viscerally bad experience?”

Uber’s internal metrics are specific to its vehicles in Arizona. The state does not require companies testing there to release data on how their self-driving cars perform. California is the only state that requires companies that test self-driving cars on public roads to submit annual reports detailing how many times they “disengage” autonomous mode. Because Uber only returned some self-driving vehicles to San Francisco’s roads this month, after its trials were shut down in the state in December for not obtaining the proper permits, it has not yet submitted a public report. But reports submitted by other companies to the California DMV do offer a point of comparison.

Alphabet’s Waymo said in a Jan. 5 report filed with the CA DMV that during the 636,000 miles its self-driving vehicles drove on public roads in California from December 2015 through November 2016, human drivers were forced to take control of their self-driving vehicles 124 times. That’s a rate of 0.2 disengagements per thousand miles, or roughly 0.0002 interventions per mile, compared with Uber’s averages of 0.67 and 1.3 miles per intervention on Scottsdale Road and the ASU loop, respectively. But Google’s report also notes that its figures don’t include all disengagements: “As part of testing, our cars switch in and out of autonomous mode many times a day. These disengagements number in the many thousands on an annual basis though the vast majority are considered routine and not related to safety.”
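Because the two sets of figures are quoted in inverted units (Waymo's is interventions per mile, Uber's are miles per intervention), a quick conversion to a common unit, using only the numbers reported above, makes the gap easier to see:

    # Convert the figures quoted above to a common unit: interventions per mile.
    waymo_miles = 636000          # miles Waymo drove in California, Dec. 2015 - Nov. 2016
    waymo_disengagements = 124

    waymo_rate = waymo_disengagements / waymo_miles    # ~0.0002 interventions per mile
    uber_scottsdale_rate = 1 / 0.67                    # ~1.5 interventions per mile
    uber_asu_rate = 1 / 1.3                            # ~0.8 interventions per mile

    print(round(waymo_rate, 4), round(uber_scottsdale_rate, 2), round(uber_asu_rate, 2))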

(Here are the CA DMV reports to compare Uber’s testing the week of March 5th in Arizona to the other companies that test on public roads in California and reported their statistics to the DMV for December 2015 through November 2016.)

Uber CEO Travis Kalanick has called self-driving cars an “existential threat” to his ride-hail business. (If a competitor were to develop autonomous vehicles and run an Uber-like service that did not require giving a cut to drivers, the rides would be cheaper.) In February 2015, Uber poached dozens of top roboticists from Carnegie Mellon University to jump-start a self-driving car program. Eighteen months later, Uber launched a pilot program in Pittsburgh that put passengers in the backseats of cars manned by a safety driver and a “copilot” riding shotgun. “Your self-driving Uber is arriving now,” the company wrote on its website. Headlines called it a “landmark” trial, and “the week self-driving cars became real.”

Uber’s self-driving program is quarterbacked by Anthony Levandowski, who helped build Google’s first self-driving car (that program is now called Waymo) before leaving to create his own startup, Otto. The ride-hail giant’s self-driving program is embroiled in a lawsuit from Alphabet over allegations that Levandowski stole a crucial part of Waymo’s self-driving technology before leaving. Uber acquired Otto in August, about three months after Levandowski launched the company out of stealth mode.

Levandowski became the self-driving program’s fourth leader in less than two years. Uber CEO Travis Kalanick described their relationship as “brothers from another mother,” saying the pair shared a desire to move autonomous technology from the research phase to the market. A few weeks after the Pittsburgh pilot launched, Levandowski set a new, ambitious goal for Uber’s engineers, according to an internal planning document viewed by BuzzFeed News: Prepare self-driving cars to run with no humans behind the wheel in San Francisco by January 2017.

In the end, in response to concerns raised by engineers who worried the goal was too aggressive, Uber did something far less ambitious. In December 2016, it launched a trial in San Francisco that mirrored its Pittsburgh pilot program: a human safety driver, accompanied by a copilot, would man each self-driving Volvo on the road in San Francisco. On its first day, one of the vehicles was caught running a red light. Uber attributed the traffic violation to human error, but the New York Times reported in February that “the self-driving car was, in fact, driving itself when it barreled through the red light.”

“When they let us know they were doing the test, we kind of had to play catch-up because nobody had ever asked us that question before.”

Meanwhile, Uber’s self-driving truck division Otto has been working toward its own goals. In October, Otto made headlines for completing the first publicly known self-driving truck delivery – a 120-mile beer haul along a public highway in Colorado for Anheuser-Busch, with the driver in the back seat.

“When they let us know they were doing the test, we kind of had to play catch-up because nobody had ever asked us that question before,” Mark Savage, deputy chief for the Colorado State Patrol, told BuzzFeed News. “We did put together a protocol that we had them walk through in order to determine whether the test was done safely and it was pretty involved.”

For one month ahead of the demo, the company performed trials along that route for 16 hours a day with human safety drivers behind the wheel, according to a Colorado state planning document obtained by BuzzFeed News. A video showed the truck driver crawling into the sleeper berth for the duration of the ride.

After completing five consecutive tests – a total of 625 miles – that did not necessitate human intervention, Otto embarked on a fully driverless demo at midnight on Oct. 20, with the state patrol “packaging” the truck with troopers during the event much like a motorcade, according to the planning document. The truck included two emergency stop buttons: one near the steering wheel, and one in the sleeper berth, where the driver sat during the ride, Uber told BuzzFeed. The company added the second button specifically for the delivery; in all other tests, Otto drivers remain behind the wheel.

Steven Shladover, chair of the federal Transportation Research Board’s vehicle highway automation committee, said Otto’s testing before the demonstration “tells nothing about whether the system is safe.” He said crashes occur when “some other driver happens to do something stupid. You’re not going to run into those circumstances by driving a few hundred hours.”

“Just the fact that they have however many hundred hours of driving doesn’t prove safety,” Shladover told BuzzFeed. “Putting together a show like that is nice for marketing purposes, but it doesn’t prove anything about the readiness of the technology to be put into public use.”

Source: BuzzFeed

AWS Step Functions Adds Customized Error Handling for AWS Lambda Functions

AWS Step Functions now gives you more flexibility in how AWS Lambda function error messages are handled in your workflow so you can improve the resiliency of serverless applications. Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly. With Step Functions, you can create state machines to orchestrate multiple Lambda functions and build multi-step serverless applications. 
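As a rough illustration of what this error handling looks like, the sketch below defines a state machine whose Task state retries a Lambda invocation on failure and catches a custom error name. The function name, error name, role ARN, and account IDs are hypothetical placeholders, not values from this announcement.

    import json
    import boto3

    # Hedged sketch: per-error Retry and Catch on a Lambda Task state in the
    # Amazon States Language. All ARNs and names below are placeholders.
    definition = {
        "StartAt": "ProcessOrder",
        "States": {
            "ProcessOrder": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
                "Retry": [{
                    # Retry transient task failures with exponential backoff.
                    "ErrorEquals": ["States.TaskFailed"],
                    "IntervalSeconds": 2,
                    "MaxAttempts": 3,
                    "BackoffRate": 2.0,
                }],
                "Catch": [{
                    # Route a custom error raised by the function to a failure state.
                    "ErrorEquals": ["OrderValidationError"],
                    "Next": "NotifyFailure",
                }],
                "End": True,
            },
            "NotifyFailure": {
                "Type": "Fail",
                "Error": "OrderValidationError",
                "Cause": "Order could not be validated",
            },
        },
    }

    sfn = boto3.client("stepfunctions")
    sfn.create_state_machine(
        name="order-workflow",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
    )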
Source: aws.amazon.com

RDO CI promotion pipelines in a nutshell

One of the key goals in RDO is to provide a set of well-tested and up-to-date repositories that can be used smoothly by our users:

Operators deploying OpenStack with any of the available tools.
Upstream projects using RDO repos to develop and test their patches, such as the OpenStack puppet modules, TripleO, or Kolla.

To include new patches in RDO packages as soon as possible, in the RDO Trunk repos we build and publish new packages whenever commits are merged in the upstream repositories. To ensure the content of these packages is trustworthy, we run different tests that help us identify any problems introduced by the committed changes.

This post provides an overview of how we test RDO repositories. If you are interested in collaborating with us on running and improving it, feel free to let us know in the #rdo channel on Freenode or on the rdo-list mailing list.

Promotion Pipelines

Promotion pipelines are composed of a set of related CI jobs that are executed for each supported OpenStack release to test the content of a specific RDO repository. Currently, promotion pipelines are executed in different phases:

Define the repository to be tested. RDO Trunk repositories are identified by a hash based on the upstream commit of the last built package. The content of these repos doesn't change over time. When a promotion pipeline is launched, it grabs the latest consistent hash repo and sets it to be tested in the following phases (a small sketch of this step follows the phase descriptions below).

Build TripleO images. TripleO is the recommended deployment tool for production usage in RDO and, as such, is tested in RDO CI jobs. Before actually deploying OpenStack with TripleO, the required images are built.

Deploy and test RDO. We run a set of jobs which deploy and test OpenStack using different installers and scenarios to ensure they behave as expected. Currently, the following deployment tools and configurations are tested:

TripleO deployments. Using tripleo-quickstart we deploy two different configurations, minimal and minimal_pacemaker, which apply different settings that cover the most common options.
OpenStack Puppet scenarios. The puppet-openstack-integration project (a.k.a. p-o-i) maintains a set of puppet manifests to deploy different combinations and configurations of OpenStack services (scenarios) on a single server using the OpenStack puppet modules, and runs tempest smoke tests against the deployed services. The services tested in each scenario can be found in the README for p-o-i. Scenarios 1, 2 and 3 are currently tested in RDO CI.
Packstack deployment. As part of the upstream testing, Packstack defines three deployment scenarios to verify the correct behavior of the existing options. Note that tempest smoke tests are also executed in these jobs. In RDO CI we leverage those scenarios to test new packages built in the RDO repos.

Repository and images promotion. When all jobs in the previous phase succeed,
the tested repository is considered good and it is promoted so that users can use these
packages:

The repo is published via the CentOS CDN at https://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-<release>-tested/
The images are copied to https://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/<release>/delorean/ to be used by TripleO users.
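As promised above, here is a minimal sketch of the first phase: resolving the latest consistent hash repo for a release. The RDO Trunk URL layout and repo section name shown here are assumptions made for the example; the real pipeline jobs handle this step through their own tooling.

    import configparser
    import urllib.request

    # Hedged sketch: resolve the hash-pinned repo behind the "consistent" link for a
    # release. The URL layout and repo section name are assumptions for illustration.
    release = "ocata"
    consistent_repo_url = (
        "https://trunk.rdoproject.org/centos7-%s/consistent/delorean.repo" % release
    )

    with urllib.request.urlopen(consistent_repo_url) as response:
        repo_file = response.read().decode("utf-8")

    # .repo files are INI-style; the baseurl points at a hash-named directory whose
    # content does not change once published, so the whole pipeline tests one snapshot.
    parser = configparser.ConfigParser()
    parser.read_string(repo_file)
    pinned_baseurl = parser["delorean"]["baseurl"]

    print("Repository pinned for this promotion run:", pinned_baseurl)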

Tools used in RDO CI

Job definitions are managed using Jenkins Job Builder (JJB) via the gerrit review workflow on review.rdoproject.org.
weirdo is the tool we use to run the p-o-i and Packstack testing scenarios defined upstream inside RDO CI. It is composed of a set of ansible roles and playbooks that prepare the environment, then deploy and test the installers using the testing scripts provided by the projects.
TripleO Quickstart
provides a set of scripts, ansible roles and pre-defined configurations to deploy
an OpenStack cloud using TripleO
in a simple and fully automated way.
ARA is used to store and visualize the results of ansible playbook runs, making it easier to analyze and troubleshoot them.

Infrastructure

RDO is part of the CentOS Cloud Special Interest Group, so we run promotion pipelines on the CentOS CI infrastructure, where Jenkins is used as the continuous integration server.

Handling issues in RDO CI

An important aspect of running RDO CI is properly managing the errors found in the jobs included in the promotion pipelines. The root cause of these issues sometimes lies in the upstream OpenStack projects:

Some problems are not caught in devstack-based jobs running in the upstream gates.
In some cases, new versions of OpenStack services require changes in the deployment
tools (puppet modules, TripleO, etc…).

One of the contributions of RDO to upstream projects is to increase their test coverage and help identify problems as soon as possible. When we find them, we report them upstream as Launchpad bugs and propose fixes when possible.

Every time we find an issue, a new card is added to the TripleO and RDO CI Status
Trello board where
we track the status and activities carried out to get it fixed.

Status of promotion pipelines

If you are interested in the status of the promotion pipelines in RDO CI you can
check:

CentOS CI RDO view
can be used to see the result and status of the jobs for each OpenStack
release.

RDO Dashboard shows the overall status of RDO packaging and CI.

More info

TripleO quickstart demonstration by trown
Weirdo: A talk about CI in OpenStack and RDO by dmsimard.
ARA blog posts – from dmsimard's blog
CI in RDO: What do we test? – presentation at the RDO and Ceph Meetup BCN.

Source: RDO

Detours on the way to microservices

In 2008, I first heard Adrian Cockcroft of Netflix describe microservices as “fine-grained service oriented architecture.” I’d spent the previous six years wrestling with the more brutish, coarse-grained service-oriented architecture, its standards and so-called “best practices.” I knew then that unlike web-scale offerings such as Netflix, the road to microservices adoption by companies would have its roadblocks and detours.
It’s not quite ten years later, and I am about to attend IBM InterConnect, where microservice adoption by business seems inescapable. What better time to consider these detours and how to avoid them?
Conway’s law may be bi-directional
Melvin Conway introduced the idea that’s become known as Conway’s Law: “Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”
But I saw it occur in reverse: when enterprise software organizations first decided to adopt microservices and their disciplines, I observed development teams organize themselves around the (micro-) services being built. When constructing enterprise applications by coding and “wiring up” small, independently operating services, the development organization seemed to adjust itself to fit the software architecture, thereby creating silos and organizational friction.
More than the sum of its parts
When an organization first adopts microservices in its architecture, there are resource shortages. People who are skilled in the ways of microservices find themselves stretched far too thin. And specific implementation languages, frameworks or platforms can be in short supply. There’s loss of momentum, attention and effective use of time because the “experts” must continually switch context and change the focus of their attention.
As is usually the case with resource shortage, the issue is one of prioritization: When there are hundreds or even thousands of microservices to build and maintain, how are allocations of scarce resources going to be made? Who makes them and on what basis?
The cloud-native management tax
The adoption of microservices requires a variety of specialized, independent platforms to which developer, test and operations teams must attend. Many of these come with their own forms of management and management tooling. In one case, I looked through the list of management interfaces and tools for a newly minted, cloud-native application and discovered more than forty separate management tools in use. These tools covered the different programming languages; authentication; authorization; reporting; databases; caches; platform libraries; service dependencies; pipeline dependencies; the security threat model; audits; workflow; log aggregation; and much more. The full list was astonishing.
The benefits of cloud-native architecture do not come without a price: organizations will need additional management tooling and must bear the cost of becoming skilled in those management tools.
Carrying forward the technical debt
When a company embraces cloud migration or digital transformation, a team may be chartered to re-architect and re-implement an existing, monolithic application, its associated data, external dependencies and technical interconnections. Too often, I discovered that the shortcuts and hard-coded aspects of the existing application were being re-implemented as well. There seemed to be a part of the process that was missing when the objective was to migrate an application.
In an upcoming blog post, I’ll consider some of the common detours and look to what practices and technologies are being used to avoid them.
Join me and other industry experts as we explore the world of microservices at IBM InterConnect on March 19–23, 2017.
 
Source: Thoughts on Cloud

Learn Docker with our DockerCon 2017 Hands-On Labs

We’re excited to announce that DockerCon 2017 will feature a comprehensive set of hands-on labs. We first introduced hands-on labs at DockerCon EU in 2015, and they were also part of DockerCon 2016 last year in Seattle. This year we’re offering a broader range of topics that cover the interests of both developers and operations personnel on both Windows and Linux (see below for a full list).
These hands-on labs are designed to be self-paced and are run from the attendee’s laptop. But don’t worry: all the infrastructure will be hosted again this year on Microsoft Azure. So all you will need is a laptop capable of instantiating a remote session over SSH (for Linux) or RDP (for Windows).

We’ll have a nice space set up in between the ecosystem expo and breakout rooms for you to work on the labs. There will be tables and stools along with power and wireless Internet access as well as lab proctors to answer questions. But, because of the way the labs are set up, you could also stop by, sign up, and take your laptop to a quiet spot and work on your own.
As you can tell, we’re pretty stoked about the labs, and we think you will be too.
See you in Austin!
DockerCon 2017 Hands-on Labs

Orchestration

In this lab you can play around with the container orchestration features of Docker. You will deploy a Dockerized application to a single host and test the application. You will then configure Docker Swarm Mode and deploy the same application across multiple hosts. You will then see how to scale the application and move the workload across different hosts easily.

Docker Networking

In this lab you will learn about key Docker Networking concepts. You will get your hands dirty by going through examples of a few basic concepts, learning about Bridge and Overlay networking, and finally learning about the Swarm Routing Mesh.

Modernize .NET Apps – for Devs.

A developer’s guide to app migration, showing how the Docker platform lets you update a monolithic application without doing a full rebuild. You’ll start with a sample app and see how to break components out into separate units, plumbing the units together with the Docker platform and the tried-and-trusted applications available on Docker Hub.

Modernize .NET Apps – for Ops.

An admin guide to migrating .NET apps to Docker images, showing how the build, ship, run workflow makes application maintenance fast and risk-free. You’ll start by migrating a sample app to Docker, and then learn how to upgrade the application, patch the Windows version the app uses, and patch the Windows version on the host – all with zero downtime.

Getting Started with Docker on Windows Server 2016

Get started with Docker on Windows, and learn why the world is moving to containers. You’ll start by exploring the Windows Docker images from Microsoft, then you’ll run some simple applications, and learn how to scale apps across multiple servers running Docker in swarm mode.

Building a CI / CD Pipeline in Docker Cloud

In this lab you will construct a CI / CD pipeline using Docker Cloud. You’ll connect your GitHub account to Docker Cloud, and set up triggers so that when a change is pushed to GitHub, a new version of your Docker container is built.

Discovering and Deploying Certified Content with Docker Store

In this lab you will learn how to locate certified containers and plugins on Docker Store. You’ll then deploy both a certified Docker image and a certified Docker plugin.

Deploying Applications with Docker EE (Docker DataCenter)

In this lab you will deploy an application that takes advantage of some of the latest features of Docker EE (Docker Datacenter). The tutorial will lead you through building a compose file that can deploy a full application on UCP in one click. Capabilities that you will use in this application deployment include:

Docker services
Application scaling and failure mitigation
Layer 7 load balancing
Overlay networking
Application secrets
Application health checks
RBAC-based control and visibility with teams

Vulnerability Detection and Remediation with Docker EE (Docker Datacenter)

Application vulnerabilities are a continuous threat and must be continuously managed. In this tutorial we will show you how Docker Trusted Registry (DTR) can detect known vulnerabilities through image security scanning. You will detect a vulnerability in a running app, patch the app, and then apply a rolling update to gradually deploy the update across your cluster without causing any application downtime.
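To give a flavor of the rolling-update step in that last lab, here is a minimal sketch using the Docker SDK for Python. The service name and image tag are placeholders, and the lab itself walks through the equivalent workflow with Docker EE tooling rather than this exact client code.

    import docker

    # Hedged sketch: roll a patched image out across a running swarm service.
    # The service name and image tag are placeholders, not values from the lab.
    client = docker.from_env()

    service = client.services.get("webapp")

    # Swarm replaces the service's tasks according to its update policy rather than
    # all at once, so the patched image rolls out without taking the app down.
    service.update(image="myorg/webapp:1.0.1")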

 
Learn More about DockerCon:

What’s new at DockerCon?
5 reasons to attend DockerCon
Convince your manager to send you to DockerCon
DockerCon for Windows containers practitioners 


Source: https://blog.docker.com/feed/