Volvo Is Opening A Self-Driving Car Research Center In Silicon Valley

Uber's Volvo XC90 self-driving car is shown in Pittsburgh, Pennsylvania, on September 13, 2016.

Aaron Josefczyk / Reuters

Volvo is opening a research and engineering center in Mountain View, California, where 70 engineers will work on developing autonomous driving, infotainment, and connectivity technology.

Employees will begin moving in as soon as next week. “We are putting the furniture in now as we speak,” Lex Kerssemakers, Volvo’s US chief executive, told BuzzFeed News in an interview.

The move puts Volvo closer to ride-hailing giant Uber, which is retrofitting Volvo XC90s with its own self-driving technology to put on the road in Pittsburgh. Volvo and Uber also recently announced they were partnering in a non-exclusive, $300 million deal to develop an autonomous car together.

The Swedish luxury carmaker joins a string of automakers that have recently opened research and development centers in Silicon Valley. Ford set up shop in Palo Alto in 2015 and plans to soon double its staff of 130 people, and General Motors has an office in the area as well. Mercedes-Benz opened a research facility in Sunnyvale, California in 2013.

For automakers, building a base in the Bay Area provides an opportunity to create partnerships with tech companies and startups, and to scout out potential acquisitions to get ahead in the race to develop self-driving vehicles. Ford, for example, says it is working with more than 40 startups on new car technology. The company also purchased Chariot, a San Francisco-based shuttle service, earlier this month.

Besides its plans with Uber, Volvo has several other investments in self-driving vehicle technology, which proponents say could reduce the number of car accidents by removing the chance of human error. The company plans to launch a pilot program in London next year that will give 100 people fully autonomous vehicles. It will launch a similar program in Sweden in early 2018, and it’s negotiating with several cities in China as well. The pilot program is aimed at helping the company understand how real people would use autonomous vehicles on a day-to-day basis, and how they would spend their time while sitting in cars that drive themselves.

“We have this vision that nobody should be killed in a Volvo,” Kerssemakers said. “Autonomous driving plays a very important role for us in reaching our vision.”

Quelle: BuzzFeed

Microsoft publishes compliance guidelines for the German Cloud

Microsoft is the first to bring the sovereign Cloud to Germany. Built on a data trustee model, the Microsoft Cloud Germany enables customers in the European Union (EU) and European Free Trade Association (EFTA) to store and manage customer data in compliance with applicable German laws and regulations, as well as key international standards.

As a first step after our successful launch, we want to provide our customers with the workbooks below. We listened to our customers' strong demand for guidance on regional regulations and compliance requirements; IT Grundschutz is one of the most important methodologies published by the German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik).

New workbooks for Microsoft Azure Germany

• Microsoft Cloud Germany – Compliance in the cloud for organizations in EU/EFTA is a new document published by Microsoft. It outlines the data trustee model that delivers the power and flexibility of Microsoft cloud services in an environment that provides both technical and legal protections for German customer data. With the data trustee model, all data that belongs to German, EU, and EFTA region customers is stored exclusively in datacenters on German soil, and a third party – the Data Trustee – alone controls access to Customer Data other than access initiated by Customer or Customer’s end users. Please visit this link to download the document.

• IT Grundschutz Compliance Workbook – Microsoft Azure Germany is a new workbook developed by HiSolutions AG, one of the most renowned consulting and auditing companies in Germany. It helps our clients achieve their IT Grundschutz certification with solutions and workloads deployed on Microsoft Azure Germany. It's based on the most recent version of IT Grundschutz, covering the relevant sections for cloud usage. Please visit this link to download the workbook.

For access to any of the documents mentioned above or any other compliance certifications achieved by Microsoft Azure, visit our Service Trust Portal or Microsoft Trust Center.
Quelle: Azure

What’s the big deal about running OpenStack in containers?

The post What's the big deal about running OpenStack in containers? appeared first on Mirantis | The Pure Play OpenStack Company.
Ever since containers began their meteoric rise in the technical consciousness, people have been wondering what it would mean for OpenStack. Some of the predictions were dire (that OpenStack would cease to be relevant), some were more practical (that containers are not mini VMs, and anyway, they need resources to run on, and OpenStack still existed to manage those resources).
But there were a few people who realized that there was yet another possibility: that containers could actually save OpenStack.
Look, it's no secret that deploying and managing OpenStack is difficult at best, and frustratingly impossible at worst. So what if I told you that using Kubernetes and containers could make it easy?
Mirantis has been experimenting with container-based OpenStack for the past several years – since before it was "cool" – and lately we'd decided on an architecture that would enable us to take advantage of the management capabilities and scalability that come with the Kubernetes container orchestration engine. (You might have seen the news that we've also acquired TCP Cloud, which will help us jump our R&D forward about 9 months.)
Specifically, using Kubernetes as an OpenStack underlay lets us turn a monolithic software package into discrete services with well-defined APIs that can be freely distributed, orchestrated, recovered, upgraded and replaced – often automatically based on configured business logic.
That said, it's more than just dropping OpenStack into containers, and talk is cheap. It's one thing for me to say that Kubernetes makes it easy to deploy OpenStack services. And frankly, almost anything would be easier than deploying, say, a new controller with today's systems.
But what if I told you you could turn an empty bare metal node into an OpenStack controller just by adding a couple of tags to it?
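As a rough sketch of what that tag-driven flow could look like (the label key, image name and manifest below are illustrative assumptions, not Mirantis's actual tooling): you tag the node with `kubectl label node metal-07 openstack-role=controller`, and a Deployment whose pod spec selects that label pulls the containerized control-plane service onto it:

```yaml
# Hypothetical sketch: a Deployment for a containerized nova-api that
# lands on any node carrying the tag openstack-role=controller.
apiVersion: extensions/v1beta1   # Deployment API group circa Kubernetes 1.4
kind: Deployment
metadata:
  name: nova-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nova-api
    spec:
      nodeSelector:
        openstack-role: controller   # the "tag" added to the bare-metal node
      containers:
      - name: nova-api
        image: example.org/openstack/nova-api:latest   # hypothetical image
        ports:
        - containerPort: 8774
```

The point of the pattern is that the provisioning logic lives in the orchestrator: adding or removing the node label is all the operator has to touch.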
Have a look at this video (you'll have to drop your information in the form, but it just takes a second):
Containerizing the OpenStack Control Plane on Kubernetes: auto-scaling OpenStack services
I know, right? Are you as excited about this as I am?
Quelle: Mirantis

New Dockercast episode with Mano Marks from Docker

In case you missed it, we launched Dockercast, the official Docker Podcast, last month, including all the DockerCon 2016 sessions available as podcast episodes.
In this podcast, we meet Mano Marks, Director of Developer Relations at Docker.  Mano catches us up on a lot of the new cool things that are going on with Docker.  We get into the new Docker 1.12 engine/swarm built-in orchestration. We also talk about some cool stuff that is happening with Docker and Windows as well as Raspberry Pi and Docker.
You can find the latest Dockercast episodes on the iTunes Store or via the SoundCloud RSS feed.
 
 


The post New Dockercast episode with Mano Marks from Docker appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Bringing Pokémon GO to life on Google Cloud

Posted by Luke Stone, Director of Customer Reliability Engineering

Throughout my career as an engineer, I’ve had a hand in numerous product launches that grew to millions of users. User adoption typically happens gradually over several months, with new features and architectural changes scheduled over relatively long periods of time. Never have I taken part in anything close to the growth that Google Cloud customer Niantic experienced with the launch of Pokémon GO.

As a teaser, I’ll start with a picture worth a thousand words:

Launch target and worst case estimates of player traffic. The game’s popularity surged to more than 50 times the initial target.

Our peers in the technical community have asked about the infrastructure that helped bring Pokémon GO to life for millions of players. Niantic and the Google Cloud teams put together this post to highlight some of the key components powering one of the most popular mobile games to date.

A shared fate
At our Horizon event today, we’ll be introducing Google Customer Reliability Engineering (CRE), a new engagement model in which technical staff from Google integrates with customer teams, creating a shared responsibility for the reliability and success of critical cloud applications. Google CRE’s first customer was Niantic, and its first assignment the launch of Pokémon GO — a true test if there ever was one!

Within 15 minutes of launching in Australia and New Zealand, player traffic surged well past Niantic’s expectations. This was the first indication to Niantic’s product and engineering teams that they had something truly special on their hands. Niantic phoned in to Google CRE for reinforcements, in anticipation of the US launch planned the next day. Niantic and Google Cloud — spanning CRE, SRE, development, product, support and executive teams — braced for a flood of new Pokémon Trainers, as Pokémon GO would go on to shatter all prior estimates of player traffic.

Creating the Pokémon game world
Pokémon GO is a mobile application that uses many services across Google Cloud, but Cloud Datastore became a direct proxy for the game’s overall popularity given its role as the game’s primary database for capturing the Pokémon game world. The graph opening this blog post tells the story: the teams targeted 1X player traffic, with a worst-case estimate of roughly 5X this target. Pokémon GO’s popularity quickly surged player traffic to 50X the initial target, ten times the worst-case estimate. In response, Google CRE seamlessly provisioned extra capacity on behalf of Niantic to stay well ahead of their record-setting growth.

Not everything was smooth sailing at launch! When issues emerged around the game’s stability, Niantic and Google engineers braved each problem in sequence, working quickly to create and deploy solutions. Google CRE worked hand-in-hand with Niantic to review every part of their architecture, tapping the expertise of core Google Cloud engineers and product managers — all against a backdrop of millions of new players pouring into the game.

Pokémon powered by containers
Beyond being a global phenomenon, Pokémon GO is one of the most exciting examples of container-based development in the wild. The application logic for the game runs on Google Container Engine (GKE) powered by the open source Kubernetes project. Niantic chose GKE for its ability to orchestrate their container cluster at planetary-scale, freeing its team to focus on deploying live changes for their players. In this way, Niantic used Google Cloud to turn Pokémon GO into a service for millions of players, continuously adapting and improving.

One of the more daring technical feats accomplished by Niantic and the Google CRE team was to upgrade to a newer version of GKE that would allow for more than a thousand additional nodes to be added to its container cluster, in preparation for the highly anticipated launch in Japan. Akin to swapping out the plane’s engine in-flight, careful measures were taken to avoid disrupting existing players, cutting over to the new version while millions of new players signed up and joined the Pokémon game world. On top of this upgrade, Niantic and Google engineers worked in concert to replace the Network Load Balancer, deploying the newer and more sophisticated HTTP/S Load Balancer in its place. The HTTP/S Load Balancer is a global system tailored for HTTPS traffic, offering far more control, faster connections to users and higher throughput overall — a better fit for the amount and types of traffic Pokémon GO was seeing.
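For concreteness, an in-place node upgrade and capacity bump of that sort would have been driven with gcloud commands along these lines (the cluster name, zone, version and node count are hypothetical placeholders; exact flags varied across gcloud releases of the time):

```shell
# Hypothetical sketch: upgrade the cluster's nodes to a newer Kubernetes
# release in place, without tearing down the running game.
gcloud container clusters upgrade pokemon-go-prod \
    --zone us-central1-b \
    --cluster-version 1.4.0

# Grow the node pool ahead of the Japan launch traffic spike.
gcloud container clusters resize pokemon-go-prod \
    --zone us-central1-b \
    --size 1200
```

The HTTP/S Load Balancer swap was a separate change, configured through the load balancer's own resources (forwarding rules and backend services) rather than through the cluster itself.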

The lessons-learned from the US launch — generous capacity provisioning, the architectural swap to the latest version of Container Engine, along with the upgrade to the HTTP/S Load Balancer — paid off when the game launched without incident in Japan, where the number of new users signing up to play tripled the US launch two weeks earlier.

The Google Cloud GKE/Kubernetes team that supports many of our customers like Niantic

Other fun facts

• The Pokémon GO game world was brought to life using over a dozen services across Google Cloud.
• Pokémon GO was the largest Kubernetes deployment on Google Container Engine ever. Due to the scale of the cluster and accompanying throughput, a multitude of bugs were identified, fixed and merged into the open source project.
• To support Pokémon GO's massive player base, Google provisioned many tens of thousands of cores for Niantic's Container Engine cluster.
• Google's global network helped reduce the overall latency for Pokémon Trainers inhabiting the game's shared world. Game traffic travels over Google's private fiber network for most of its transit, delivering reliable, low-latency experiences for players worldwide. Even under the sea!

Niantic’s Pokémon GO was an all-hands-on-deck launch that required quick and highly informed decisions across more than a half-dozen teams. The sheer scale and ambition of the game required Niantic to tap architectural and operational best-practices directly from the engineering teams who designed the underlying products. On behalf of the Google CRE team, I can say it was a rare pleasure to be part of such a memorable product launch that created joy for so many people around the world.
Quelle: Google Cloud Platform

Google Cloud Platform sets a course for new horizons

Posted by Brian Stevens, Vice President, Google Cloud

As we officially move into the Google Cloud era, Google Cloud Platform (GCP) continues to bring new capabilities to more regions, environments, applications and users than ever before. Our goal remains the same: we want to build the most open cloud for all businesses and make it easy for them to build and run great software.

Today, we’re announcing new products and services to deliver significant value to our customers. We’re also sharing updates to our infrastructure to improve our ability to not only power Google’s own billion-user products, such as Gmail and Android, but also to power businesses around the world.

Delivering Google Cloud Regions for all
We’ve recently joined the ranks of Google’s billion-user products. Google Cloud Platform now serves over one billion end-users through its customers’ products and services.

To meet this growing demand, we’ve reached an exciting turning point in our geographic expansion efforts. Today, we announced the locations of eight new Google Cloud Regions — Mumbai, Singapore, Sydney, Northern Virginia, São Paulo, London, Finland and Frankfurt — and there are more regions to be announced next year.

By expanding to new regions, we deliver higher performance to customers. In fact, our recent expansion in Oregon resulted in up to 80% improvement in latency for customers. We look forward to welcoming customers to our new Cloud Regions as they become publicly available throughout 2017.

Embracing the multi-cloud world
Not only do applications running on GCP benefit from state-of-the-art infrastructure, but they also run on the latest and greatest compute platforms. Kubernetes, the container management system that we developed and open-sourced, reached version 1.4 earlier this week, and we're actively updating Google Container Engine (GKE) to this new version.

GKE customers will be the first to benefit from the latest Kubernetes features, including the ability to monitor cluster add-ons, one-click cluster spin-up, improved security, integration with Cluster Federation and support for the new Google Container-VM image (GCI).

Kubernetes 1.4 improves Cluster Federation to support straightforward deployment across multiple clusters and multiple clouds. In our support of this feature, GKE customers will be able to build applications that can easily span multiple clouds, whether they are on-prem, on a different public cloud vendor, or a hybrid of both.

We want GCP to be the best place to run your workloads, and Kubernetes is helping customers make the transition. That’s why customers such as Philips Lighting have migrated their most critical workloads to run on GKE.

Accelerating the move to cloud data warehousing and machine learning
Cloud infrastructure exists in the service of applications and data. Data analytics is critical to businesses, and the need to store and analyze data from a growing number of data sources has grown exponentially. Data analytics is also at the foundation for the next wave in business intelligence — machine learning.

The same principles of data analytics and machine learning apply to large-scale businesses: to derive business intelligence from your data, you need access to multiple data sources and the ability to seamlessly process it. That’s why GKE usage doubles every 90 days and is a natural fit for many businesses. Now, we’re introducing new updates to our data analytics and machine learning portfolio that help address this need:

• Google BigQuery, our fully managed data warehouse, has been significantly upgraded to enable widespread adoption of cloud data analytics. BigQuery support for Standard SQL is now generally available, and we've added new features that improve compatibility with more data tools than ever and foster deeper collaboration across your organization with simplified query sharing. We also integrated Identity and Access Management (IAM), which allows businesses to fine-tune their security policies. And to make BigQuery accessible for any business, we now offer unlimited flat-rate pricing that pairs unlimited queries with predictable data storage costs.
• Cloud Machine Learning is now available to all businesses. Integrated with our data analytics and storage cloud services such as Google BigQuery, Google Cloud Dataflow, and Google Cloud Storage, it enables businesses to easily train quality machine learning models on their own data at a faster rate. "Seeing is believing" with machine learning, so we're rolling out dedicated educational and certification programs to help more customers learn about the benefits of machine learning for their organization and give them the tools to put it into use.

To learn more about how to manage data across all of GCP, check out our new Data Lifecycle on GCP paper.

Introducing a new engagement model for customer support
At Google, we understand that the overall reliability and operational health of a customer's application is a shared responsibility. Today, we're announcing a new role on the GCP team: Customer Reliability Engineering (CRE). Designed to deepen our partnership with customers, CRE is composed of Google engineers who integrate with a customer's operations teams to share the reliability responsibilities for critical cloud applications. This integration represents a new model in which we share and apply our nearly two decades of operational expertise as an embedded part of a customer's organization. We'll have more to share about this soon.

One of the CRE model’s first tests was joining Niantic as they launched Pokémon GO, scaling to serve millions of users around the world in a span of a few days.

The Google Cloud GKE/Kubernetes team that supports many of our customers like Niantic

The public cloud is built on customer trust, and we understand that it’s a significant commitment for a customer to entrust a public cloud vendor with their physical infrastructure. By offering new features to help address customer needs and collaborating with them to usher in the future with tools like machine learning, we intend to accelerate the usability of the public cloud and bring more businesses into the Google Cloud fold. Thanks for joining us as we embark toward this new horizon.
Quelle: Google Cloud Platform