Snap Acquires Ctrl Me Robotics, A Small Drone Maker Based In LA

Snap has acquired Ctrl Me Robotics, a small drone manufacturer based in Los Angeles that should give the company added muscle in its push into hardware.

Snap’s interest in drone manufacturers has been rumored for months, and the company’s acquisition of Ctrl Me suggests it has more than just a passing interest in hardware. Its only previous foray was the introduction of Spectacles, a sunglasses product with a built-in camera that brought in relatively little revenue and largely served as a pre-IPO marketing push.

Ctrl Me shares a neighborhood with Snap. Both operate in Venice, California. Ctrl Me was in the process of winding down when it approached Snap about a possible deal, a source with knowledge of the situation told BuzzFeed News. The deal was something of an acquihire, with Ctrl Me founder and drone entrepreneur Simon Saito Nielsen joining Snap, along with some company assets and equipment. The purchase price was less than one million dollars, according to sources familiar with the deal.

Snap declined to comment.

Snap calls itself a “camera company,” and cracking into the hardware market could potentially open up a second source of meaningful revenue outside advertising, where it faces an uphill battle fighting against the likes of Google and Facebook, two companies so successful they've been nicknamed “The Duopoly.” The booming drone industry could be a good place to make a move, especially since drone photography and videography are popular on social media, where Snap is a major player.

Nielsen founded Ctrl Me Robotics, Inc. (pronounced “control me”) in 2013 to provide aerial footage capture tools to movie studios, using third-party drones and custom-built solutions. According to a video posted on YouTube in 2014, the company said it was developing products for the movie industry and planned to diversify into the oil industry and agriculture. In one Instagram post, Ctrl Me displayed a drone and an experimental gimbal that held a smartphone, allowing users to Snapchat from the air.

While Ctrl Me raised a small amount of seed funding from investors in 2014, the startup never gained traction, and it’s unclear if it ever started developing its own flying robot. Ctrl Me’s website has since been taken offline, while a phone number listed for the startup now redirects to a different Los Angeles-based drone company, Drone Fleet Aerospace Management.

After a poor showing in its first quarter on the market, Snap’s stock dropped significantly last month. Many of its investors are betting on the ability of Snap co-founder and CEO Evan Spiegel to produce more hit products. (Hardware also presents an opportunity to create things Facebook can’t easily copy.)

This isn’t Snap’s first dance with a drone company. Last year, the Los Angeles-based company also met with Lily Robotics, a much-hyped drone startup that sold $34 million in pre-orders before it filed for bankruptcy in January.

Source: BuzzFeed

Docker Security at PyCon: Threat Modeling & State Machines

The Docker Security Team was out in force at PyCon 2017 in Portland, OR, giving two talks focused on helping the Python community achieve better security. First up were David Lawrence and Ying Li with their talk, “Introduction to Threat Modelling”.

Threat Modelling is a structured process that aids an engineer in uncovering security vulnerabilities in an application design or implemented software. The great majority of software grows organically, gaining new features as some critical mass of users requests them. These features are often implemented without full consideration of how they may impact every facet of the system they are augmenting.
Threat modelling aims to increase awareness of how a system operates, and in doing so, identify potential vulnerabilities. The process is broken up into three steps: data collection, analysis, and remediation. An effective way to run the process is to have a security engineer sit with the engineers responsible for design or implementation and guide a structured discussion through the three steps.
For the purpose of this article, we’re going to consider how we would threat model a house, since the process applies to real-world scenarios as well as to software.

Data Collection
Five categories of data must be collected in a threat model:

External Dependencies – services that elements of the model will interact with, but that will not be decomposed during the course of the current threat model. Our house has external dependencies on an alarm monitoring service and various utilities: power, water, etc.
Entry Points – the ways in which your system can receive input and provide output. A completely closed system is secure by design, but often not very useful. Our house has three intentional entry points: the front and back doors, and the garage door. It also has a number of unintentional, but usable, entry points: the windows! For the purposes of this model, we’ll keep things simple and assume that, as paranoid security wonks, we’ve nailed our windows shut.
Assets – anything we care about protecting in our system: both things an attacker can carry away, like sensitive user data, and resources an attacker might consume. In our house, we care about valuables, important papers, and irreplaceable data like family photos. We also care about our utility bills and want to ensure we’re not paying for somebody else’s car wash. Not everything is an asset, though; we don’t care about toilet paper, as long as there’s at least one roll left.
Trust Levels – the tiers of access within the system. We have four trust levels concerning our house:

The residents – people who live in the house and have the highest levels of access.
Guests – friends and family invited to stay overnight.
Visitors – people invited into the home but restricted to common areas like the living room, kitchen, and backyard.
Passers-by – strangers who pass by the house but will not be invited inside.

Data Flows – how data moves around the system. The primary data flow of our house is shown below. What we see is that there are Trust Boundaries at our entry points, indicating a change in the trust associated with the data that made it across the boundary. In this case, the boundary itself is the lock on the doors, and the possession of a garage door opener. Once somebody has crossed one of these boundaries though, they have full access to the house, garage, and associated storage.
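To make the collected data concrete, here is a minimal sketch in Python. The structure and the field names are our own invention for illustration, not part of any formal threat modelling tool; it simply records the five categories for the house, including the data flow that lacks a trust boundary:

```python
# A hypothetical record of the five data-collection categories
# for the house threat model described above.
house_model = {
    "external_dependencies": ["alarm monitoring service", "power", "water"],
    "entry_points": ["front door", "back door", "garage door"],
    "assets": ["valuables", "important papers", "family photos", "utilities"],
    "trust_levels": ["residents", "guests", "visitors", "passers-by"],
    # Each data flow is (source, destination, trust boundary crossed);
    # None means no boundary exists on that flow.
    "data_flows": [
        ("outside", "house", "door lock"),
        ("outside", "garage", "garage door opener"),
        ("garage", "house", None),
    ],
}

# Flows with no trust boundary are the anomalies to inspect during analysis.
anomalies = [flow for flow in house_model["data_flows"] if flow[2] is None]
```

Writing the flows down this way makes the garage-to-house gap jump out mechanically rather than relying on someone spotting it in a diagram.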

Analysis
From the data collected, and a deep understanding of how the system works, we can begin to look for and inspect anomalies. For example, if a data flow indicates there is no trust boundary between two processes, this should be carefully analyzed. In our data flow diagram, we see there is no trust boundary between the house and the garage. This is probably undesirable but let us further analyze the data to objectively establish why and how we’ll fix it.
While there are many ways to analyze and score vulnerabilities, we have found the STRIDE classification system and the DREAD scoring system to be effective and straightforward. STRIDE is an acronym denoting six categories of vulnerability:

Spoofing – an entity pretending to be something it’s not, generally by capturing a legitimate user’s credentials
Tampering – the modification of data persisted within the system
Repudiation – the ability to perform operations that cannot be tracked, or for which the attacker can actively cover their tracks
Information Disclosure – the acquisition of data by a trust level that should not have access to it
Denial of Service – preventing legitimate users from accessing the service
Elevation of Privilege – an attack aimed at allowing an entity of lower trust level to perform actions restricted to a higher trust level

One takes each category and looks for behaviour permitted by the system that creates vulnerabilities within that category. It is common to find a single vulnerability that spans multiple STRIDE categories. Some example vulnerabilities for our house may be:

Spoofing: When a plumber knocks on our door, if we didn’t schedule them directly (maybe they claim a housemate called them out), we don’t necessarily know they are legitimate. They could just be trying to gain entry to steal our valuables while we’re not looking.
Tampering: One of our housemates likes to smoke but doesn’t like going outside in inclement weather, so they disable the smoke alarm in their bedroom.
Repudiation: Some neighbourhood kid kicked a ball through the front window but we have no way to prove who it was.
Information Disclosure: We only just moved in and haven’t gotten around to installing curtains yet. Anybody walking by can see who is in the house!
Denial of Service: a local vandal thought it would be funny to troll the new neighbours by squirting glue in our locks… now we can’t get the doors open and we’re stuck outside.
Elevation of Privilege: It hasn’t happened yet, but we’ve heard garage doors are pretty insecure. Somebody who can get our garage door open can immediately get into the rest of the house.

Having defined as many vulnerabilities as we can find, we score each one. The DREAD system defines five metrics on which a vulnerability must be scored. Each is generally scored on a consistent scale, often 1 to 10, with 1 being the least severe and 10 the most severe. The sum of the scores then allows us to prioritize our vulnerabilities relative to each other.
The 5 DREAD metrics are:

Damage: how bad the financial and reputational damage to your organization and its users would be.
Reproducibility: how easy it is to trigger the vulnerability. Most vulnerabilities will score a “10” here, but those that, for example, involve timing attacks would generally receive lower scores, as they may not be triggered 100% of the time.
Exploitability: a measure of what resources are required to use the attack. The lowest score of 1 would generally be reserved for nation states, while a score of 10 might indicate the attack could be done through something as simple as URL manipulation in a browser.
Affected Users: a measure of how many users are affected by the attack. For example, an attack might only affect a specific class of user.
Discoverability: how easy it is to uncover the vulnerability. A score of 10 would indicate it’s easily findable through standard web scraping tools and open source pentest tools. At the other end of the scale, a vulnerability requiring intimate knowledge of a system’s internals would likely score a 1.
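As a sketch of how scoring supports prioritization, a DREAD score can be modelled as a small class whose total is the sum of the five metrics. The class, its field names, and the example scores below are our own illustration, not from the talk:

```python
from dataclasses import dataclass

@dataclass
class DreadScore:
    """One vulnerability scored on the five DREAD metrics (1-10 each)."""
    name: str
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def total(self) -> int:
        # The sum of the five metrics gives the relative priority.
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability)

# Illustrative scores for two of the house vulnerabilities above.
vulns = [
    DreadScore("disabled smoke alarm", 8, 10, 3, 10, 2),
    DreadScore("glued locks", 5, 7, 9, 10, 10),
]

# Highest total first: this is our remediation priority order.
prioritized = sorted(vulns, key=DreadScore.total, reverse=True)
```

Keeping the metrics as separate fields, rather than storing only the total, preserves the reasoning behind each score for later review.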

Let’s score our Information Disclosure vulnerability against our Elevation of Privilege vulnerability to see how they compare.

Metric – Information Disclosure score / Elevation of Privilege score

Damage – 1 / 10. Knowing who is in our house is very low damage, and this could also be observed from who enters and leaves. Gaining access to our house, however, is severe.

Reproducibility – 10 / 10. Both vulnerabilities can be reproduced 100% of the time. There are no timing elements involved.

Exploitability – 10 / 5. While it’s easy to look in the windows, it’s not as easy to get hands on a garage door opener to effect the initial compromise of the garage. We’re relying on our car to be somewhat secure, and for none of our residents, guests, or visitors to leave the garage open.

Affected Users – 10 / 10. All residents, guests, and visitors are affected by both vulnerabilities.

Discoverability – 10 / 7. It’s obvious to everyone that there are no window coverings. It is less obvious to an external observer that there is no boundary between the garage and house. It might be observable from outside under the right conditions, so we’re estimating this to be easy to discover, but not entirely trivial.
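Summing the five metrics gives Information Disclosure a total of 41 and Elevation of Privilege a total of 42, so the missing garage boundary edges ahead in priority. The arithmetic, as a quick sketch:

```python
# DREAD totals for the two scored vulnerabilities, using the scores above.
# Metric order: Damage, Reproducibility, Exploitability, Affected Users,
# Discoverability.
scores = {
    "information disclosure": [1, 10, 10, 10, 10],
    "elevation of privilege": [10, 10, 5, 10, 7],
}

totals = {name: sum(metrics) for name, metrics in scores.items()}

# Highest total first: remediation priority.
priority = sorted(totals, key=totals.get, reverse=True)
```

The two totals are close, which is itself useful information: both vulnerabilities deserve attention, with neither obviously deferrable.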

Remediation
For each of our categories in STRIDE there is an associated class of security control used to mitigate it. Exactly how the control is implemented will depend on the system being modelled.

Spoofing – Authentication; the ability to confirm the validity of the request. We would agree with our housemates to always use a plumber from a specific service, and to call the head office to confirm the plumber’s credentials.
Tampering – Integrity; we would regularly, and randomly, audit the smoke alarms in the house to ensure nobody has disabled them.
Repudiation – Non-Repudiation; we’re going to install some security cameras to ensure we capture images of the next kid to kick a ball through the window.
Information Disclosure – Confidentiality; we’ll install some thick curtains that can be closed when we don’t want the inside of the house to be outwardly visible.
Denial of Service – Availability; we’re going to install a lock that has both traditional keys and a digital code. This gives us multiple ways to unlock the door, should one be broken or otherwise fail.
Elevation of Privilege – Authorization; we’re going to install a single cylinder lock between the house and the garage, requiring a key on the garage side but no key on the house side. This prevents garage access being pivoted to house access, but still makes it easy to move from the highly privileged house to the relatively less privileged garage.
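The category-to-control pairing above is fixed, which makes it easy to encode. A small sketch (the dict and function names are our own):

```python
# Canonical mapping from each STRIDE category to its mitigating
# control class, as listed above.
stride_controls = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-Repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def control_for(category: str) -> str:
    """Look up the control class that mitigates a STRIDE category."""
    return stride_controls[category]
```

A table like this is a handy checklist during remediation: each vulnerability's STRIDE categories tell you immediately which classes of control to consider.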

Having completed all these steps, it’s time to go and implement the actual fixes!
Look out for our next Security Team blog post on Ashwini Oruganti’s talk “Designing Secure APIs with State Machines”.

The post Docker Security at PyCon: Threat Modeling & State Machines appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

OpenShift Commons Briefing #73: Securing Applications on OpenShift and Kubernetes

OpenShift Commons Briefing Summary In this session, Aqua’s Tsvi Korren gives an overview of Aqua’s framework for effective application security in a containerized environment. It begins in the development process as images are built, continuing through assurance of image authorization, and protects running containers. Even in containers, application security still matters. Running applications in containers […]
Source: OpenShift

Amazon AppStream 2.0 now allows you to use your VPC security groups to control network traffic

Amazon AppStream 2.0 now allows you to control network traffic between your streaming instances and the resources in your VPC by using security groups. Security groups provide granular, network-level access controls to streaming instances, and you can use them to manage users’ access to databases, license servers, file shares, or application servers from the streamed applications they use. 
Source: aws.amazon.com

The British Prime Minister Wants To Pressure Tech Companies To Fight Terrorism

Today at the G7 summit of world leaders in Sicily, British Prime Minister Theresa May called on those in attendance — including President Trump — to pressure social networks to crack down on terroristic and extremist content.

May's decision to call for a session on digital policing comes just days after a deadly suicide attack in Manchester on Monday evening that killed 22 people and wounded dozens more. An official close to May told the Evening Standard that the threat of harm from terrorists and extremists has moved from “the battlefield to the internet.” The official also noted that internet materials circulated by extremist organizations “has in the past been linked to acts of violence and the less of this material that is on the internet, that is clearly for the better.”

May's call to action today during the Summit did not single out any tech companies specifically. Instead, she urged world leaders to put pressure on “communication service providers and social media companies to substantially increase their efforts to address terrorist content.”

And this morning The Guardian reported that May “apparently had the backing of Trump” for the session.

When reached for comment on May's call to action, some of tech's biggest companies expressed their desire to partner with governments, while also highlighting the work they've been doing to try to combat extremism.

“We are committed to working in partnership with governments and NGOs to tackle these challenging and complex problems, and share the government’s commitment to ensuring terrorists do not have a voice online,” Peter Barron, Google's VP Communications for Europe, the Middle East, and Africa, told BuzzFeed News in a statement.

The rest of the statement is below:

“We are already working with industry colleagues on plans for an international forum to help accelerate and strengthen our existing work in this area. We employ thousands of people and invest hundreds of millions of pounds to fight abuse on our platforms, and will continue investing and adapting to ensure we are part of the solution to addressing these challenges”

Monika Bickert, the Head of Global Policy Management at Facebook, touted the company's own technology and human reviewers in its fight to police digital extremism on its platform and urged that the problem “can only be tackled with strong partnerships.”

Here's Facebook's full statement:

“We want to provide a service where people feel safe. That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it — and if there is an emergency involving imminent harm to someone's safety, we notify law enforcement. Online extremism can only be tackled with strong partnerships. We have long collaborated with policymakers, civil society, and others in the tech industry, and we are committed to continuing this important work together.”

Others, like the Anti-Defamation League's CEO Jonathan Greenblatt, praised May's move.

And while May's session at the G7 has been universally lauded, the task of ridding the internet of terroristic and extremist content remains a herculean problem that Silicon Valley has struggled to solve for years. In March, Twitter — which has vigorously policed its platform for terroristic content — announced it purged 376,890 accounts promoting terrorism between July and the end of December 2016. Since August 2015, Twitter says it has removed 636,248 accounts for terrorism alone.

And just a few months ago, Facebook, Microsoft, Twitter, and YouTube partnered to create a shared industry database to police terroristic content. The database contains “hashes,” or unique digital fingerprints, for images and videos that are produced or simply used by terrorist organizations, including ISIS. The goal of the partnership is to help all four companies identify and slow the spread of terrorist content across the internet.

Twitter, AT&T, and Comcast did not immediately respond to a request for comment.

Source: BuzzFeed