Intelligence Committee Condemns Snowden In Scathing Report

Mathias Loevgreen Bojesen / AFP / Getty Images

Edward Snowden isn't a whistleblower, nor is he a patriot. He's “a criminal” — at least according to every member of the House Permanent Select Committee on Intelligence.

Following what lawmakers have described as an exhaustive two-year investigation, the committee released a scathing report Thursday condemning the former NSA contractor as a liar and a thief whose disclosures have endangered national security in ways we have yet to understand.

“[T]he vast majority of the documents he stole have nothing to do with programs impacting individual privacy interests — they instead pertain to military, defense, and intelligence programs of great interest to America's adversaries,” states an unclassified summary of the report.

The summary advances several key findings, which the committee presents as a rebuttal to the idea of Snowden as an earnest whistleblower working to reform an oppressive surveillance state. The committee found that Snowden's disclosures diminished the government's ability to collect information about foreign intelligence targets, that he failed to raise his legal or moral concerns through any official government channel, and that he “was, and remains, a serial exaggerator and fabricator.”

“Edward Snowden is no hero — he’s a traitor who willfully betrayed his colleagues and his country,” said committee chair Devin Nunes in a statement. “I look forward to his eventual return to the United States, where he will face justice for his damaging crimes.” Ranking member Adam Schiff said of Snowden, “The Committee’s Review — a product of two years of extensive research — shows his claims to be self-serving and false, and the damage done to our national security to be profound.”

Upon the report's release, Snowden took to Twitter to rebut its accusations. “The American people deserve better,” he wrote. “This report diminishes the committee.”

The classified report on Snowden's disclosures comes on the heels of a public campaign by the ACLU, Human Rights Watch, and others calling on President Obama to pardon him. The House Intelligence Committee today took issue with that effort as well. In a separate document signed by all its members, the committee urged Obama not to pardon Snowden, who they say “perpetrated the largest and most damaging public disclosure of classified information in our nation's history.”

While the four-page summary is available to the public, the full 36-page report is classified; however, every member of the House of Representatives will have access to it, according to the Intelligence Committee.

Quelle: BuzzFeed

The Samsung Galaxy Note 7 Has Been Formally Recalled In The US For Explosion Risks

Roughly one million Samsung Galaxy Note 7 phones have been formally recalled by the US Consumer Product Safety Commission (CPSC) due to the danger of the phone’s lithium ion batteries overheating and exploding.

According to the statement, users are entitled to a replacement or a refund of their phones, which retail for between $850 and $890.

Any phone sold before September 15, 2016, is subject to recall. If you own a Samsung Note 7 and want to find out whether your phone has been recalled, check the IMEI number on the phone and either call Samsung — preferably from a separate phone — or visit samsung.com.

The recall statement reads, “Samsung has received 92 reports of the batteries overheating in the U.S., including 26 reports of burns and 55 reports of property damage, including fires in cars and a garage.” Mexico and Canada have also recalled the Note 7, which went on sale August 19.

As early as September 2, the CPSC issued a warning about the potential for the battery cell in the phone to explode. Samsung by that point had already said it would “voluntarily replace” users’ devices because of the dangerous battery.

Following the cautionary statement, US airlines have been asking passengers to turn off their Note 7 phones for the duration of flights. In a September 9 statement, the CPSC recommended that users stop charging the device altogether and power it down. The current recall reiterates those recommendations.

However, while sales of the Note 7 dropped after reports of exploding batteries started surfacing, data from Apteligent shows that most people who already owned the phone hadn’t stopped using it.

Koh Dong-jin, president of Samsung's mobile business, said at a September 2 press conference, “It has been confirmed that it was a battery cell problem. There was a tiny problem in the manufacturing process, so it was very difficult to find out.”

The debacle has already cost Samsung $25 billion in market value, and the recall costs are estimated to be over $1 billion.

The recall could inspire other big markets, namely China, to recall the phone. Chinese media have noted that while Samsung has recalled 2.5 million phones in 13 countries, it has recalled just under 2,000 phones in China.


Quelle: BuzzFeed

Red Hat Announces Schedule and Speaker Line-Up for OpenShift Commons Gathering November 7th in Seattle

The OpenShift Commons Gathering will bring together the brightest technical minds to discuss the future of OpenShift and its related upstream open source projects. The 2016 event will gather developers, DevOps professionals, and SysAdmins to explore the next steps in making container technologies successful and secure.
Quelle: OpenShift

AWS Service Catalog updated access policies now available

Starting today, you can set access-level policies on AWS Service Catalog post-launch actions. Previously, users would have access either to any provisioned product in the account or only to those which they themselves launched. Now, you can customize the access level for each action, with support for user, role, and account levels. This feature allows users to be granted access to view, update, terminate, and manage provisioned products created under their role or the account to which they are logged in. For more information about policies for these actions, see the AWS Service Catalog documentation including the example policies.
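As a sketch of what such an action-level policy might look like, the IAM-style statement below grants a user access to view, update, and terminate only the provisioned products they launched themselves. The action names and the "servicecatalog:userLevel" condition key follow the pattern shown in the AWS Service Catalog documentation, but treat the exact values as assumptions to verify there:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "servicecatalog:DescribeRecord",
        "servicecatalog:UpdateProvisionedProduct",
        "servicecatalog:TerminateProvisionedProduct"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {"servicecatalog:userLevel": "self"}
      }
    }
  ]
}
```

Swapping the condition key for a role- or account-level variant would widen access to products launched under the same role or account.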
Quelle: aws.amazon.com

Security through Community: Introducing the Vendor Security Alliance

Today Docker is proud to announce that we are a founding member of the Vendor Security Alliance (VSA), a coalition formed to help organizations streamline their vendor evaluation processes by establishing a standardized questionnaire for appraising a vendor's security and compliance practices. The VSA was established to solve a fundamental problem: how can IT teams conform to their existing security practices when procuring and deploying third-party components and platforms?
The VSA solves this problem by developing a required set of security questions that will allow vendors to demonstrate to their prospective customers that they are doing a good job with security and data handling. Good security is built on great technology paired with processes and policies. Until today, there was no consistent way to discern if all these things were in place. Doing a proper security evaluation today tends to be a hard, manual process. A large number of key questions come to mind when gauging how well a third-party company manages security.
As an example, these are the types of things that IT teams must be aware of when assessing a vendor’s security posture:

Do they securely handle sensitive customer data?
Do they have the ability to detect when attacks occur on their infrastructure?
Do they train their developers on secure coding best practices?
Do they follow industry best practices for configuring the systems?

Docker joins the Vendor Security Alliance's founding team of security-conscious companies, including Uber, Dropbox, Palantir, Twitter, Square, Atlassian, GoDaddy, and Airbnb. The founding team has worked together to produce a pragmatic and approachable questionnaire, drawing on a wide variety of backgrounds and experiences across mobile, enterprise, and infrastructure companies, which has informed a strong common security lexicon. We expect this questionnaire to become the basis for companies to understand their security posture through tangible, actionable questions that help improve software security across all industries. In service of that goal, we are releasing the questionnaire so that it is freely available to everyone. At the beginning of October, a copy will be available at https://www.vendorsecurityalliance.org/.
As a founding member of the Vendor Security Alliance, Docker has taken an important step towards helping companies secure their processes and infrastructure. At Docker we talk a lot about helping organizations build secure infrastructure using Docker's tools like Docker Content Trust and the Docker Engine's runtime isolation, both of which were influenced by diligent feedback from our customers. But technology isn't the whole equation. Assessing yourself against best practices and understanding how well your vendors manage their programs is an important step when it comes to building a security program at any company. Docker will also be using this questionnaire to assess our own vendors, while looking outward to see how it will help the industry with shared practices and consistent evaluation criteria.


The post Security through Community: Introducing the Vendor Security Alliance appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Six Google Cloud Platform features that can save you time and money

Posted by Greg Wilson, Head of Developer Advocacy, Google Cloud Platform

Google Cloud Platform (GCP) has launched a ton of new products and features lately, but I wanted to call out six specific features that were designed specifically to help save customers money (and time).

VM Rightsizing Recommendations
Rightsizing your VMs is a great way to avoid overpaying — and underperforming. By monitoring CPU and RAM usage over time, Google Compute Engine’s VM Rightsizing Recommendations feature helps show you at a glance whether your machines are the right size for the work they perform. You can then accept the recommendation and resize the VM with a single click.

Docs
Google Compute Engine VM Rightsizing Recommendations announcement
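Compute Engine's actual recommendation algorithm isn't public, but the idea can be sketched with a toy recommender; the percentile and thresholds below are invented purely for illustration:

```python
def recommend_vcpus(cpu_samples, current_vcpus):
    """Toy rightsizing sketch: suggest a smaller VM when sustained CPU
    usage is low and a larger one when it is high.

    cpu_samples -- per-minute CPU utilization readings, 0.0 to 1.0
    """
    # Use a high percentile so brief idle periods don't trigger a downsize.
    ordered = sorted(cpu_samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]

    if p95 < 0.3 and current_vcpus > 1:
        return current_vcpus // 2   # sustained low usage: halve the VM
    if p95 > 0.9:
        return current_vcpus * 2    # sustained high usage: double it
    return current_vcpus            # current size looks right

# An 8-vCPU VM that idles most of the time gets a downsize suggestion:
print(recommend_vcpus([0.1] * 95 + [0.8] * 5, 8))  # -> 4
```

The real feature additionally monitors RAM and lets you apply the resize with one click in the console.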

Cloud Shell

Google Cloud Shell is a free VM for GCP customers, integrated into the web console, that you can use to manage your GCP resources, test, build, and more. Cloud Shell comes with many common tools pre-installed, including the Google Cloud SDK, Git, Mercurial, Docker, Gradle, Make, Maven, npm, nvm, pip, iPython, the MySQL client, the gRPC compiler, Emacs, Vim, Nano and more. It also has language support for Java, Go, Python, Node.js, PHP and Ruby, and has built-in authorization to access GCP Console projects and resources.

Google Cloud Shell overview
Google Cloud Shell documentation
Using Cloud Shell: YouTube demo
Google Cloud Shell GA announcement

Custom Machine Types
Compute Engine offers VMs in lots of different sizes, but when there's not a perfect fit, you can create a custom machine type with exactly the number of cores and the amount of memory you need. Custom machine types have saved some customers as much as 50% over a standard-sized instance.

Google Custom Machine Types overview

Google Compute Engine Custom Machine Types documentation

Creating Custom Google Compute Engine Instances: YouTube Cloud Minute demo
Custom Machine Types announcement

Preemptible VMs 

For batch jobs and fault-tolerant workloads, preemptible VMs can cost up to 70% less than normal VMs. Preemptible VMs fill the spare capacity in our datacenters, but let us reclaim them as needed, helping us optimize our datacenter utilization. This allows the pricing to be highly affordable. 
Preemptible VMs overview
Preemptible VMs docs
Preemptible VMs announcement 
Preemptible VMs price drop

Cloud SQL automatic storage increases
When this Cloud SQL feature is enabled, the available database storage is checked every 30 seconds, and more is added as needed in 5GB to 25GB increments, depending on the size of the database. Instead of having to provision storage to accommodate future database growth, the storage grows as the database grows. This can reduce the time needed for database maintenance and save on storage costs.
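As a toy model of that behavior: the 30-second check interval and the 5GB-25GB increment range come from the feature description above, while the exact scaling formula below is a made-up stand-in for illustration:

```python
def storage_increment_gb(capacity_gb):
    # Made-up scaling rule: grow in steps between 5 GB and 25 GB,
    # with larger databases taking larger steps.
    return max(5, min(25, capacity_gb // 20))

def grow_if_needed(capacity_gb, used_gb, threshold=0.9):
    # The real service performs a check like this every 30 seconds.
    if used_gb >= threshold * capacity_gb:
        capacity_gb += storage_increment_gb(capacity_gb)
    return capacity_gb

print(grow_if_needed(100, 95))    # small DB grows by 5 GB  -> 105
print(grow_if_needed(1000, 950))  # large DB grows by 25 GB -> 1025
```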

Cloud SQL automatic storage increases documentation

Online resizing of persistent disks without downtime
When a Google Compute Engine persistent disk is reaching full capacity, you can resize it in-place, without causing any downtime.

Google Cloud Persistent Disks announcement
Google Cloud Persistent Disks documentation
Adding Persistent Disks: YouTube demo

As you can see, there are plenty of ways to save money and improve performance with GCP features. Have others? Let us know in the comments.

Quelle: Google Cloud Platform

Azure DocumentDB powers the modern marketing intelligence platform

Affinio is an advanced marketing intelligence platform that enables brands to understand their users at a deeper and richer level. Affinio's learning engine extracts marketing insights for its clients by mining billions of points of social media data. To store and process billions of social network connections without the overhead of database management, partitioning, and indexing, the Affinio engineering team chose Azure DocumentDB.

You can learn more about Affinio’s journey in this newly published case study.  In this blog post, we provide an excerpt of the case study and discuss some effective patterns for storing and processing social network data.


Why are NoSQL databases a good fit for social data?

Affinio's marketing platform extracts data from Twitter and other large social networks to feed its learning engine and learn insights about users and their interests. The biggest dataset consists of approximately one billion social media profiles, growing at 10 million per month. Affinio also needs to store and process a number of other feeds, including Twitter status messages (tweets), geo-location data, and machine-learning results about which topics are likely to interest which users.

A NoSQL database is a natural choice for these data feeds for a number of reasons:

The APIs of popular social networks produce data in JSON format.
The data volume is in the TBs, and needs to be refreshed frequently (with both the volume and frequency expected to increase rapidly over time).
Data from multiple social media producers is processed downstream, and each social media channel has its own schema that evolves independently.
And crucially, a small development team needs to be able to iterate rapidly on new features, which means that the database must be easy to setup, manage, and scale.

Why does Affinio use DocumentDB over AWS DynamoDB and Elasticsearch?

The Affinio engineering team initially built their storage solution on top of Elasticsearch on AWS EC2 virtual machines. While Elasticsearch addressed their need for scalable JSON storage, they realized that setting up and managing their own Elasticsearch servers took precious time away from their development team. They then evaluated Amazon's DynamoDB service, which is fully managed but did not have the query capabilities that Affinio needed.

Affinio then tried Microsoft Azure DocumentDB, Microsoft's planet-scale NoSQL database service. DocumentDB is a fully managed NoSQL database with automatic indexing of JSON documents, elastic scaling of throughput and storage, and rich query capabilities, meeting all of Affinio's requirements for functionality and performance. As a result, Affinio decided to migrate its entire stack off AWS and onto Microsoft Azure.

“Before moving to DocumentDB, my developers would need to come to me to confirm that our Elasticsearch deployment would support their data or if I would need to scale things to handle it. DocumentDB removed me as a bottleneck, which has been great for me and them.”

-Stephen Hankinson, CTO, Affinio

Modeling Twitter Data in DocumentDB – An Example

As an example, let's look at how Affinio stores data from Twitter status messages in DocumentDB. Here's a sample JSON status message (truncated for brevity):

{
"created_at":"Fri Sep 02 06:43:15 +0000 2016",
"id":771599352141721600,
"id_str":"771599352141721600",
"text":"RT @DocumentDB: Fresh SDK! DocumentDB SDK v1.9.4 just released!",
"user":{
"id":2557284469,
"id_str":"2557284469",
"name":"Azure DocumentDB",
"screen_name":"DocumentDB",
"location":"",
"description":"A blazing fast, planet scale NoSQL service delivered by Microsoft.",
"url":"http://t.co/30Tvk3gdN0"
}
}

Storing this data in DocumentDB is straightforward. As a schema-less NoSQL database, DocumentDB consumes JSON data directly from the Twitter APIs without requiring schema or index definitions. As a developer, the primary considerations for storing this data are the choice of partition key and the handling of any special query patterns (in this case, searching within text messages). We'll look at how Affinio addresses these two.

Picking a good partition key:  DocumentDB partitioned collections require that you specify a property within your JSON documents as the partition key. Using this partition key value, DocumentDB automatically distributes data and requests across multiple physical servers. A good partition key has a number of distinct values and allows DocumentDB to distribute data and requests across a number of partitions. Let’s take a look at a few candidates for a good partition key for social data like Twitter status messages.

"created_at" – has a number of distinct values and is useful for accessing data for a certain time range. However, since new status messages are inserted based on the created time, this could potentially result in hot spots for certain time value like the current time
"id" – this property corresponds to the ID for a Twitter status message. It is a good candidate for a partition key, because there are a large number of unique users, and they can be distributed somewhat evenly across any number of partitions/servers
"user.id" – this property corresponds the ID for a Twitter user. This was ultimately the best choice for a partition key because not only does it allow writes to be distributed, it also allows reads for a certain user’s status messages to be efficiently served via queries from a single partition

With "user.id" as the partition key, Affinio created a single DocumentDB partitioned collection provisioned with 200,000 request units per second of throughput (both for ingestion and for querying via their learning engine).

Searching within the text message: Affinio needs to be able to search for words within status messages, but doesn't need advanced text analysis like ranking. Affinio runs a Lucene tokenizer on the relevant fields and stores the resulting terms as an array inside the JSON document in DocumentDB. For example, "text" can be tokenized into a "text_terms" array containing the tokens/words of the status message. Here's an example of what this looks like:

{
"text":"RT @DocumentDB: Fresh SDK! DocumentDB dotnet SDK v1.9.4 just released!",
"text_terms":[
"rt",
"documentdb",
"dotnet",
"sdk",
"v1.9.4",
"just",
"released"
]
}
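A minimal stand-in for that tokenization step (a real Lucene analyzer also handles stop words, stemming, and more) might look like this in Python:

```python
import re

def tokenize(text):
    """Lowercase the message and split it into word-like terms, keeping
    dots inside tokens such as version numbers; de-duplicate while
    preserving first-seen order."""
    terms = re.findall(r"\w[\w.]*", text.lower())
    return list(dict.fromkeys(terms))

def matches(doc, term):
    """In-application equivalent of an ARRAY_CONTAINS check on text_terms."""
    return term in doc.get("text_terms", [])

doc = {"text": "RT @DocumentDB: Fresh SDK! DocumentDB dotnet SDK v1.9.4 just released!"}
doc["text_terms"] = tokenize(doc["text"])
print(doc["text_terms"])
# -> ['rt', 'documentdb', 'fresh', 'sdk', 'dotnet', 'v1.9.4', 'just', 'released']
print(matches(doc, "dotnet"))  # -> True
```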

Since DocumentDB automatically indexes all paths within JSON including arrays and nested properties, it is now possible to query for status messages with certain words in them like “documentdb” or “dotnet” and have these served from the index. For example, this is expressed in SQL as:

SELECT * FROM status_messages s WHERE ARRAY_CONTAINS(s.text_terms, "documentdb")

Next Steps

In this blog post, we looked at why Affinio chose Azure DocumentDB for their market intelligence platform, and some effective patterns for storing large volumes of social data in DocumentDB.

Read the Affinio case study to learn more about how Affinio harnesses DocumentDB to process terabytes of social network data, and why they chose DocumentDB over Amazon DynamoDB and Elasticsearch.
Learn more about Affinio from their website.
If you’re looking for a NoSQL database to handle the demands of modern marketing, ad-technology and real-time analytics applications, try out DocumentDB using your free trial, or schedule a 1:1 chat with the DocumentDB engineering team.  
Stay up-to-date on the latest DocumentDB news and features by following us on Twitter @DocumentDB.

Quelle: Azure