AWS Schema Conversion Tool Copies Schemas and Optimizes for the Cloud

We are pleased to announce that you can now use the AWS Schema Conversion Tool to copy your existing database schema from a legacy database to a new database on EC2 or RDS for homogeneous migrations. The conversion engine has also been enhanced to offer even more automated conversions should you wish to switch from a commercial database to a cloud-native, open-source solution.
Quelle: aws.amazon.com

Dynamic Provisioning and Storage Classes in Kubernetes

Storage is a critical part of running containers, and Kubernetes offers some powerful primitives for managing it. Dynamic volume provisioning, a feature unique to Kubernetes, allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users. This feature was introduced as alpha in Kubernetes 1.2, and has been improved and promoted to beta in the latest release, 1.4. This release makes dynamic provisioning far more flexible and useful.

What's New?

The alpha version of dynamic provisioning only allowed a single, hard-coded provisioner to be used in a cluster at once. This meant that when Kubernetes determined storage needed to be dynamically provisioned, it always used the same volume plugin to do provisioning, even if multiple storage systems were available on the cluster. The provisioner to use was inferred based on the cloud environment – EBS for AWS, Persistent Disk for Google Cloud, Cinder for OpenStack, and vSphere Volumes on vSphere. Furthermore, the parameters used to provision new storage volumes were fixed: only the storage size was configurable. This meant that all dynamically provisioned volumes would be identical, except for their storage size, even if the storage system exposed other parameters (such as disk type) for configuration during provisioning.

Although the alpha version of the feature was limited in utility, it allowed us to "get some miles" on the idea, and helped determine the direction we wanted to take.

The beta version of dynamic provisioning, new in Kubernetes 1.4, introduces a new API object, StorageClass.
Multiple StorageClass objects can be defined, each specifying a volume plugin (aka provisioner) to use to provision a volume and the set of parameters to pass to that provisioner when provisioning. This design allows cluster administrators to define and expose multiple flavors of storage (from the same or different storage systems) within a cluster, each with a custom set of parameters. It also ensures that end users don't have to worry about the complexity and nuances of how storage is provisioned, but still have the ability to select from multiple storage options.

How Do I Use It?

Below is an example of how a cluster administrator would expose two tiers of storage, and how a user would select and use one. For more details, see the reference and example docs.

Admin Configuration

The cluster admin defines and deploys two StorageClass objects to the Kubernetes cluster:

```yaml
kind: StorageClass
apiVersion: extensions/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```

This creates a storage class called "slow" which will provision standard disk-like Persistent Disks.

```yaml
kind: StorageClass
apiVersion: extensions/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```

This creates a storage class called "fast" which will provision SSD-like Persistent Disks.

User Request

Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. For the beta version of this feature, this is done via the volume.beta.kubernetes.io/storage-class annotation.
The value of this annotation must match the name of a StorageClass configured by the administrator. To select the "fast" storage class, for example, a user would create the following PersistentVolumeClaim:

```json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1",
    "annotations": {
      "volume.beta.kubernetes.io/storage-class": "fast"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "30Gi"
      }
    }
  }
}
```

This claim will result in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume will be destroyed.

Defaulting Behavior

Dynamic provisioning can be enabled for a cluster such that all claims are dynamically provisioned without a storage class annotation. This behavior is enabled by the cluster administrator by marking one StorageClass object as "default". A StorageClass can be marked as default by adding the storageclass.beta.kubernetes.io/is-default-class annotation to it. When a default StorageClass exists and a user creates a PersistentVolumeClaim without a storage-class annotation, the new DefaultStorageClass admission controller (also introduced in v1.4) automatically adds the class annotation pointing to the default storage class.

Can I Still Use the Alpha Version?

Kubernetes 1.4 maintains backwards compatibility with the alpha version of the dynamic provisioning feature to allow for a smoother transition to the beta version. The alpha behavior is triggered by the existence of the alpha dynamic provisioning annotation (volume.alpha.kubernetes.io/storage-class). Keep in mind that if the beta annotation (volume.beta.kubernetes.io/storage-class) is present, it takes precedence and triggers the beta behavior. Support for the alpha version is deprecated and will be removed in a future release.

What's Next?

Dynamic Provisioning and Storage Classes will continue to evolve and be refined in future releases.
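To illustrate the defaulting behavior described above, a cluster admin could mark a class as default with the storageclass.beta.kubernetes.io/is-default-class annotation named in the text. The manifest below is a sketch, not taken from the original post; it assumes the "fast" GCE PD class from the earlier example:

```yaml
# Hypothetical manifest marking the "fast" class as the cluster default.
# PersistentVolumeClaims created without a storage-class annotation will
# then be provisioned from this class by the DefaultStorageClass
# admission controller.
kind: StorageClass
apiVersion: extensions/v1beta1
metadata:
  name: fast
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```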
Below are some areas under consideration for further development.

Standard Cloud Provisioners

For deployment of Kubernetes to cloud providers, we are considering automatically creating a provisioner for the cloud's native storage system. This means that a standard deployment on AWS would result in a StorageClass that provisions EBS volumes, and a standard deployment on Google Cloud would result in a StorageClass that provisions GCE PDs. It is also being debated whether these provisioners should be marked as default, which would make dynamic provisioning the default behavior (no annotation required).

Out-of-Tree Provisioners

There has been ongoing discussion about whether Kubernetes storage plugins should live "in-tree" or "out-of-tree". While the details of how to implement out-of-tree plugins are still up in the air, there is a proposal introducing a standardized way to implement out-of-tree dynamic provisioners.

How Do I Get Involved?

If you're interested in getting involved with the design and development of Kubernetes Storage, join the Kubernetes Storage Special Interest Group (SIG). We're rapidly growing and always welcome new contributors.

– Saad Ali, Software Engineer, Google

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
Quelle: kubernetes

The Internet Is Pissed Yahoo Built The US A Custom Tool For Email Spying

In what seems to be an unprecedented act, troubled internet giant Yahoo Inc built custom software for US intelligence to probe hundreds of millions of Yahoo users’ emails, according to Reuters.

Citing a few unnamed sources familiar with the matter, Reuters reported that the software could search any incoming email to a Yahoo account for specific sequences of characters sought by the US government. The software could also store the information for later retrieval by US spy operatives. It's still unclear whether this software searched only US citizens' email accounts, or if its scope was broader. Reuters notes that it is likely that the government demanded that other email providers comply with its spying directive as well.

Yahoo's compliance with government spying at this scale seems to be previously unheard of, especially because it built the customized software for the government's spying purposes. Earlier this year, Apple successfully fought a publicized battle with the FBI after the agency demanded that Apple develop software to break encryption on an iPhone owned by one of the San Bernardino shooters.

The ACLU responded in a prepared statement: “The order issued to Yahoo appears to be unprecedented and unconstitutional…It is deeply disappointing that Yahoo declined to challenge this sweeping surveillance order, because customers are counting on technology companies to stand up to novel spying demands in court.”

And after the Snowden leaks uncovered the PRISM scandal in 2013, tech companies, from Yahoo to Google to Apple, denied their involvement in the NSA's surveillance and emphasized how they had fought the government over orders for data from the Foreign Intelligence Surveillance Court. In 2007, Yahoo did wage a scrappy fight against a Foreign Intelligence Surveillance Act demand for the company to search specific email accounts without a warrant, but it was ultimately unsuccessful.

Cybersecurity and legal experts are not, to say the least, pleased with Yahoo's latest news.

Yahoo replied to BuzzFeed News' requests for comment only with the following statement: “Yahoo is a law abiding company, and complies with the laws of the United States.”

Reuters also reported that Yahoo's security team, which was not informed about the company's development of the spying software, discovered the program in May 2015, just weeks after its installation, and thought it was a hack. Yahoo's email engineers developed the program.

Yahoo's Chief Information Security Officer, Alex Stamos, left the company after discovering the compliance with US intelligence. According to Reuters, he advised Yahoo that hackers beyond simply US spies would be able to access the stored emails due to a programming flaw. Recently, news broke that Yahoo endured a hack of 500 million user accounts in 2014, which does not seem to be related to the installation of the email spying software.

The company also announced a new app, Newsroom, which it called “Reddit for the masses,” on the same day as the spying became public. Yahoo is in the midst of a $4.8 billion sale to Verizon.

Quelle: BuzzFeed

Ceph/RDO meetup in Barcelona at OpenStack Summit

If you’ll be in Barcelona later this month for OpenStack Summit, join us for an evening with RDO and Ceph.

Tuesday evening, October 25th, from 5 to 8pm (17:00 – 20:00) we’ll be at the Barcelona Princess, right across the road from the Summit venue. We’ll have drinks, light snacks, and presentations from both Ceph and RDO.

If you can't make it in person, we'll also be streaming the event on YouTube.

Topics we expect to be covered include (not necessarily in this order):

RDO release status (aarch64, repos, workflow)
RDO repos overview (CBS vs Trunk, and what goes where)
RDO and Ceph (maybe TripleO and Ceph?)
Quick look at new rpmfactory workflow with rdopkg
CI in RDO – what are we testing?
CERN – How to replace several petabytes of Ceph hardware without downtime
Ceph at SUSE
Ceph on ARM
3D Xpoint & 3D NAND with OpenStack and Ceph
Bioinformatics – OpenStack and Ceph used in large-scale cancer research projects

If you expect to be at the event, please consider signing up on Eventbrite so we have an idea of how many people to expect. Thanks!
Quelle: RDO

Powering geospatial analysis: public geo datasets now on Google Cloud

Posted by Peter Birch, Product Manager, Google Earth Engine

With dozens of public satellites in orbit and many more scheduled over the next decade, the size and complexity of geospatial imagery continues to grow. It has become increasingly difficult to manage this flood of data and use it to gain valuable insights. That’s why we’re excited to announce that we’re bringing two of the most important collections of public, cost-free satellite imagery to Google Cloud: Landsat and Sentinel-2.

The Landsat mission, developed under a joint program of the USGS and NASA, is the longest continuous space-based record of Earth's land in existence, dating back to 1972 with the Landsat 1 satellite. Landsat imagery sets the standard for Earth observation data due to the length of the mission and the rich data provided by its multispectral sensors. Landsat data has proven invaluable to agriculture, geology, forestry, regional planning, education, mapping, global change and disaster response. This collection includes the complete USGS archive of the Landsat 4, 5, 7 and 8 satellites, and is updated daily as new data arrives from Landsat 7 and 8. It contains a total of 4 million scenes and 1.3 petabytes of data covering 1984 to the present: more than 30 years of imagery of our Earth ready for immediate analysis.

Sentinel-2, part of the European Union's ambitious Copernicus Earth observation program, raised the bar for Earth observation data, with a Multi-Spectral Instrument (MSI) that produces images of the Earth with a resolution of up to 10 meters per pixel, far sharper than that of Landsat. Sentinel-2 data is especially useful for agriculture, forestry and other land management applications. For example, it can be used to study leaf area, chlorophyll and water content, to map forest cover and soils, and to monitor inland waterways and coastal areas. Images of natural disasters such as floods and volcanic eruptions can also be used for disaster mapping and humanitarian relief efforts. The collection currently contains 970,000 images and over 430 terabytes of data, updated daily.

Brisbane, Australia, as viewed by Sentinel-2

Here at Google, we have years of experience working with the Landsat and Sentinel-2 satellite imagery collections. Our Google Earth Engine product, a cloud-based platform for doing petapixel-scale analysis of geospatial data, was created to help make analyzing these datasets quick and easy. Earth Engine's vast catalog of data, with petabytes of public data, combined with an easy-to-use scripting interface and the power of Google infrastructure, has helped to revolutionize Earth observation. Now, by bringing the two most important datasets from Earth Engine into Google Cloud, we're also enabling customer workflows using Google Compute Engine, Google Cloud Machine Learning and any other Google Cloud services.

One customer that has taken advantage of the powerful combination of Google Cloud and these datasets is Descartes Labs. Descartes Labs is focused on combining machine learning and geospatial data to forecast global crop production. “For an early stage technology startup, satellite imagery can be impossibly expensive,” said Descartes Labs CEO Mark Johnson. “To make accurate machine learning models of major crops, we needed decades of satellite imagery from the entire globe. Thanks to Google Earth Engine hosting the entire Landsat archive publicly on Google Cloud, we can focus on algorithms instead of worrying about collecting petabytes of data. Earth observation will continue to improve with every new satellite launch and so will our ability to forecast global food supply. We’re excited that Google sees the potential in hosting open geospatial data on Google Cloud, since it will enable companies like ours to better understand the planet we live on.”

Humboldt, Iowa (Landsat 8, USGS)

Agricultural field edge boundaries and field segmentation from July 2016 of Humboldt, Iowa, generated using machine learning and Landsat data on Google Cloud.

Spaceknow is another company using Google Cloud to mine Landsat data for unique insights. Spaceknow brings transparency to the global economy by tracking global economic trends from space. Spaceknow’s Urban Growth Index analyzes massive amounts of multispectral imagery in China and elsewhere. Using a TensorFlow-based deep learning framework capable of predicting semantic labels for multi-channel satellite imagery, Spaceknow determines the percentage of land categorized as urban-type for a specified geographic region. Furthermore, its China Satellite Manufacturing Index uses proprietary algorithms to analyze Landsat 7 and 8 imagery of over 6,000 industrial facilities across China, measuring levels of Chinese manufacturing activity. Using 2.2 billion satellite observations, this index covers over 500,000 square kilometers, and it can be quickly updated when new images arrive from the satellites. According to Pavel Machalek, the CEO of Spaceknow: “Google Cloud provides us with the unique capability to develop, train and deploy neural networks at unprecedented scale. Our customers depend on the information we provide for critical, day-to-day decision making.”

Fuzhou, China 2000 (Landsat 7, USGS) | Fuzhou, China 2016 (Landsat 8, USGS)

With over a petabyte of the world’s leading public satellite imagery data available at your fingertips, you can avoid the cost of storing the data and the time and cost required to download these large datasets and focus on what matters most: building products and services for your customers and users. Whether you’re using Google Cloud’s leading machine learning and compute services or Earth Engine for simple and powerful analysis, we can help you turn pixels into knowledge to help your organization make better decisions.

Learn more about these new geo imagery datasets at http://cloud.google.com/storage/docs/public-datasets/ and about the full range of public datasets at http://cloud.google.com/public-datasets/.
Quelle: Google Cloud Platform

The best choice for ITAR and Defense Industrial Base customers

Here at Azure Government, we are committed to meeting the highest bars for security and compliance requirements, including those for ITAR and Defense Industrial Base customers.

Microsoft provides certain cloud services or service features that can support customers with ITAR obligations. While there is no compliance certification for the ITAR, Microsoft operates and has designed the in-scope Microsoft Azure Government services to be capable of supporting a customer’s ITAR obligations and compliance program. For more information on ITAR, check out the ITAR site on the Microsoft Trust Center. To learn more about the services we added to the ITAR product catalog, check out our post.

Azure Government also enables Defense Industrial Base companies to build systems with assurance that they will inherit controls and processes that meet NIST SP 800-171 and other DFARS requirements. We also provide a single platform that meets the most stringent controls associated with the current mix of applicable standards.

If you want to learn more about how Microsoft helps DIB contractors and is compliant with the Department of Defense requirements, check out this post for more information.

For all things security, privacy, transparency and compliance related check out the Microsoft Trust Center.

To stay up to date on all things Azure Government, be sure to subscribe to our RSS feed and to receive emails by clicking “Subscribe by Email!” on the Azure Government Blog. To experience the power of Azure Government for your organization, sign up for an Azure Government Trial.
Quelle: Azure