Note-able news: Evernote to use Google Cloud Platform

Posted by Brian Stevens, Vice President, Google Cloud Platform

Today, Evernote announced it’s moving to Google Cloud Platform to host its productivity service used by over 200 million people to store billions of notes and attachments. Consumers and businesses using Evernote — on the web or their device of choice — will soon benefit from the security, scalability and data processing power of Google’s public cloud infrastructure.

Moving to the public cloud was a natural progression for the company, as it looks to provide a seamless experience for its users and boost productivity with new features and services. Evernote initially built a private cloud infrastructure that serves users and data on any device, anywhere in the world. By moving its data center operations to Google’s cloud, Evernote can focus on its core competency: providing customers with the best experience for taking, organizing and archiving notes.

Evernote takes customer data protection seriously, so it’s no surprise that security was at the top of its list of selection criteria. With Google Cloud Platform, Evernote users will benefit from our world class security, while strengthening the company’s commitment to its own Three Laws of Data Protection.

Evernote evaluated multiple public cloud vendors and specifically chose Google Cloud Platform for our advanced data analytics and machine learning capabilities. By taking advantage of the advancements in machine learning such as voice recognition and translation, Evernote will continue to explore innovative new features that allow users to naturally capture their ideas at “the speed of thought.” You can learn more about Evernote’s plans and selection criteria here.

We here at Google Cloud Platform are excited to build on our partnership that began with the integration of Google Drive and Evernote. We welcome Evernote and look forward to the exciting journey ahead of us!
Source: Google Cloud Platform

Service Fabric on Linux support available this month

Over the past few years, it’s become increasingly clear that businesses are relying on cloud applications to fuel innovation and gain competitive advantage. Much of this pressure is falling on the shoulders of developers, who need to be able to rapidly create new, game-changing applications that have the potential to disrupt and transform entire industries. These same developers need to be able to release and update their applications more quickly so they can respond to customer feedback and have faster time to market than their business’s competitors.

A number of trends are emerging to make these possibilities a reality for developers and businesses alike – and one of these is microservice architectures that enable developers to create applications using multiple single-purpose, independently versioned services to provide a scalable way to build cloud-native applications and enable rapid innovation. Service Fabric is Microsoft’s microservices application platform that was released last year to help developers build and manage cloud-scale applications. Battle-hardened internally at Microsoft for almost a decade, Service Fabric has been powering highly scalable services like Cortana, Intune, Azure SQL Database, Azure DocumentDB, and Azure’s infrastructure.  We’ve seen tremendous response from our customers and great momentum since our recent GA at Build 2016, including BMW, CareOtter, Ilyriad, Bentley Systems and Assurant.
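As a toy sketch of that idea (this is not Service Fabric code, and the service names are made up), an application composed of single-purpose, independently versioned services might be modeled like this:

```python
# Toy illustration of a microservice-style registry: each service is a
# small, single-purpose handler that carries its own independent version,
# so one service can be upgraded without redeploying the others.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Service:
    name: str
    version: str          # versioned independently of other services
    handler: Callable[[dict], dict]

def make_registry() -> Dict[str, Service]:
    def pricing(req: dict) -> dict:
        return {"total": req["qty"] * req["unit_price"]}

    def inventory(req: dict) -> dict:
        return {"in_stock": req["qty"] <= 10}

    return {
        "pricing": Service("pricing", "2.1.0", pricing),
        "inventory": Service("inventory", "1.4.2", inventory),
    }

def dispatch(registry: Dict[str, Service], name: str, req: dict) -> dict:
    # Each request is routed to exactly one service; shipping
    # pricing 2.2.0 would not touch the inventory service.
    return registry[name].handler(req)

registry = make_registry()
print(dispatch(registry, "pricing", {"qty": 3, "unit_price": 5}))  # {'total': 15}
```

The point of the sketch is only the shape: narrow responsibilities and per-service versions, which is what lets a platform like Service Fabric roll services forward independently.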

Given its beginnings, Service Fabric supports Windows servers and .NET applications, but many enterprises today run heterogeneous workloads, including Windows and Linux servers, .NET and Java applications, and SQL and NoSQL databases. That’s why I am excited to announce today that the preview of Service Fabric for Linux will be publicly available at our Ignite conference on September 26. With today’s announcement, customers can now provision Service Fabric clusters in Azure using Linux as the host operating system and deploy Java applications to those clusters. Service Fabric on Linux will initially be available for Ubuntu, with support for RHEL coming soon.

In addition, with CLI (Command-Line Interface), Eclipse and Jenkins support, developers can use the tools they know to build and deploy on Service Fabric on Linux. Just as on Windows, developers can build and test their Service Fabric applications on Linux on a one-box setup, meaning you don’t need a cluster in Azure to build and test your Service Fabric app. Our vision is to enable developers to build Service Fabric applications on the OS of their choice and run them wherever they want. In the near future, we will release a Linux standalone installer to enable Service Fabric to be used outside of Azure for on-premises, hybrid and multi-cloud deployments. We also plan on open sourcing parts of the platform, beginning with Service Fabric’s programming models. This will allow developers to enhance the standard programming models and use them as starting points to create their own programming models and to support other languages.
 
We’re excited that with our ongoing enhancements of Service Fabric’s capabilities and reach, more businesses will be able to take advantage of our innovations to power their own applications. To learn more about how to get started with Service Fabric on Linux, check out our episode on Channel 9.
Source: Azure

New Dockercast Episode with Docker Captain, Nirmal Mehta

In case you missed it, we recently launched Dockercast, the official Docker podcast, including all the DockerCon 2016 sessions available as podcast episodes.

In this podcast I catch up with Nirmal Mehta at Booz Allen Hamilton.  Nirmal has been a big part of the Docker community and is also a Docker Captain.
Nirmal works with some large government organizations, and we discussed why these types of institutions seem to be early adopters of Docker. As most would answer, speed was an obvious driver; however, security was also an early driver. It turns out that, because Docker containers enforce tighter boundaries, some of these organizations felt the potential security benefits went beyond what virtualization offers. We also discuss what it is like to be a Docker Captain.

You can find the latest Dockercast episodes on the iTunes Store or via the SoundCloud RSS feed.
 


Source: https://blog.docker.com/feed/

How VMware and IBM offer choice with consistency

There are still questions about the strategic partnership between VMware and IBM announced back in February.
Many centered around two things: how much IBM would invest to ensure the partnership could scale and provide real value to both existing and new customers, and how the partnership would fit into IBM’s vision of cloud.
Fast forward six months later to last week’s VMworld event in Las Vegas and those questions now have solid answers.
At the VMworld general session, IBM Senior Vice President of Cloud Robert LeBlanc shared the stage with VMware CEO Pat Gelsinger to announce that IBM is the first company to be recognized as a certified partner in VMware Cloud Foundation, an on-demand software-defined data center (SDDC) platform for hybrid cloud deployments.
Concurrently, Forbes.com ran a story about the partnership and IBM’s overall investment. To date, IBM has more than four thousand service professionals training in the technology stack to support customers. In addition, the company has multiple beta clients making use of this new offering, with use cases ranging from disaster recovery to cloud bursting. To say that IBM has “invested” might be a bit of an understatement.
So what is Cloud Foundation?
Simply put, it is the automated, hyper-converged deployment of three central VMware offerings: vSphere for virtualization and management, vSAN for software-defined storage and NSX for software-defined networking.
Standing up a new environment, which once took weeks, now takes less than twelve hours thanks to automation developed jointly by VMware and IBM. Once initially provisioned, scaling becomes even easier, with the ability to add or remove capacity in under an hour. All aspects of the offering are provided as a service, from licensing all the way down to the hardware, satisfying all of the customer’s cloud requirements. With the additional capabilities of NSX, such as micro-segmentation and policy- and host-based controls, and the decreased dependency on enterprise storage hardware through vSAN, customers gain a bevy of benefits with the click of a button in their browser.
In a world with multiple public cloud providers all vying for supremacy in the market, customers are faced with tough decisions. The complexity of transforming workloads makes the journey to cloud difficult, as most enterprise consumers are still running millions of legacy workloads that don’t necessarily fit the born on cloud model.
However, there is a bright spot. Most enterprises are predominantly virtualized, with the largest share of those workloads running on VMware software. IBM can ease the stress of migration through this hybrid model. With more than 40 data centers worldwide, the build-out of Cloud Foundation on IBM Cloud makes it a perfect solution for customers to begin, advance and facilitate their cloud journey without having to reinvent the wheel.
As Jason McGee, CTO of IBM Cloud Platform, wrote, “Hybrid is not a transitional state; it’s a destination.” This partnership clearly reflects the principles of “choice with consistency.”
For more information on the VMware and IBM partnership please click here.
Source: Thoughts on Cloud

Prototyping kit gets your IoT app on Google Cloud Platform, fast

Posted by Preston Holmes, Head of IoT Solutions

The Internet of Things provides businesses with the opportunity to connect their IT infrastructure beyond the datacenter to an ever-increasing number of sensors and actuators that can convert analog information to digital data, and we believe Google Cloud Platform (GCP) is a great landing place for that valuable information. Whether it’s handling event ingest in Google Cloud Pub/Sub, processing the streams of data from multiple devices with Google Cloud Dataflow, storing time-series data in Google Cloud Bigtable, or asking questions across IoT and non-IoT data with Google BigQuery, GCP’s data and analytics products can help you manage that IoT data and turn it into something relevant.
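As an illustrative sketch, a device event destined for Cloud Pub/Sub ingest might be packaged like this. The project and topic names are hypothetical, and the actual publish call via the google-cloud-pubsub client is left commented out so the sketch runs without GCP credentials:

```python
# Sketch: packaging a sensor reading as a Pub/Sub-ready event.
# Pub/Sub message payloads are bytes; JSON keeps them easy to parse
# downstream in Dataflow or BigQuery.
import json
import time

def make_event(device_id: str, sensor: str, value: float) -> bytes:
    event = {
        "device_id": device_id,
        "sensor": sensor,
        "value": value,
        "ts": int(time.time()),
    }
    return json.dumps(event).encode("utf-8")

payload = make_event("beaglebone-01", "temperature_c", 22.5)

# With credentials in place, publishing would look roughly like:
# from google.cloud import pubsub_v1
# publisher = pubsub_v1.PublisherClient()
# topic_path = publisher.topic_path("my-project", "iot-events")  # hypothetical names
# publisher.publish(topic_path, data=payload)

print(json.loads(payload)["sensor"])  # temperature_c
```

Keeping the payload as plain JSON is a convenience for prototyping; a production pipeline might prefer a schema-enforced format instead.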

Just like software, it’s useful to prototype and validate your IoT project quickly. Unfortunately, not all businesses have a bench of electrical engineers and embedded software developers on staff. That’s why we’ve teamed up with Seeed Studio and Beagleboard.org to bring you the BeagleBone Green Wireless IoT Developer prototyping kit for GCP.

Features of the BeagleBone IoT prototyping kit include:
New improvements to the original BeagleBone Green, including built-in Wi-Fi and Bluetooth radios
A fully open hardware design
Built-in Grove connectors that allow for prototyping without the need for soldering or complex breadboard work
Built-in onboard flash that lets you treat SD cards as optional, removable storage
Built-in PRU real-time co-processors that are well suited for certain industrial protocols
Built-in analog-to-digital conversion, key for many IoT prototyping situations

With the BeagleBone Green Wireless IoT Developer prototyping kit, you’ll be able to get data from the world around you directly onto GCP within minutes. From there, you can use any of our client libraries on the board’s familiar Debian Linux operating system.
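For example, a reading from the board’s built-in analog-to-digital converter can be scaled to volts like this. The BeagleBone’s AM335x ADC is 12-bit with a 1.8 V reference; the sysfs path shown is typical for Debian images but may vary, so the read falls back to a fixed mid-scale value when run off-board:

```python
# Sketch: reading the BeagleBone ADC via the Linux IIO sysfs interface
# and converting the raw 12-bit value (0-4095) to volts.

def raw_to_volts(raw: int, bits: int = 12, vref: float = 1.8) -> float:
    # Full-scale raw reading corresponds to the 1.8 V reference.
    return vref * raw / ((1 << bits) - 1)

def read_adc(channel: int = 0) -> int:
    # Path is typical for BeagleBone Debian images; may differ per image.
    path = f"/sys/bus/iio/devices/iio:device0/in_voltage{channel}_raw"
    try:
        with open(path) as f:
            return int(f.read().strip())
    except OSError:
        return 2048  # off-board fallback: simulated mid-scale reading

volts = raw_to_volts(read_adc())
print(round(volts, 3))
```

A value like this, once converted, is exactly the kind of reading you would wrap in an event payload and send on to Cloud Pub/Sub.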

Learn more about the kit and demo! Don’t have the kit yet? Buy one here, or use your phone as a simulated device. Most importantly, let us know how it goes.

Source: Google Cloud Platform