Announcing Update 3.0 for StorSimple 8000 series

We are pleased to announce Update 3.0 for the StorSimple 8000 series. This release has the following new features and improvements (for the full list, see the release notes linked below):

  • Improved read and write performance to the cloud. This results in faster backups to the cloud, faster reads of tiered data from the cloud, and faster reads/restores of data after failover during a disaster recovery operation.
  • The standby controller on the 8000 series physical appliance can now perform space reclamation, ensuring that the active controller’s resources stay focused on serving data for active I/O.
  • Improvements to the monitoring charts available in the StorSimple Management Service.

The automated update is being released in a phased approach over the coming months and will be available for all customers to apply from the StorSimple Management Service in Azure. The update can also be applied manually using the hotfix method (see the link below).

Next steps:

StorSimple 8000 Series Update 3 release notes

Install Update 3 on your StorSimple device
Source: Azure

A No Brainer: Why Twitter Should Clone Snapchat Stories, Also

At the beginning of this year, Twitter product head Kevin Weil left the company for Instagram where he promptly cloned Snapchat Stories. (Hi, Instagram Stories). The brash, shameless move gave Instagram a way to spark more sharing of the carefree, fun videos it previously lacked. And, by taking the heat (and even some praise) for blatantly copying another app, Weil and company made it easier for others to follow along — including his former employer.

In fact, if executed properly, Stories could make even more sense within Twitter than it does within Instagram or Snapchat.

Skeptical? Consider the following:

  • Twitter wants to be the destination for live updates, and the Stories format is proving to be one of the best ways (if not the best way) to share video of what’s happening in the moment. Stories take little editing and are necessarily short form (like Twitter itself). They are also snackable, unlike many live-streaming videos that require concentrated attention.
  • If someone shot a compelling video on Twitter Stories, people could easily share it with the world via a tweet. And thanks to the Retweet, a key Twitter feature that makes tweets blow up, a video contained in a Twitter Story could go viral faster than a video on any other platform.
  • The chance of a Twitter Story spreading like wildfire would incentivize public figures and brands to invest the time and energy needed to make Stories great.
  • Since you follow people on Twitter because you're interested in what they're saying and doing, you could expect their Twitter Stories to be relevant to you in a way that transcends mere friendship.
  • Currently, when you share a video on Twitter, there's pressure for it to be really good. If Twitter added disappearing videos that don't appear in the timeline automatically, there would be less pressure involved, and hence more videos shared to Twitter. This is critical at a time when video is becoming the dominant content format on Twitter.
  • Having Stories reside in a top bar of Twitter would give people using the platform access to interesting videos every time they open the app, without having to scroll through the timeline.

When you find a worthy video within Stories, a 'Tweet' button could get it into the timeline quickly. This could give added life to videos within Twitter Stories in a way that Instagram simply can't provide. Or even Snapchat itself.

[Image credit: Alex Kantrowitz / via Taylor Lorenz]

And heck, Weil and Instagram's CEO Kevin Systrom have already largely made the argument for Twitter cloning Instagram's by-all-accounts successful Snapchat clone:

“I think that the Stories format is seeing broad adoption and I think will be adopted by a lot of folks,” Weil told BuzzFeed News when Instagram Stories launched. “It’s a new format, it’s a powerful format, it’s one that I think will be adopted across the industry.”

Asked if it should consider introducing Stories to its product, Twitter declined to comment.

Source: BuzzFeed

One cloud to rule them all — or is it?

So you’ve sold your organization on private cloud.  Wonderful!  But to get that ROI you’re looking for, you need to scale quickly and get paying customers from your organization to fund your growing cloud offerings.
It’s the typical Catch-22 situation when trying to do something on the scale of private cloud: You can’t afford to build it without paying customers, but you can’t get paying customers without a functional offering.
In the rush to break the cycle, you onboard more and more customers.  You want to reach critical mass and become the de-facto choice within your organization.  Maybe you even have some competition within your organization you have to edge out.  Before long you end up taking anyone with money.  
And who has money?  In the enterprise, more often than not it’s the bread and butter of the organization: the legacy workloads.
Promises are made.  Assurances are given.  Anything to onboard the customer.  “Sure, come as you are, you won’t have to rewrite your application; there will be no/minimal impact to your legacy workloads!”
But there’s a problem here. Legacy workloads (that is, those large, vertically scaled behemoths that don’t lend themselves to “cloud native” principles) present both a risk and an opportunity when growing your private cloud, depending on how they are handled.
(Note: Just because a workload has been virtualized does not mean it is “cloud-native”. In fact, many virtualized workloads, even those implemented using SOA (service-oriented architecture), will not be cloud native. We’ll talk more about classifying, categorizing and onboarding different workloads in a future article.)
“Legacy” cloud vs. “Agile” cloud
The term “legacy cloud” may seem like a bit of an oxymoron, but hear me out. For years, surveys asking people about their cloud use have had to include responses from people who considered vSphere a cloud, because the line between cloud and virtualization is largely irrelevant to most people.
Or at least it was, when there wasn’t anything else.
But now there’s a clear difference. Legacy cloud is geared toward these legacy workloads, while agile cloud is geared toward more “cloud native” workloads.
Let’s consider some example distinctions between a “Legacy Cloud” and an “Agile Cloud”. This table shows some of the design trade-offs between environments built to support legacy workloads versus those built without those restrictions:

| Legacy Cloud | Agile Cloud |
| --- | --- |
| No new features/updates (platform stability emphasis), or only very infrequent, limited and controlled updates | Regular/continuous deployment of the latest and greatest features (platform agility emphasis) |
| Live migration support (redundancy in the platform instead of in the app); DRS in the case of ESXi hypervisors managed by VMware | Highly scalable and performant local storage, plus support for other performance-enhancing features such as huge pages; no live migration, and none of its security and operational burdens |
| VRRP for Neutron L3 router redundancy | DVR for network performance & scalability; apps built to handle failure of individual nodes |
| LACP bonding for compute node network redundancy | SR-IOV for network performance; apps built to handle failure of individual nodes |
| Bring your own (specific) hardware | Shared, standard hardware (white boxes), defrayed with tenant chargeback policies |
| ESXi hypervisor or bare metal as a service (Ironic) to insulate the data plane, and/or separate controllers to insulate the control plane | OpenStack reference KVM deployment |

A common theme here is features that force you to choose between designing for performance and scalability (such as Neutron DVR) and designing for HA and resiliency (such as VRRP for Neutron L3 agents).
It’s one or the other, so introducing legacy workloads into your existing cloud can conflict with other objectives, such as increasing development velocity.
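To make that trade-off concrete, here is a minimal, hedged sketch using the openstacksdk Python client; the cloud name "mycloud" and the router names are assumptions made for illustration, and setting the HA/distributed flags normally requires admin-level policy.

```python
# Minimal sketch (assumptions: a cloud named "mycloud" in clouds.yaml, admin
# rights to set router flags, and the openstacksdk package installed).
import openstack

conn = openstack.connect(cloud="mycloud")

# "Legacy cloud" style: an HA (VRRP/keepalived) router puts redundancy
# in the platform itself.
ha_router = conn.network.create_router(
    name="legacy-ha-router",
    is_ha=True,
    is_distributed=False,
)

# "Agile cloud" style: a distributed (DVR) router favors performance and
# scale; the application is expected to tolerate node failures.
dvr_router = conn.network.create_router(
    name="agile-dvr-router",
    is_ha=False,
    is_distributed=True,
)

print(ha_router.id, dvr_router.id)
```

The point of the sketch is simply that the two flags pull in opposite directions: one router cannot be both, so the platform design has to pick a side per environment.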
So what do you do about it?
If you find yourself in this situation, you basically have three choices:

Onboard tenants with legacy workloads and force them to potentially rewrite their entire application stack for cloud
Onboard tenants with legacy workloads into the cloud and hope everything works
Decline to onboard tenants/applications that are not cloud-ready

None of these are great options.  You want workloads to run reliably, but you also want to make the onboarding process easy, without imposing large barriers to entry for tenants' applications.
Fortunately, there’s one more option: split your cloud infrastructure according to the types of workloads, and engineer a platform offering for each. Now, that doesn’t necessarily mean a separate cloud.
The main idea is to architect your cloud so that you can provide a legacy-type environment for legacy workloads without compromising your vision for cloud-aware applications. There are two ways to do that:

Set up a separate cloud with an entirely new control plane and its own compute capacity.  This option completely decouples the workloads and allows changes, updates and upgrades to be made in one environment without exposing the legacy workloads in the other to that risk.
Use compute nodes such as ESXi hypervisors or bare metal (e.g., Ironic) for legacy workloads. This option maintains a single OpenStack control plane while still helping to isolate workloads from OpenStack upgrades, disruptions, and maintenance activities in your cloud.  For example, ESXi networking is separate from Neutron, and bare metal is your ticket out of being the bad guy for rebooting hypervisors to apply kernel security updates. (A sketch of how a single control plane can segment capacity this way appears below.)

Keep in mind that these aren’t mutually exclusive options; it is possible to do both.  
Of course, each option comes with its own downsides as well: an additional control plane involves additional overhead (to build and operate), and running a mixed-hypervisor environment has its own set of engineering challenges, complications, and limitations.  Both options also add overhead when it comes to repurposing hardware.
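As a hedged illustration of the single-control-plane option, an admin-side sketch with openstacksdk might look like the following; the aggregate names, availability zone names and hypervisor hostnames are invented for the example.

```python
# Admin-side sketch (assumptions: openstacksdk, admin credentials under
# "mycloud" in clouds.yaml; "esxi-01" and "kvm-01" are placeholder
# hostnames, not real hosts).
import openstack

conn = openstack.connect(cloud="mycloud")

# One aggregate/availability zone for legacy-friendly capacity
# (e.g. ESXi- or Ironic-backed compute), another for the agile KVM pool.
legacy = conn.compute.create_aggregate(
    name="legacy-pool", availability_zone="legacy-az"
)
agile = conn.compute.create_aggregate(
    name="agile-pool", availability_zone="agile-az"
)

# Map compute hosts into the right pool (with ESXi, the host added is the
# nova-compute service fronting the cluster, not the hypervisor itself).
conn.compute.add_host_to_aggregate(legacy, "esxi-01")
conn.compute.add_host_to_aggregate(agile, "kvm-01")
```

The same API then lets you grow or shrink either pool over time, which matters later when the legacy side is meant to wither away.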
There’s no instant transition
Many organizations get caught up in the “One Cloud To Rule Them All” mentality, trying to make everything the same and work with a single architecture to achieve the needed economies of scale, but ultimately the final decision should be made according to your situation.
It’s important to remember that no matter what you do, you will have to deal with a transition period, which means you need to provide a viable path for your legacy tenants/apps to gradually make the switch.  But first, assess your situation:

If your workloads are all of the same type, then there’s not a strong case to offer separate platforms out of the gate.  Or, if you’re just getting started with cloud in your organization, it may be premature to do so; you may not yet have the required scale, or you may be happy with onboarding only those applications which are cloud ready.
When you have different types of workloads with different needs (for example, Telco/NFV vs. Enterprise/IT vs. Big Data/IoT workloads), you may want to think about different availability zones inside the same cloud, so the specific nuances of each type can be addressed inside its own zone while you maintain a single cloud from a configuration, lifecycle management and service assurance perspective, including having similar hardware. (Having similar hardware makes it easier to keep spares on hand.) A placement sketch for this approach appears after this list.
If you find yourself in a situation where you want to innovate with your cloud platform but still need to deal with legacy workloads that have conflicting requirements, then workload segmentation is highly advisable.  In this case, you’ll probably want to break from the “One Cloud” mentality in favor of the flexibility of multiple clouds.  If you try to satisfy both your “innovation” mindset and your legacy workload holders on one cloud, you’ll likely disappoint both.

After making this choice, you may then plan your transition path accordingly.
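Here is the tenant-side counterpart of the availability-zone approach mentioned above, again as a hedged openstacksdk sketch; the zone names match the earlier sketch, and the image, flavor and network IDs are placeholders.

```python
# Tenant-side placement sketch (assumptions: the "legacy-az"/"agile-az"
# zones from the earlier sketch exist; IMAGE_ID, FLAVOR_ID and NETWORK_ID
# are placeholders for real resource IDs).
import openstack

conn = openstack.connect(cloud="mycloud")

# A legacy workload lands on the capacity engineered for it...
legacy_vm = conn.compute.create_server(
    name="erp-db-01",
    image_id="IMAGE_ID",
    flavor_id="FLAVOR_ID",
    availability_zone="legacy-az",
    networks=[{"uuid": "NETWORK_ID"}],
)

# ...while cloud-native services target the agile pool.
agile_vm = conn.compute.create_server(
    name="api-worker-01",
    image_id="IMAGE_ID",
    flavor_id="FLAVOR_ID",
    availability_zone="agile-az",
    networks=[{"uuid": "NETWORK_ID"}],
)

# Wait until both are ACTIVE before handing them over to their owners.
conn.compute.wait_for_server(legacy_vm)
conn.compute.wait_for_server(agile_vm)
```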
Moving forward
Even if you do create a separate legacy cloud, you probably don’t want to maintain it in perpetuity.  Think about your transition strategy; a basic and effective carrot-and-stick approach is to limit new features and cloud-native functionality to your agile cloud, and to bill/charge back at higher rates in your legacy cloud (rates which are, at any rate, justified by the costs incurred to provide and support this option).
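As a trivial back-of-the-envelope illustration of that pricing "stick", the sketch below applies an invented premium multiplier to legacy usage; the rates are placeholders, and real numbers would come from your metering and billing pipeline (e.g., Ceilometer/CloudKitty data).

```python
# Chargeback sketch (assumptions: the hourly base rate and the 1.5x legacy
# premium are made-up numbers for illustration only).
BASE_RATE_PER_VCPU_HOUR = 0.02   # illustrative base rate
LEGACY_PREMIUM = 1.5             # legacy cloud billed at a higher rate

def monthly_charge(vcpus: int, hours: float, legacy: bool) -> float:
    """Return the chargeback amount for one tenant workload."""
    rate = BASE_RATE_PER_VCPU_HOUR * (LEGACY_PREMIUM if legacy else 1.0)
    return round(vcpus * hours * rate, 2)

# Same footprint, different platform: the price difference is the incentive.
print(monthly_charge(vcpus=8, hours=720, legacy=True))   # 172.8
print(monthly_charge(vcpus=8, hours=720, legacy=False))  # 115.2
```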
Whatever you ultimately decide, the most important thing is to make sure you’ve planned it out appropriately, rather than just going with the flow, so to speak. If you need to, contact a vendor such as Mirantis; they can help you do your planning and get to production as quickly as possible.
Source: Mirantis

The retailer cloud journey: An incremental climb

As a Cloud Advisor who supports several retail clients, I find it impressive to see a company mature in its cloud adoption and realize true value in its business transformation.
One such story is of a major U.S. retailer’s journey to cloud. It’s a story that’s still being written.  But how did this story begin?
It started with the retailer’s vision for rapid retail innovation with lower IT costs. The company was looking to support a more customer-focused infrastructure, enabled for faster design and larger scale experimentation of its digital initiatives.
The retailer required a solution that struck an optimal balance between performance, customer service, cost management and self-service automation, and it converged on a continuously available architecture on IBM Cloud.
Here’s a breakdown of the details:
Continuous availability
The basic technical concept that enables continuous availability is the capacity to run a service from multiple, geo-dispersed “clouds” in parallel. Each cloud is capable of running the business service independently of its peers, yet replicates state and persistent data to its peer clouds. This requires uniform, reliable and performant network access to replicate state and persistent data.
IBM Cloud delivers on both promises, providing global, high-performance infrastructure capable of supporting applications at “internet scale.” Further, SoftLayer’s standardized data center and pod design provides modular hardware configurations. Global, unmetered access to the private network backbone facilitates data replication across these geo-dispersed clouds.
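As a purely conceptual sketch of that fan-out (the endpoints, payload and helper functions below are hypothetical, and a production design would lean on the data store's own replication rather than ad-hoc HTTP calls), the idea is simply: commit locally, then push state to every peer cloud so each can serve independently.

```python
# Conceptual sketch only: hypothetical peer endpoints and helpers.
import concurrent.futures
import requests

PEER_CLOUDS = [
    "https://us-south.example.com/api/orders",   # hypothetical peer endpoint
    "https://us-east.example.com/api/orders",    # hypothetical peer endpoint
]

def replicate(order: dict) -> None:
    """Fan the write out to every peer cloud in parallel."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(requests.post, url, json=order, timeout=5)
                   for url in PEER_CLOUDS]
        for f in futures:
            f.result()  # surface replication failures to the caller

def handle_order(order: dict, local_store: list) -> None:
    local_store.append(order)   # commit locally first
    replicate(order)            # then replicate state to geo-dispersed peers
```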
Self-service automation
With the architecture proven in all seasons of the retail business cycle, the retailer’s focus is now on self-service automation and on learning from its initial deployments. SoftLayer was built with automation in mind. Everything in the platform, including the provisioning and de-provisioning of services, logging, billing and alerts, is automated and controlled by the Infrastructure Management System (IMS). Every function in IMS is available via APIs, supporting the company’s goals for full-stack automation.
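A small, hedged sketch with the SoftLayer Python client hints at what that API surface looks like; the credentials are placeholders, and only a read-only inventory call is shown.

```python
# Sketch using the SoftLayer Python client (pip install SoftLayer).
# YOUR_USER / YOUR_API_KEY are placeholders for real credentials.
import SoftLayer

client = SoftLayer.create_client_from_env(
    username="YOUR_USER", api_key="YOUR_API_KEY"
)

# List the virtual guests the account owns via the same API that IMS
# exposes for provisioning, billing and monitoring.
for vs in client.call("Account", "getVirtualGuests"):
    print(vs["hostname"], vs.get("primaryIpAddress"))
```

Ordering, cancellation, OS reloads and billing queries follow the same call pattern, which is what makes full-stack automation on top of the platform practical.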
With the cost efficiencies derived from cloud behind it, the retailer continues to mature toward an environment for innovation and business value.
Teams defined their principles based on deep-rooted cultural values. Keeping the focus on customer centricity translates into ensuring that each design element and each realized feature benefits the customer. Having seen the benefits derived through automation, the application development teams focused on shortening the development cycle, an approach that has served the retailer’s product development teams well for years. It allows the application development teams to test, experiment and learn quickly.
Interested? Want to know how to get started? This process is well aligned with the IBM Garage method, combining industry best practices for design thinking, lean startup, agile development, DevOps, and cloud to build and deliver innovative solutions.
This major retailer’s cloud transformation story is still being written, and IBM is proud to be its partner along the way. We continue to partner actively on this journey, bringing clarity to how IBM and industry cloud practices and technologies can help achieve business objectives.
An Elite Cloud Advisor, Jyoti Chawla specializes in developing enterprise transformation strategies and architectures for global enterprises moving to cloud and emerging technologies. Follow Jyoti on Twitter and LinkedIn.
Source: Thoughts on Cloud