4 cloud adoption challenges for the Asia-Pacific region

Business leaders must weigh a number of options in the process of choosing a cloud computing platform or service. Not only do they have to pick the service that will best fit their business needs, but they also have to find just the right mix of options to ensure security and accessibility, meet regulatory requirements, and …
The post 4 cloud adoption challenges for the Asia-Pacific region appeared first on Thoughts On Cloud.
Source: Thoughts on Cloud

OpenStack Swift mid-cycle hackathon summary

Last week more than 30 people from all over the world met at the Rackspace
office in San Antonio, TX for the Swift mid-cycle hackathon. All major companies
contributing to Swift sent people, including Fujitsu, HPE, IBM, Intel, NTT,
Rackspace, Red Hat, and Swiftstack. As always it was a packed week with a lot
of deep technical discussions around current and future changes within Swift.

There are always far more topics to discuss than time, so we collected topics first and then everyone voted. We came up with the following major discussions that are currently most interesting to our community:

Hummingbird replication
Crypto – what’s next
Partition power increase
High-latency media
Container sharding
Golang – how to get it accepted in master
Policy migration

There were a lot more topics, and I'd like to highlight a few of them.

H9D aka Hummingbird / Golang

This was a big topic – as expected. Rackspace has already shown that H9D significantly improves the performance of the object servers and of replication compared to the current Python implementation. There were also some investigations into whether the speed could be improved using PyPy and other optimizations; however, the major problem is that Python blocks processes on file I/O, regardless of whether async I/O is used. Sam wrote a very nice summary about this earlier [1].
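The blocking behavior is easy to see in standard Python: the kernel always reports regular files as "ready", so select/epoll-based frameworks like eventlet (which Swift uses) or asyncio cannot defer disk reads, and can only work around the problem by handing the read to a worker thread. A minimal sketch of that workaround (illustrative only, not Swift code):

```python
import asyncio
import os
import tempfile

async def read_file(path: str) -> bytes:
    """Read a file without stalling the event loop.

    Calling open()/read() directly in a coroutine would block the whole
    loop on disk I/O; run_in_executor confines the blocking call to a
    thread-pool worker instead.
    """
    loop = asyncio.get_running_loop()

    def blocking_read() -> bytes:
        with open(path, "rb") as f:
            return f.read()

    return await loop.run_in_executor(None, blocking_read)

def demo() -> bytes:
    # Write a small temp file, then read it back through the event loop.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"swift object data")
        path = f.name
    try:
        return asyncio.run(read_file(path))
    finally:
        os.unlink(path)
```

Go's runtime handles this transparently by parking goroutines and multiplexing them over OS threads, which is one reason the H9D object server scales better on disk-heavy workloads.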

NTT also benchmarked H9D and showed some impressive numbers as well. In short, throughput increased 5-10x depending on parameters like object size. It seems disks are no longer the bottleneck – the proxy CPU is the new bottleneck. That said, inode cache memory seems to be even more important, because with H9D one can issue many more disk requests.

Of course there were also discussions about the latest proposal to accept Golang within OpenStack, and those discussions will continue [2]. My personal view is that the H9D implementation has some major advantages, and hopefully (a refactored subset of) it will be accepted and merged to master.

Crypto retro & what’s next

Swift 2.9.0 was released this past week and includes the merged crypto branch [3]. Kudos to everyone involved, especially Janie and Alistair! This middleware makes it possible for operators to fully encrypt object data on disk.

We did a retrospective on the work done so far; this was the third time we used a feature branch and a final soft freeze to land a major change in Swift. There are pros and cons to this approach, but overall it worked pretty well again. It also made sense that reviewers stepped in late in the process, because this brought fresh perspectives to the whole work. Soft freezes also push more reviewers to contribute and to finally get the change merged.

Swiftstack benchmarked the crypto branch; as expected, throughput decreases somewhat with crypto enabled (especially with small objects), while proxy CPU usage increases. There were some discussions about improving the performance, and it seems the impact of checksumming is significant here.

The next steps to improve the crypto middleware are to work on external keymaster implementations (for example using Barbican) as well as on key rotation.
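For operators curious what enabling this looks like: the encryption middleware sits in the proxy pipeline together with a keymaster. A rough sketch of the relevant proxy-server.conf sections is below – option names and defaults should be checked against the Swift documentation for your release:

```ini
[pipeline:main]
# keymaster must come before encryption in the pipeline
pipeline = ... keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
# root secret from which per-object keys are derived
encryption_root_secret = changeme_base64_secret

[filter:encryption]
use = egg:swift#encryption
```

An external keymaster (e.g. one backed by Barbican) would replace the simple root-secret keymaster shown here, which is what the next-steps discussion was about.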

Partition power increase

Finally, there is a patch ready for review that will allow an operator to increase the partition power without downtime for end users [4].

I gave an overview of the implementation and also demoed how it works. Based on discussions during the week I spotted some minor edge cases, which have since been fixed, and I hope to get this merged before Barcelona. We even dreamed a bit about a future Swift with automatic partition power increases, where an operator needs to think about this much less than today.
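To see why a partition power increase is tractable at all, recall how the ring maps an object to a partition: it takes the top part_power bits of an MD5 hash. The simplified sketch below (the real ring also salts the path with the cluster's hash path prefix and suffix, omitted here) shows that raising the power by one splits every partition cleanly in two, so data can be relinked locally instead of reshuffled across the cluster:

```python
import hashlib
import struct

def partition(name: bytes, part_power: int) -> int:
    """Map an object path to a partition, Swift-ring style:
    the top part_power bits of the MD5 of the path."""
    digest = hashlib.md5(name).digest()
    # Interpret the first 4 bytes as a big-endian int, keep the top bits.
    return struct.unpack(">I", digest[:4])[0] >> (32 - part_power)

# Raising the power by one appends one more hash bit to the partition
# number: old partition p always becomes new partition 2p or 2p + 1.
old = partition(b"/AUTH_test/container/object", 10)
new = partition(b"/AUTH_test/container/object", 11)
```

Because `new >> 1 == old` for every object, each old partition directory splits into exactly two new ones, which is the property the no-downtime increase patch builds on.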

Various middlewares

There are some proposed middlewares that are important to their authors, and we discussed quite a few of them. These include:

High-latency media (aka archiving)
symlinks
notifications
versioning

The idea behind supporting high-latency media is to use cold storage (like tape, or other public cloud object storage with possible multi-hour latency) for less frequently accessed data, and especially to offer a low-cost long-term archival solution based on Swift [5]. This is somewhat challenging for the upstream community, because most contributors don't have access to large enterprise tape libraries for testing. In the end this middleware needs to be supported by the community, so a stand-alone repository outside of Swift itself might make the most sense (similar to the swift3 middleware [6]).

A proposal to implement true history-based versioning was made earlier, and some open questions were discussed. This should hopefully land soon, adding an improved alternative to today's stack-based versioning [7].

Sending out notifications on writes to Swift has been discussed before, and thankfully Zaqar now supports temporary signed URLs, solving some of the issues we faced earlier. I'll update my patch shortly [8]. There is also the option to use oslo.messaging. All in all, the whole idea will be to use a best-effort approach – it's simply not possible to guarantee that a notification has been delivered successfully without blocking requests.
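A best-effort notifier can be sketched as a bounded queue drained by a background worker: the request path only ever enqueues (or drops) an event and never waits for delivery. This is an illustrative sketch, not the actual patch; the `deliver` callable stands in for a hypothetical Zaqar or oslo.messaging client:

```python
import queue
import threading

class BestEffortNotifier:
    """Fire-and-forget notifications that never block the request path."""

    def __init__(self, deliver, maxsize=1000):
        self._q = queue.Queue(maxsize=maxsize)
        self._deliver = deliver
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def notify(self, event) -> bool:
        """Enqueue an event; drop it (returning False) if the queue is full."""
        try:
            self._q.put_nowait(event)
            return True
        except queue.Full:
            return False

    def _run(self):
        while True:
            event = self._q.get()
            try:
                self._deliver(event)
            except Exception:
                pass  # best effort: delivery failures are not retried
            self._q.task_done()
```

The key design point is in `notify`: on overload the event is dropped rather than making the client's PUT wait, which is exactly the trade-off a best-effort approach accepts.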

Container sharding

As of today it's a good idea to avoid billions of objects in a single container in Swift, because writes to such a container can become slow. Matt started working on container sharding some time ago [9], and has iterated once again after hitting new problems with the previous ideas. My impression is that the new approach is getting much closer to something that will eventually be merged, thanks to Matt's persistence on this topic.
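The general idea behind container sharding can be illustrated with a toy namespace split: each shard covers a contiguous range of object names, and an update or listing for a given name is routed to the first shard whose upper bound is at or beyond that name. This is only an illustrative sketch of the routing concept, not Matt's actual design:

```python
class ShardedContainer:
    """Toy model: a container's object namespace split into name ranges."""

    def __init__(self, upper_bounds):
        # Each entry is a shard's (inclusive) upper bound; "" marks the
        # final shard with no upper bound, e.g. ["g", "n", "t", ""].
        self._uppers = upper_bounds

    def shard_for(self, name: str) -> int:
        """Route an object name to the shard covering it."""
        for i, upper in enumerate(self._uppers):
            if upper == "" or name <= upper:
                return i
        raise ValueError("no shard covers %r" % name)
```

Because each shard's database then only holds a slice of the namespace, a write touches one small database instead of one enormous one – which is the performance problem sharding sets out to solve.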

Summary

A lot more (smaller) topics were discussed, but this should give you an overview of the current work going on in the Swift community and of the interesting new features that we'll hopefully see in Swift itself soon. Thanks to everyone who contributed and participated, and special thanks to Richard for organizing the hackathon – it was a great week and I'm looking forward to the next months!
Source: RDO

Recent RDO blogs, July 19, 2016

Here’s what RDO enthusiasts have been blogging about in the last week.

OpenStack 2016.1-1 release by Haïkel Guémar

The RDO Community is pleased to announce a new release of openstack-utils.

… read more at http://tm3.org/7x

Improving RDO packaging testing coverage by David Simard

DLRN builds packages and generates repositories in which these packages will be hosted.
It is the tool that is developed and used by the RDO community to provide the repositories on trunk.rdoproject.org. It continuously builds packages for every commit for projects packaged in RDO.

… read more at http://tm3.org/7y

TripleO deep dive session (TripleO Heat Templates) by Carlos Camacho

This is the second video from a series of “Deep Dive” sessions related to TripleO deployments.

… watch at http://tm3.org/7z

How to build new OpenStack packages by Chandan Kumar

Building new OpenStack packages for RDO is always tough. Let’s use DLRN to make our life simpler.

… read more at http://tm3.org/7-

OpenStack Swift mid-cycle hackathon summary by cschwede

Last week more than 30 people from all over the world met at the Rackspace office in San Antonio, TX for the Swift mid-cycle hackathon. All major companies contributing to Swift sent people, including Fujitsu, HPE, IBM, Intel, NTT, Rackspace, Red Hat, and Swiftstack. As always it was a packed week with a lot of deep technical discussions around current and future changes within Swift.

… read more at http://tm3.org/80
Source: RDO

Recent RDO blogs, August 1, 2016

Just a few blog posts from the RDO community this week:

ControllerExtraConfig and Tripleo Quickstart by Adam Young

Once I have the undercloud deployed, I want to be able to quickly deploy and redeploy overclouds. However, my last attempt to affect change on the overcloud did not modify the Keystone config file the way I intended. Once again, Steve Hardy helped me to understand what I was doing wrong.

… read more at http://tm3.org/85

OpenStack 6th Birthday, Lexington, KY by Rich Bowen

Yesterday I spent the day at the University of Kentucky at the OpenStack 6th Birthday Meetup. The day was arranged by Cody Bumgardner and Kathryn Wong from the UK College of Engineering.

… read more at http://tm3.org/86

TripleO deep dive session (Puppet modules) by Carlos Camacho

This is the fourth video from a series of “Deep Dive” sessions related to TripleO deployments.

… read more at http://tm3.org/87
Source: RDO

Recent RDO blogs, July 25, 2016

Here’s what RDO enthusiasts have been writing about over the past week:

TripleO deep dive session (Overcloud deployment debugging) by Carlos Camacho

This is the third video from a series of “Deep Dive” sessions related to TripleO deployments.

… read (and watch) more at http://tm3.org/81

How connection tracking in Open vSwitch helps OpenStack performance by Jiri Benc

By introducing a connection tracking feature in Open vSwitch, thanks to the latest Linux kernel, we greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved its networking performance. This feature will appear soon in Red Hat OpenStack Platform.

… read more at http://tm3.org/82

Introduction to Red Hat OpenStack Platform Director by Marcos Garcia

Those familiar with OpenStack already know that deployment has historically been a bit challenging. That’s mainly because deployment includes a lot more than just getting the software installed – it’s about architecting your platform to use existing infrastructure as well as planning for future scalability and flexibility. OpenStack is designed to be a massively scalable platform, with distributed components on a shared message bus and database backend. For most deployments, this distributed architecture consists of Controller nodes for cluster management, resource orchestration, and networking services, Compute nodes where the virtual machines (the workloads) are executed, and Storage nodes where persistent storage is managed.

… read more at http://tm3.org/83

Cinder Active-Active HA – Newton mid-cycle by Gorka Eguileor

The OpenStack Cinder mid-cycle sprint took place last week in Fort Collins, and on the first day we discussed the Active-Active HA effort that's been going on for a while now and the plans for the future. This is a summary of that session.

… read more at http://tm3.org/84
Source: RDO